Drawing a recording audio waveform in iOS


Bar waveform diagram

Linear waveform diagram

Configure an audio session

AVAudioSession needs to be configured before we can draw the waveform, and an array needs to be set up to store the volume data.

Related attributes

recorderSetting is used to set the recording quality and other related parameters.

timer and updateFrequency are used to refresh the waveform view periodically.

soundMeters and soundMeterCount hold the volume meter array and its capacity.

recordTime tracks the elapsed recording time, and can be used to check whether the recording duration meets the requirements.

/// Recorder
private var recorder: AVAudioRecorder!

/// Recorder settings
private let recorderSetting = [AVSampleRateKey: NSNumber(value: Float(44100.0)),            // sample rate
                               AVFormatIDKey: NSNumber(value: Int32(kAudioFormatMPEG4AAC)), // encoding format
                               AVNumberOfChannelsKey: NSNumber(value: 1),                   // number of channels
                               AVEncoderAudioQualityKey: NSNumber(value: Int32(AVAudioQuality.medium.rawValue))] // sound quality

/// Recording timer
private var timer: Timer?

/// Waveform update interval
private let updateFrequency = 0.05

/// Sound data array
private var soundMeters: [Float]!

/// Sound data array capacity
private let soundMeterCount = 10

/// Recording time
private var recordTime = 0.00
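The article never shows how the timer itself is scheduled. As a minimal sketch (the MeterDriver type and its start/stop methods are my own names, not from the original code; only the 0.05 s updateFrequency comes from the article), the block-based Timer API can drive the metering callback:

```swift
import Foundation

// Hypothetical metering driver; in the real view controller the tick
// closure would call recorder.updateMeters() and addSoundMeter(item:).
final class MeterDriver {
    private var timer: Timer?
    private let updateFrequency = 0.05
    private(set) var recordTime = 0.00

    func start(onTick: @escaping () -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: updateFrequency, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.recordTime += self.updateFrequency // mirrors recordTime += updateFrequency below
            onTick()
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }
}
```

Invalidating the timer in stop() also releases the run loop's strong reference to it, which is why the recorder code later only needs to call stop once when recording ends.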

Configuring the audio session

configAVAudioSession is used to configure the AVAudioSession. AVAudioSessionCategoryRecord means the session only records; if you also need playback, set it to AVAudioSessionCategoryPlayAndRecord or AVAudioSessionCategoryPlayback. The difference between the two is that the former can both record and play, while the latter can keep playing in the background (that is, audio continues even when the device is muted).

configRecord is used to configure the whole AVAudioRecorder, including acquiring the microphone permission, setting the delegate, enabling the volume metering, and so on.

directoryURL returns the location where the recording file is saved.
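The original leaves the body of directoryURL elided, so here is one plausible sketch of it; the file name scheme and the .m4a extension are my assumptions (chosen to match the kAudioFormatMPEG4AAC setting above), not the author's actual implementation:

```swift
import Foundation

// Hypothetical implementation of directoryURL(); the article elides
// this body, so the naming scheme here is an assumption.
func directoryURL() -> URL? {
    guard let documents = FileManager.default.urls(for: .documentDirectory,
                                                   in: .userDomainMask).first else {
        return nil
    }
    // Timestamped name so each recording gets a unique file
    let fileName = "record-\(Int(Date().timeIntervalSince1970)).m4a"
    return documents.appendingPathComponent(fileName)
}
```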

private func configAVAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord, with: .defaultToSpeaker)
    } catch {
        print("session config failed")
    }
}

private func configRecord() {
    AVAudioSession.sharedInstance().requestRecordPermission { (allowed) in
        if !allowed {
            return
        }
    }
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord, with: .defaultToSpeaker)
    } catch {
        print("session config failed")
    }
    do {
        self.recorder = try AVAudioRecorder(url: self.directoryURL()!, settings: self.recorderSetting)
        self.recorder.delegate = self
        self.recorder.prepareToRecord()
        self.recorder.isMeteringEnabled = true
    } catch {
        print(error.localizedDescription)
    }
    do {
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("session active failed")
    }
}

private func directoryURL() -> URL? {
    // Do something ...
    // Return the sound file URL
}

Recording audio data

After recording starts, we use the timer configured earlier to continuously fetch the average power and save it into the array.

updateMeters is called by the timer and keeps saving the volume data recorded by the recorder into the soundMeters array.

addSoundMeter is used to add the data.

@objc private func updateMeters() {
    recorder.updateMeters()
    recordTime += updateFrequency
    addSoundMeter(item: recorder.averagePower(forChannel: 0))
}

private func addSoundMeter(item: Float) {
    if soundMeters.count < soundMeterCount {
        soundMeters.append(item)
    } else {
        for (index, _) in soundMeters.enumerated() {
            if index < soundMeterCount - 1 {
                soundMeters[index] = soundMeters[index + 1]
            }
        }
        // Insert the new data
        soundMeters[soundMeterCount - 1] = item
        NotificationCenter.default.post(name: NSNotification.Name.init("updateMeters"), object: soundMeters)
    }
}
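The sliding-window behavior of addSoundMeter can be exercised on its own. This standalone sketch uses a free function instead of the instance method and drops the notification post, so the shifting logic is easy to test; everything else follows the code above:

```swift
import Foundation

// Standalone version of addSoundMeter's window logic; the
// NotificationCenter post is omitted for testability.
let soundMeterCount = 10

func addSoundMeter(_ item: Float, to soundMeters: inout [Float]) {
    if soundMeters.count < soundMeterCount {
        soundMeters.append(item)
    } else {
        // Shift every value one slot to the left ...
        for index in 0..<(soundMeterCount - 1) {
            soundMeters[index] = soundMeters[index + 1]
        }
        // ... and insert the newest value at the end
        soundMeters[soundMeterCount - 1] = item
    }
}
```

Once the array is full, the oldest sample falls off the front and the newest is appended at the back, which is exactly what makes the waveform scroll.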

Start drawing a waveform diagram

Now that we have all the necessary data, we can start drawing the waveform. At this point we move to the MCVolumeView.swift file. In the previous step we posted a notification named updateMeters to tell MCVolumeView to update the waveform.

override init(frame: CGRect) {
    super.init(frame: frame)
    backgroundColor = UIColor.clear
    contentMode = .redraw // Redraw mode, because the volume view needs to be redrawn many times
    NotificationCenter.default.addObserver(self, selector: #selector(updateView(notice:)), name: NSNotification.Name.init("updateMeters"), object: nil)
}

@objc private func updateView(notice: Notification) {
    soundMeters = notice.object as! [Float]
    setNeedsDisplay()
}

When setNeedsDisplay is called, the draw(_:) method will be invoked, and that is where we draw the waveform.

noVoice and maxVolume are used to bound the displayed sound range.

The waveform is drawn with CGContext here; of course it can also be drawn with UIBezierPath.

override func draw(_ rect: CGRect) {
    if soundMeters != nil && soundMeters.count > 0 {
        let context = UIGraphicsGetCurrentContext()
        context?.setLineCap(.round)
        context?.setLineJoin(.round)
        context?.setStrokeColor(UIColor.white.cgColor)
        let noVoice = -46.0   // Anything below -46.0 is treated as silence
        let maxVolume = 55.0  // The maximum sound level is 55.0
        // Draw the volume ...
        context?.strokePath()
    }
}

Drawing the bar waveform diagram

Calculate the height of each bar from maxVolume and noVoice, then move the context's current point and draw.

Also note that the y-axis in CGContext is flipped, so the heights have to be computed against the inverted axis.

case .bar:
    context?.setLineWidth(3)
    for (index, item) in soundMeters.enumerated() {
        let barHeight = maxVolume - (Double(item) - noVoice) // Compute the height to display for the current meter value
        context?.move(to: CGPoint(x: index * 6 + 3, y: 40))
        context?.addLine(to: CGPoint(x: index * 6 + 3, y: Int(barHeight)))
    }
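Plugging sample decibel values into the formula above makes the inverted axis concrete. A small worked example, using the noVoice = -46.0 and maxVolume = 55.0 values from the article:

```swift
// Worked example of the bar-height formula from the article.
let noVoice = -46.0
let maxVolume = 55.0

func barHeight(for item: Float) -> Double {
    return maxVolume - (Double(item) - noVoice)
}

// A silent sample (-46 dB) yields y = 55.0, the lowest point on the
// flipped axis; a loud sample (0 dB) yields y = 9.0, near the top,
// so louder sounds produce taller bars rising from the y = 40 baseline.
```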

Drawing the linear waveform diagram

The height is calculated the same way as for the bars, but the drawing order differs: the bar waveform moves the point first and then draws the line, while the linear waveform draws the line first and then moves the point.

case .line:
    context?.setLineWidth(1.5)
    for (index, item) in soundMeters.enumerated() {
        let position = maxVolume - (Double(item) - noVoice) // Compute the height of the corresponding line segment
        context?.addLine(to: CGPoint(x: Double(index * 6 + 3), y: position))
        context?.move(to: CGPoint(x: Double(index * 6 + 3), y: position))
    }
}

Further improving our waveform diagram

In many cases a recording UI needs to display not only the waveform but also the current recording time and progress, so we can add a progress bar to the waveform view. For that we turn to the MCProgressView.swift file.

We draw with UIBezierPath and CAShapeLayer.

maskPath is the mask for the whole progress path: because our recording HUD is not a plain rectangle, we need the mask to clip the progress path.

progressPath is the progress path; the progress is drawn from left to right.

Animation is the drawing animation of the progress path.

private func configAnimate() {
    let maskPath = UIBezierPath(roundedRect: CGRect.init(x: 0, y: 0, width: frame.width, height: frame.height), cornerRadius: HUDCornerRadius)
    let maskLayer = CAShapeLayer()

    maskLayer.backgroundColor = UIColor.clear.cgColor

    maskLayer.path = maskPath.cgPath

    maskLayer.frame = bounds

    // Progress path

    /*
     The center of the path is the center of the HUD, the line width equals the height of the HUD, and it is drawn from left to right.
     */

    let progressPath = CGMutablePath()

    progressPath.move(to: CGPoint(x: 0, y: frame.height / 2))

    progressPath.addLine(to: CGPoint(x: frame.width, y: frame.height / 2))

    progressLayer = CAShapeLayer()

    progressLayer.frame = bounds

    progressLayer.fillColor = UIColor.clear.cgColor // Layer fill color

    progressLayer.strokeColor = UIColor(red: 0.29, green: 0.29, blue: 0.29, alpha: 0.90).cgColor // Layer stroke color

    progressLayer.lineCap = kCALineCapButt

    progressLayer.lineWidth = HUDHeight

    progressLayer.path = progressPath

    progressLayer.mask = maskLayer

    animation = CABasicAnimation(keyPath: "strokeEnd")

    animation.duration = 60 // Maximum recording duration

    animation.timingFunction = CAMediaTimingFunction(name: kCAMediaTimingFunctionLinear) // Progress at a constant speed

    animation.fillMode = kCAFillModeForwards

    animation.fromValue = 0.0

    animation.toValue = 1.0

    animation.autoreverses = false

    animation.repeatCount = 1
}
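Since the animation's duration is the 60-second maximum recording time and strokeEnd runs from 0 to 1, the fraction of the bar that is filled at any moment is simply recordTime divided by that duration. A tiny sketch of this mapping (the progressFraction name is my own, not from the article):

```swift
// Maps the current recording time to the strokeEnd fraction used by
// the CABasicAnimation above; maxDuration mirrors animation.duration.
let maxDuration = 60.0

func progressFraction(at recordTime: Double) -> Double {
    return min(max(recordTime / maxDuration, 0.0), 1.0) // clamp to [0, 1]
}
```

Clamping keeps the fraction valid even if the timer fires once more after the maximum duration has been reached.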

Summary

The above are some of my experiences and opinions from drawing recording waveforms. In the demo I also added Gaussian blur and a shadow to the recording HUD to give it more texture, so I will skip over that here. Even so, I think this recording HUD still has some shortcomings: first, it is fairly tightly coupled to the view controller; second, the linear waveform does not render as well as I would like. If there is a better way, I hope you will share it with me.