I'm using an AVCaptureSession to capture video and audio input, and an AVAssetWriter to encode H.264 video. If I don't write the audio, the video is encoded as expected. But if I write the audio, I get a corrupted video.

If I inspect the audio CMSampleBuffer being supplied to the AVAssetWriter, it shows the following:
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMAudioFormatDescription 0x17410ba30 [0x1b3a70bb8]> {
    mediaType:'soun'
    mediaSubType:'lpcm'
    mediaSpecific: {
        ASBD: {
            mSampleRate: 44100.000000
            mFormatID: 'lpcm'
            mFormatFlags: 0xc
            mBytesPerPacket: 2
            mFramesPerPacket: 1
            mBytesPerFrame: 2
            mChannelsPerFrame: 1
            mBitsPerChannel: 16 }
        cookie: {(null)}
        ACL: {(null)}
        FormatList Array: {(null)}
    }
    extensions: {(null)}
}
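For reference, a minimal sketch of how this dump can be inspected from the delegate callback (the helper name is mine):

import CoreMedia

func logAudioFormat(of sampleBuffer: CMSampleBuffer) {
    // print(sampleBuffer) yields the full description above; the ASBD fields
    // can also be read directly from the format description.
    if let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
        let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc)?.pointee {
        print("rate: \(asbd.mSampleRate), channels: \(asbd.mChannelsPerFrame), bits: \(asbd.mBitsPerChannel)")
    }
}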
Since it is supplying lpcm audio, I configured the AVAssetWriterInput with these sound settings (I tried both one and two channels):
var channelLayout = AudioChannelLayout()
memset(&channelLayout, 0, MemoryLayout<AudioChannelLayout>.size)
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono

let audioOutputSettings: [String: Any] = [AVFormatIDKey as String: UInt(kAudioFormatLinearPCM),
                                          AVNumberOfChannelsKey as String: 1,
                                          AVSampleRateKey as String: 44100.0,
                                          AVLinearPCMIsBigEndianKey as String: false,
                                          AVLinearPCMIsFloatKey as String: false,
                                          AVLinearPCMBitDepthKey as String: 16,
                                          AVLinearPCMIsNonInterleaved as String: false,
                                          AVChannelLayoutKey: NSData(bytes: &channelLayout, length: MemoryLayout<AudioChannelLayout>.size)]

self.assetWriterAudioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioOutputSettings)
self.assetWriter.add(self.assetWriterAudioInput)
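The writer itself isn't shown above; for reference, a minimal sketch of how it might be set up (the output URL and the 1280x720 H.264 settings are assumptions, not taken from the original post):

// Hypothetical writer setup; outputURL and the video settings are assumptions.
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("capture.mov")
self.assetWriter = try! AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeQuickTimeMovie)
let videoOutputSettings: [String: Any] = [AVVideoCodecKey: AVVideoCodecH264,
                                          AVVideoWidthKey: 1280,
                                          AVVideoHeightKey: 720]
self.assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoOutputSettings)
// Live capture sources are normally flagged as real-time so the writer
// doesn't throttle isReadyForMoreMediaData.
self.assetWriterVideoInput.expectsMediaDataInRealTime = true
self.assetWriterAudioInput.expectsMediaDataInRealTime = true
self.assetWriter.add(self.assetWriterVideoInput)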
When I use the lpcm settings above, I can't open the video with any application. I've tried kAudioFormatMPEG4AAC and kAudioFormatAppleLossless instead, and I still get a corrupted video, but one I can watch with QuickTime Player 8 (not QuickTime Player 7). It's confused about the video's duration, though, and no sound is played.
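For reference, a sketch of what the kAudioFormatMPEG4AAC variant might look like (the bit rate is illustrative, not from the original post):

let aacOutputSettings: [String: Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
                                        AVNumberOfChannelsKey: 1,
                                        AVSampleRateKey: 44100.0,
                                        AVEncoderBitRateKey: 64000] // illustrative bit rate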
After the recording completes, I call:
func endRecording(_ completionHandler: @escaping () -> ()) {
    isRecording = false
    assetWriterVideoInput.markAsFinished()
    assetWriterAudioInput.markAsFinished()
    assetWriter.finishWriting(completionHandler: completionHandler)
}
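A hypothetical call site (presentVideo(_:) is a stand-in, not from the original post):

endRecording {
    // finishWriting completes on an arbitrary queue; hop to main before touching UI.
    DispatchQueue.main.async {
        self.presentVideo(self.assetWriter.outputURL) // presentVideo(_:) is hypothetical
    }
}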
Here is how the AVCaptureSession is configured:
func setupCapture() {
    captureSession = AVCaptureSession()
    if (captureSession == nil) {
        fatalError("ERROR: Couldn't create a capture session")
    }
    captureSession?.beginConfiguration()
    captureSession?.sessionPreset = AVCaptureSessionPreset1280x720

    let frontDevices = AVCaptureDevice.devices().filter { ($0 as AnyObject).hasMediaType(AVMediaTypeVideo) && ($0 as AnyObject).position == AVCaptureDevicePosition.front }

    if let captureDevice = frontDevices.first as? AVCaptureDevice {
        do {
            let videoDeviceInput: AVCaptureDeviceInput
            do {
                videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            }
            catch {
                fatalError("Could not create AVCaptureDeviceInput instance with error: \(error).")
            }
            guard (captureSession?.canAddInput(videoDeviceInput))! else {
                fatalError()
            }
            captureSession?.addInput(videoDeviceInput)
        }
    }

    do {
        let audioDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
        let audioDeviceInput: AVCaptureDeviceInput
        do {
            audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice)
        }
        catch {
            fatalError("Could not create AVCaptureDeviceInput instance with error: \(error).")
        }
        guard (captureSession?.canAddInput(audioDeviceInput))! else {
            fatalError()
        }
        captureSession?.addInput(audioDeviceInput)
    }

    do {
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        dataOutput.alwaysDiscardsLateVideoFrames = true
        let queue = DispatchQueue(label: "com.3DTOPO.videosamplequeue")
        dataOutput.setSampleBufferDelegate(self, queue: queue)
        guard (captureSession?.canAddOutput(dataOutput))! else {
            fatalError()
        }
        captureSession?.addOutput(dataOutput)
        videoConnection = dataOutput.connection(withMediaType: AVMediaTypeVideo)
    }

    do {
        let audioDataOutput = AVCaptureAudioDataOutput()
        let queue = DispatchQueue(label: "com.3DTOPO.audiosamplequeue")
        audioDataOutput.setSampleBufferDelegate(self, queue: queue)
        guard (captureSession?.canAddOutput(audioDataOutput))! else {
            fatalError()
        }
        captureSession?.addOutput(audioDataOutput)
        audioConnection = audioDataOutput.connection(withMediaType: AVMediaTypeAudio)
    }

    captureSession?.commitConfiguration()

    // this will trigger capture on its own queue
    captureSession?.startRunning()
}
The AVCaptureVideoDataOutput delegate method:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // func captureOutput(captureOutput: AVCaptureOutput, sampleBuffer: CMSampleBuffer, connection: AVCaptureConnection) {
    var error: CVReturn

    if (connection == audioConnection) {
        delegate?.audioSampleUpdated(sampleBuffer: sampleBuffer)
        return
    }

    // ... Write video buffer ...//
}
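For context, a rough sketch of the elided video branch, reconstructed from the timestamp handling described in the answer below (assetWriterPixelBufferInput and recordingStartTime are hypothetical names):

// Hypothetical reconstruction: the original code mapped frames onto a timeline
// starting at zero by subtracting the recording start time from the capture clock.
if assetWriterVideoInput.isReadyForMoreMediaData,
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
    let frameTime = CMTimeMakeWithSeconds(CACurrentMediaTime() - recordingStartTime, 240)
    if !assetWriterPixelBufferInput.append(pixelBuffer, withPresentationTime: frameTime) {
        print("Unable to write video frame")
    }
}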
The audio branch of the delegate method above calls:
func audioSampleUpdated(sampleBuffer: CMSampleBuffer) {
    if (isRecording) {
        while !assetWriterAudioInput.isReadyForMoreMediaData {}
        if (!assetWriterAudioInput.append(sampleBuffer)) {
            print("Unable to write to audio input")
        }
    }
}
If I disable the assetWriterAudioInput.append() call above, the video isn't corrupted, but then of course I have no audio encoded. How can I get both video and audio encoding to work?
Answer (score: 1):
I figured it out. I was setting the assetWriter.startSession source time to 0, and then subtracting the start time from the current CACurrentMediaTime() when writing the pixel data.

I changed the assetWriter.startSession source time to CACurrentMediaTime(), and I no longer subtract the current time when writing a video frame.
The old start session code:

assetWriter.startWriting()
assetWriter.startSession(atSourceTime: kCMTimeZero)

The new code that works:

assetWriter.startWriting()
// The exact CMTime construction here is a reconstruction; the point is that
// the session now starts at CACurrentMediaTime() rather than at zero.
assetWriter.startSession(atSourceTime: CMTimeMakeWithSeconds(CACurrentMediaTime(), 240))
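The matching frame write then uses the capture clock directly, with nothing subtracted; a minimal sketch (the pixel buffer adaptor name is hypothetical):

// Hypothetical sketch: frames are stamped with the raw capture-clock time,
// which now lines up with the session's source time.
let frameTime = CMTimeMakeWithSeconds(CACurrentMediaTime(), 240)
if !assetWriterPixelBufferInput.append(pixelBuffer, withPresentationTime: frameTime) {
    print("Unable to write video frame")
}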