Starting with the iOS 12.4 betas, calling appendSampleBuffer on an AVAssetWriterInput logs the following error:
CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia/...:4153
We did not see this error on earlier releases, and we do not see it on the iOS 13 betas either. Has anyone else run into this, and can you share any information that would help us resolve it?
More details
Our app records video and audio using two AVAssetWriterInput objects: one for the video input (appending pixel buffers) and one for the audio input, appending audio buffers created with CMSampleBufferCreate. (See the code below.)
Because our audio data is non-interleaved, we convert it to an interleaved format after the buffer is created and then pass it to appendSampleBuffer.
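The fields _interleavedASBD, _interleavedABL and _cmFormat used in the code below are not shown in the post; the following is only a minimal sketch of how they might be set up, assuming 2-channel Float32 LPCM at the session sample rate. The helper name and maxFrames parameter are hypothetical - this is not the author's actual code:
#import <AudioToolbox/AudioToolbox.h>
#import <CoreMedia/CoreMedia.h>

// Hypothetical helper; maxFrames would be the largest render slice we expect.
- (void)prepareInterleavedFormatWithSampleRate:(double)sampleRate maxFrames:(UInt32)maxFrames
{
    // Interleaved stereo Float32 LPCM description.
    memset(&_interleavedASBD, 0, sizeof(_interleavedASBD));
    _interleavedASBD.mSampleRate       = sampleRate;
    _interleavedASBD.mFormatID         = kAudioFormatLinearPCM;
    _interleavedASBD.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    _interleavedASBD.mChannelsPerFrame = 2;
    _interleavedASBD.mBitsPerChannel   = 32;
    _interleavedASBD.mBytesPerFrame    = 2 * sizeof(float);   // both channels in one frame
    _interleavedASBD.mFramesPerPacket  = 1;
    _interleavedASBD.mBytesPerPacket   = _interleavedASBD.mBytesPerFrame;

    // One interleaved scratch buffer, large enough for the biggest render slice.
    _interleavedABL.mNumberBuffers = 1;
    _interleavedABL.mBuffers[0].mNumberChannels = 2;
    _interleavedABL.mBuffers[0].mDataByteSize   = _interleavedASBD.mBytesPerFrame * maxFrames;
    _interleavedABL.mBuffers[0].mData           = calloc(maxFrames, _interleavedASBD.mBytesPerFrame);

    // Format description attached to the sample buffers (passed as _cmFormat below);
    // it describes the interleaved data that actually ends up in each buffer.
    CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                   &_interleavedASBD,
                                   0, NULL,   // no channel layout
                                   0, NULL,   // no magic cookie
                                   NULL,      // no extensions
                                   &_cmFormat);
}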
Relevant code
// Creating the audio buffer:
CMSampleBufferRef buff = NULL;
CMSampleTimingInfo timing = {
    CMTimeMake(1, _asbdFormat.mSampleRate),   // duration of one sample (frame)
    currentAudioTime,                         // presentation time
    kCMTimeInvalid };                         // decode time (not used)
OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                       NULL,                            // dataBuffer (attached later)
                                       false,                           // dataReady
                                       NULL,                            // makeDataReadyCallback
                                       NULL,                            // makeDataReadyRefcon
                                       _cmFormat,                       // format description
                                       (CMItemCount)(*inNumberFrames),  // numSamples
                                       1,                               // numSampleTimingEntries
                                       &timing,
                                       0,                               // numSampleSizeEntries
                                       NULL,                            // sampleSizeArray
                                       &buff);
// checking for error... (none returned)
// Converting from non-interleaved to interleaved.
float zero = 0.f;
// Clear the interleaved buffer (numFrames frames * 2 channels).
vDSP_vclr(_interleavedABL.mBuffers[0].mData, 1, numFrames * 2);
// Channel L: copy into every even sample slot.
vDSP_vsadd(ioData->mBuffers[0].mData, 1, &zero, _interleavedABL.mBuffers[0].mData, 2, numFrames);
// Channel R: copy into every odd sample slot (R is in the second non-interleaved buffer).
vDSP_vsadd(ioData->mBuffers[1].mData, 1, &zero, (float*)(_interleavedABL.mBuffers[0].mData) + 1, 2, numFrames);
_interleavedABL.mBuffers[0].mDataByteSize = _interleavedASBD.mBytesPerFrame * numFrames;
status = CMSampleBufferSetDataBufferFromAudioBufferList(buff,
                                                        kCFAllocatorDefault,  // block buffer structure allocator
                                                        kCFAllocatorDefault,  // block buffer memory allocator
                                                        0,                    // flags
                                                        &_interleavedABL);
// checking for error... (none returned)
if (_assetWriterAudioInput.readyForMoreMediaData) {
    BOOL success = [_assetWriterAudioInput appendSampleBuffer:audioBuffer]; // THIS PRODUCES THE ERROR.
    // success comes back YES, but the error above is still logged - on the iOS 12.4 betas (not on 12.3 or earlier).
}
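For context, ioData, inNumberFrames and currentAudioTime above look like values coming out of an Audio Unit render callback. That part is not shown in the post, so the following is only a guess at how the timestamp might be derived; the callback name and the currentAudioTime/sampleRate properties are hypothetical:
static OSStatus RecordingRenderCallback(void                       *inRefCon,
                                        AudioUnitRenderActionFlags *ioActionFlags,
                                        const AudioTimeStamp       *inTimeStamp,
                                        UInt32                      inBusNumber,
                                        UInt32                      inNumberFrames,
                                        AudioBufferList            *ioData)
{
    MyClassName *recorder = (__bridge MyClassName *)inRefCon;
    // Sample-accurate presentation time for this render slice, expressed in the
    // stream's sample rate (what currentAudioTime is assumed to hold above).
    recorder.currentAudioTime = CMTimeMake((int64_t)inTimeStamp->mSampleTime,
                                           (int32_t)recorder.sampleRate);
    // -writeAudioFrames:audioBuffers: is the method visible in the call stack below.
    [recorder writeAudioFrames:&inNumberFrames audioBuffers:ioData];
    return noErr;
}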
Earlier, _assetWriterAudioInput is created as follows:
-(BOOL) initializeAudioWriting
{
    BOOL success = YES;
    NSDictionary *audioCompressionSettings = [[self class] audioSettingsForRecording]; // settings dictionary, see below.
    if ([_assetWriter canApplyOutputSettings:audioCompressionSettings forMediaType:AVMediaTypeAudio]) {
        _assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioCompressionSettings];
        _assetWriterAudioInput.expectsMediaDataInRealTime = YES;
        if ([_assetWriter canAddInput:_assetWriterAudioInput]) {
            [_assetWriter addInput:_assetWriterAudioInput];
        }
        else {
            // return error
        }
    }
    else {
        // return error
    }
    return success;
}
audioCompressionSettings is defined as:
+ (NSDictionary*)audioSettingsForRecording
{
    AVAudioSession *sharedAudioSession = [AVAudioSession sharedInstance];
    double preferredHardwareSampleRate;
    if ([sharedAudioSession respondsToSelector:@selector(sampleRate)])
    {
        preferredHardwareSampleRate = [sharedAudioSession sampleRate];
    }
    else
    {
        preferredHardwareSampleRate = [[AVAudioSession sharedInstance] currentHardwareSampleRate];
    }
    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
    return @{
        AVFormatIDKey: @(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey: @2,
        AVSampleRateKey: @(preferredHardwareSampleRate),
        AVChannelLayoutKey: [NSData dataWithBytes:&acl length:sizeof(acl)],
        AVEncoderBitRateKey: @160000
    };
}
appendSampleBuffer logs the following error and call stack (relevant part):
CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia/...:4153
0 CoreMedia 0x00000001aff75194 CMSampleBufferGetSampleSize + 268 [0x1aff34000 + 266644]
1 My App 0x0000000103212dfc -[MyClassName writeAudioFrames:audioBuffers:] + 1788 [0x102aec000 + 7499260] ...
Any help would be greatly appreciated.
Edit - adding the following information:
We pass 0 and NULL for the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate - according to the documentation, that is what must be passed when creating a buffer of non-interleaved data (although that part of the documentation is a bit confusing to me).
We tried passing 1 together with a pointer to a size_t value for those CMSampleBufferCreate parameters, but it did not help: it logged the following error:
figSampleBufferCheckDataSize signalled err=-12731 (kFigSampleBufferError_RequiredParameterMissing) (bbuf vs. sbuf data size mismatch)
It is also not clear to us what that value should be (how do we know the size of each sample), or whether this is the right fix at all.
Answer 0 (score: 0)
I think we have the answer:
Passing the numSampleSizeEntries and sampleSizeArray parameters to CMSampleBufferCreate as shown below seems to fix the problem (full verification is still pending).
As far as I understand it, the reason is that we end up appending an interleaved buffer, so it needs to carry the sample sizes (at least as of 12.4).
// _asbdFormat is the AudioStreamBasicDescription.
size_t sampleSize = _asbdFormat.mBytesPerFrame;
OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                       NULL,
                                       false,
                                       NULL,
                                       NULL,
                                       _cmFormat,
                                       (CMItemCount)(*inNumberFrames),
                                       1,
                                       &timing,
                                       1,            // numSampleSizeEntries
                                       &sampleSize,  // sampleSizeArray
                                       &buff);
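As a sanity check (our addition, not part of the answer), the per-sample size and the attached data length can be compared after CMSampleBufferSetDataBufferFromAudioBufferList, using existing CoreMedia getters:
// After attaching the interleaved AudioBufferList to buff:
size_t perSampleSize = CMSampleBufferGetSampleSize(buff, 0);    // should equal sampleSize above
size_t totalSize     = CMSampleBufferGetTotalSampleSize(buff);  // perSampleSize * numFrames
size_t dataLength    = CMBlockBufferGetDataLength(CMSampleBufferGetDataBuffer(buff));
// iOS 12.4 appears to require that these agree before appendSampleBuffer: is called.
NSAssert(totalSize == dataLength, @"sample sizes do not match the attached data length");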
Answer 1 (score: 0)
This error means that the data length parameters passed to the CMBlockBufferCreate... and CMSampleBufferCreate... functions do not match.
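To illustrate (a minimal sketch, not from the answer; all variable names here are assumptions): when a block buffer is created explicitly, the length given to CMBlockBufferCreateWithMemoryBlock has to equal numSamples * sampleSize given to CMSampleBufferCreate:
size_t bytesPerFrame = interleavedASBD.mBytesPerFrame;   // e.g. 8 for stereo Float32
size_t dataLength    = numFrames * bytesPerFrame;

CMBlockBufferRef blockBuffer = NULL;
OSStatus err = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                  NULL,              // let Core Media allocate the memory
                                                  dataLength,        // capacity of the block
                                                  kCFAllocatorDefault,
                                                  NULL,              // no custom block source
                                                  0,                 // offset to data
                                                  dataLength,        // length of data in use
                                                  0,
                                                  &blockBuffer);

size_t sampleSize = bytesPerFrame;                        // one audio frame per sample
CMSampleBufferRef sampleBuffer = NULL;
if (err == kCMBlockBufferNoErr) {
    err = CMSampleBufferCreate(kCFAllocatorDefault,
                               blockBuffer,
                               true,                      // data is ready
                               NULL, NULL,
                               formatDescription,
                               (CMItemCount)numFrames,    // numFrames * sampleSize == dataLength
                               1, &timing,
                               1, &sampleSize,
                               &sampleBuffer);
}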