I'm going through the Audio Conversion Services chapter of Learning Core Audio, and I was surprised by this example in the sample code:
while (1)
{
    // wrap the destination buffer in an AudioBufferList
    AudioBufferList convertedData;
    convertedData.mNumberBuffers = 1;
    convertedData.mBuffers[0].mNumberChannels = mySettings->outputFormat.mChannelsPerFrame;
    convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
    convertedData.mBuffers[0].mData = outputBuffer;

    UInt32 frameCount = packetsPerBuffer;

    // read from the ExtAudioFile
    CheckResult(ExtAudioFileRead(mySettings->inputFile,
                                 &frameCount,
                                 &convertedData),
                "Couldn't read from input file");

    if (frameCount == 0) {
        printf("done reading from file");
        return;
    }

    // write the converted data to the output file
    CheckResult(AudioFileWritePackets(mySettings->outputFile,
                                      FALSE,
                                      frameCount,
                                      NULL,
                                      outputFilePacketPosition / mySettings->outputFormat.mBytesPerPacket,
                                      &frameCount,
                                      convertedData.mBuffers[0].mData),
                "Couldn't write packets to file");

    // advance the output file write location
    outputFilePacketPosition += (frameCount * mySettings->outputFormat.mBytesPerPacket);
}
Note how frameCount is initialized to packetsPerBuffer, and packetsPerBuffer is defined here:
UInt32 outputBufferSize = 32 * 1024; // 32 KB is a good starting point
UInt32 sizePerPacket = mySettings->outputFormat.mBytesPerPacket;
UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;
The third and fifth parameters of AudioFileWritePackets are documented as:

inNumBytes — The number of bytes of audio data being written.
ioNumPackets — On input, a pointer to the number of packets to write. On output, a pointer to the number of packets actually written.
Yet in the code above, frameCount is passed for both of these parameters. How can that be? I know that for PCM data 1 frame = 1 packet:
// define the output format. AudioConverter requires that one of the data formats be LPCM
audioConverterSettings.outputFormat.mSampleRate = 44100.0;
audioConverterSettings.outputFormat.mFormatID = kAudioFormatLinearPCM;
audioConverterSettings.outputFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioConverterSettings.outputFormat.mBytesPerPacket = 4;
audioConverterSettings.outputFormat.mFramesPerPacket = 1;
audioConverterSettings.outputFormat.mBytesPerFrame = 4;
audioConverterSettings.outputFormat.mChannelsPerFrame = 2;
audioConverterSettings.outputFormat.mBitsPerChannel = 16;
But that same LPCM format also explicitly declares 4 bytes per packet (= 4 bytes per frame), so a packet count and a byte count are not the same number. What is going on here? (The same question applies to the other example in the same chapter, which uses AudioConverterFillComplexBuffer instead of ExtAudioFileRead and works in packets rather than frames, but it is the same issue.)
Answer (score: 2)
I think you're right: according to the definition in the AudioFile.h header, AudioFileWritePackets should take the number of bytes of audio data being written as its third parameter, and in that Learning Core Audio example the frameCount variable holds a packet count, not a byte count.

I tried the examples and got exactly the same output whether I passed (frameCount * 4), 0, or even -1 as the third argument to AudioFileWritePackets. So it seems to me that the function does not work exactly as documented in the header (the third parameter is apparently ignored), and the book's authors did not notice this discrepancy either. I could be wrong, though.