I want to extract the channel audio from a raw LPCM file, i.e. extract the left and right channels of a stereo LPCM file. The LPCM is 16-bit depth, interleaved, 2 channels, little-endian. From what I gather, the byte order is {LeftChannel, RightChannel, LeftChannel, RightChannel, ...}, and since it is 16-bit depth, there will be 2 bytes of sample for each channel, right?

So my question is: if I want to extract the left channel, would I take the bytes at addresses 0, 2, 4, 6, ... n*2, while the right channel would be 1, 3, 5, ... (n*2 + 1)?

Also, after extracting the audio channel, should I set the format of the extracted channel to 16-bit depth, 1 channel?

Thanks in advance

This is the code I currently use to extract PCM audio from the AssetReader. This code works fine writing a music file without extracting its channels, so the problem might be caused by the format or something else...
NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                            //  [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                nil];
NSError *assetError = nil;
AVAssetReader *assetReader = [[AVAssetReader assetReaderWithAsset:songAsset
                                                            error:&assetError]
                              retain];
if (assetError) {
    NSLog(@"error: %@", assetError);
    return;
}
AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderAudioMixOutput
                                           assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
                                                                       audioSettings:outputSettings]
                                          retain];
if (![assetReader canAddOutput:assetReaderOutput]) {
    NSLog(@"can't add reader output... die!");
    return;
}
[assetReader addOutput:assetReaderOutput];
NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [dirs objectAtIndex:0];

// CODE TO SPLIT STEREO
[self setupAudioWithFormatMono:kAudioFormatLinearPCM];
NSString *splitExportPath = [[documentsDirectoryPath stringByAppendingPathComponent:@"monoleft.caf"] retain];
if ([[NSFileManager defaultManager] fileExistsAtPath:splitExportPath]) {
    [[NSFileManager defaultManager] removeItemAtPath:splitExportPath error:nil];
}

AudioFileID mRecordFile;
NSURL *splitExportURL = [NSURL fileURLWithPath:splitExportPath];
OSStatus status = AudioFileCreateWithURL((CFURLRef)splitExportURL, kAudioFileCAFType, &_streamFormat,
                                         kAudioFileFlags_EraseFile, &mRecordFile);
NSLog(@"status os %d", status);
[assetReader startReading];

CMSampleBufferRef sampBuffer = [assetReaderOutput copyNextSampleBuffer];
UInt32 countsamp = CMSampleBufferGetNumSamples(sampBuffer);
NSLog(@"number of samples %d", countsamp);

SInt64 countByteBuf = 0;
SInt64 countPacketBuf = 0;
UInt32 numBytesIO = 0;
UInt32 numPacketsIO = 0;
NSMutableData *bufferMono = [NSMutableData new];
while (sampBuffer) {
    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer;
    // Get direct access to the decoded PCM bytes of this sample buffer
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        //frames = audioBuffer.mData;
        NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
        NSLog(@"the buffer size is %d", audioBuffer.mDataByteSize);
        // Append the mono (left) channel to the buffer data: copy the first
        // 2 bytes of every 4-byte interleaved stereo frame
        for (int i = 0; i < audioBuffer.mDataByteSize; i = i + 4) {
            [bufferMono appendBytes:(audioBuffer.mData + i) length:2];
        }
        // the number of bytes in the mutable data containing the mono audio
        numBytesIO = [bufferMono length];
        numPacketsIO = numBytesIO / 2; // one packet per 2-byte mono sample
        NSLog(@"numpacketsIO %d", numPacketsIO);
        status = AudioFileWritePackets(mRecordFile, NO, numBytesIO, &_packetFormat, countPacketBuf, &numPacketsIO, audioBuffer.mData);
        NSLog(@"status for writebyte %d, packets written %d", status, numPacketsIO);
        if (numPacketsIO != (numBytesIO / 2)) {
            NSLog(@"Something wrong");
            assert(0);
        }
        countPacketBuf = countPacketBuf + numPacketsIO;
        [bufferMono setLength:0];
    }
    sampBuffer = [assetReaderOutput copyNextSampleBuffer];
    countsamp = CMSampleBufferGetNumSamples(sampBuffer);
    NSLog(@"number of samples %d", countsamp);
}
AudioFileClose(mRecordFile);
[assetReader cancelReading];
[self performSelectorOnMainThread:@selector(updateCompletedSizeLabel:)
                       withObject:nil
                    waitUntilDone:NO];
The output format for Audio File Services is as follows:
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 1;
_streamFormat.mBytesPerPacket = 2;
_streamFormat.mBytesPerFrame = 2; // (_streamFormat.mBitsPerChannel / 8) * _streamFormat.mChannelsPerFrame
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100.0;

_packetFormat.mStartOffset = 0;
_packetFormat.mVariableFramesInPacket = 0;
_packetFormat.mDataByteSize = 2;
Answer 0 (score: 4)
That sounds almost right: you have 16-bit depth, which means each sample takes 2 bytes. The left-channel data will therefore be in bytes {0,1}, {4,5}, {8,9}, and so on. Interleaved means the samples are interleaved, not the bytes. Other than that, I would just try it out and see whether your code has any problems.

"Also after extracting the audio channel, should I set the format of the extracted channel to 16-bit depth, 1 channel?"

After extraction, only one of the two channels is left, so yes, that is correct.
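As a minimal sketch of that per-sample (rather than per-byte) de-interleaving, assuming the data sits in a plain C buffer of interleaved 16-bit little-endian samples (the function and buffer names here are illustrative, not from the question):

#include <stddef.h>
#include <stdint.h>

// Sketch: split an interleaved 16-bit stereo buffer into two mono buffers
// by copying whole 2-byte samples, not individual bytes.
// `stereo` holds frameCount interleaved {L, R} frames; `left` and `right`
// must each have room for frameCount samples.
static void SplitStereo(const int16_t *stereo, int16_t *left, int16_t *right, size_t frameCount)
{
    for (size_t f = 0; f < frameCount; f++) {
        left[f]  = stereo[2 * f];      // left sample  = bytes {4f,   4f+1}
        right[f] = stereo[2 * f + 1];  // right sample = bytes {4f+2, 4f+3}
    }
}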
Answer 1 (score: 1)
I had a similar error where the audio sounded "slow". The reason is that you specify mChannelsPerFrame as 1 while you have dual-channel sound. Set it to 2 and it should speed up the playback. Also, let us know whether the output "sounds" correct once you do this... :)
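For reference, a sketch of the stereo variant of the format this answer suggests (an assumption extrapolated from the mono setup shown in the question; the byte counts follow from 16 bits per channel times 2 channels):

_streamFormat.mFormatID         = kAudioFormatLinearPCM;
_streamFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel   = 16;
_streamFormat.mChannelsPerFrame = 2;        // stereo, as suggested above
_streamFormat.mBytesPerFrame    = 4;        // (16 / 8) * 2 channels
_streamFormat.mBytesPerPacket   = 4;        // 1 frame per packet
_streamFormat.mFramesPerPacket  = 1;
_streamFormat.mSampleRate       = 44100.0;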
Answer 2 (score: 0)
I am trying to split stereo audio into two mono files (split stereo audio to mono streams on iOS). I have been using your code, but I can't seem to get it to work. What are the contents of your setupAudioWithFormatMono method?
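For anyone following along: the poster never showed that method, but judging from the format fields quoted in the question, a plausible minimal reconstruction (purely an assumption, taking _streamFormat and _packetFormat to be ivars of type AudioStreamBasicDescription and AudioStreamPacketDescription) might look like this:

- (void)setupAudioWithFormatMono:(UInt32)audioFormat
{
    // Hypothetical reconstruction based on the values listed in the question;
    // not the original implementation.
    memset(&_streamFormat, 0, sizeof(_streamFormat));
    _streamFormat.mFormatID         = audioFormat;  // kAudioFormatLinearPCM
    _streamFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    _streamFormat.mBitsPerChannel   = 16;
    _streamFormat.mChannelsPerFrame = 1;            // mono
    _streamFormat.mBytesPerFrame    = 2;            // (16 / 8) * 1 channel
    _streamFormat.mBytesPerPacket   = 2;
    _streamFormat.mFramesPerPacket  = 1;
    _streamFormat.mSampleRate       = 44100.0;

    _packetFormat.mStartOffset            = 0;
    _packetFormat.mVariableFramesInPacket = 0;
    _packetFormat.mDataByteSize           = 2;      // one 16-bit mono sample
}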