AVAssetReader seeking

Time: 2011-05-21 16:56:24

Tags: iphone audio

I need to seek into an audio file and pull out chunks, and I am trying to use AVAssetReader for this. The problem I see is that if I read the same stretch of audio starting from different offsets, the averages I compute over each chunk come out different.

For example, if I read the audio from 0.1s to 0.5s, the chunks I get back are different from the ones I receive when I read from 0.2s to 0.5s.

Here is a code sample that demonstrates it:

#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
#import <MediaPlayer/MediaPlayer.h>

+ (void) test
{
    NSURL* path = [[NSBundle mainBundle] URLForResource:@"music" withExtension:@"mp3"];

    [self test:path sample:1 showChunks:5];
    [self test:path sample:2 showChunks:4];
    [self test:path sample:3 showChunks:3];
}

+(void) test:(NSURL*) url sample:(NSInteger) sample showChunks:(NSInteger) chunkCount
{
#define CHUNK 800
#define SAMPLE_RATE 8000
    AVURLAsset* asset = [AVURLAsset URLAssetWithURL:url options:nil];
    NSError *assetError = nil;
    AVAssetReader* assetReader = [AVAssetReader assetReaderWithAsset:asset error:&assetError];

    // Seek: start reading at sample*CHUNK samples into the track.
    CMTime startTime = CMTimeMake(sample*CHUNK, SAMPLE_RATE);
    CMTimeShow(startTime);

    CMTimeRange timeRange = CMTimeRangeMake(startTime, kCMTimePositiveInfinity);
    assetReader.timeRange = timeRange;

    NSDictionary* dict = nil;
    dict = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInteger:SAMPLE_RATE], AVSampleRateKey, [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey, nil];


    AVAssetReaderAudioMixOutput* assetReaderOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:asset.tracks audioSettings: dict];
    if (! [assetReader canAddOutput: assetReaderOutput]) {
        NSLog (@"error: Cannot add output reader");
        assetReader = nil;
        return;
    }

    [assetReader addOutput: assetReaderOutput];

    [assetReader startReading];

    CMSampleBufferRef nextBuffer;

    if (!(nextBuffer = [assetReaderOutput copyNextSampleBuffer]))
    {
        return;
    }
    CMSampleBufferGetTotalSampleSize (nextBuffer);
    // Extract bytes from buffer
    CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(nextBuffer);

    NSInteger len = CMBlockBufferGetDataLength(dataBuffer);
    if (len < chunkCount*CHUNK)
    {
        printf("CHUNK is too big\n");
        CFRelease(nextBuffer);
        return;
    }
    UInt8* buf = malloc(len);
    CMBlockBufferCopyDataBytes(dataBuffer, 0, len, buf);

    // Average the raw byte values of each CHUNK-sized block and print them.
    for (int ii = 0; ii < chunkCount*CHUNK; ii+=CHUNK)
    {
        CGFloat av = 0;
        for (int jj = 0; jj < CHUNK; jj++)
        {
            av += (CGFloat) buf[jj+ii];
        }
        printf("Time: %f av: %f\n", (CGFloat)(ii+CHUNK*sample)/(CGFloat)SAMPLE_RATE,  av/(CGFloat)CHUNK);
    }
    printf("\n");

    free(buf);
    CFRelease(nextBuffer);
}

Here is the output:

{800/8000 = 0.100}
Time: 0.100000 av: 149.013748
Time: 0.200000 av: 100.323753
Time: 0.300000 av: 146.991257
Time: 0.400000 av: 106.763748
Time: 0.500000 av: 145.020004

{1600/8000 = 0.200}
Time: 0.200000 av: 145.011246
Time: 0.300000 av: 110.718750
Time: 0.400000 av: 154.543747
Time: 0.500000 av: 112.025002

{2400/8000 = 0.300}
Time: 0.300000 av: 149.278748
Time: 0.400000 av: 104.477501
Time: 0.500000 av: 158.162506

Please help.

2 answers:

Answer 0 (score: 6)

It seems to me that the problem is assuming the following code seeks exactly to startTime:

CMTimeRange timeRange = CMTimeRangeMake(startTime, kCMTimePositiveInfinity);
assetReader.timeRange = timeRange;

You can test this with a call to

CMSampleBufferGetOutputPresentationTimeStamp(nextBuffer);

With this you will be able to see the exact time (in seconds) at which the buffer actually starts.
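For example, a minimal sketch of my own, dropped in right after the copyNextSampleBuffer call in the question's code (the log format is an assumption, not part of the original answer):

CMTime actualStart = CMSampleBufferGetOutputPresentationTimeStamp(nextBuffer);
NSLog(@"requested start: %f s, actual buffer start: %f s",
      CMTimeGetSeconds(startTime), CMTimeGetSeconds(actualStart));

If the two values differ, the chunk boundaries shift between runs and the averages will no longer line up.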

Answer 1 (score: 0)

In my own experience, seeking with

assetReader.timeRange = CMTimeRangeMake(CMTimeMake(sample, sample_rate), kCMTimePositiveInfinity)

works flawlessly; there is no precision problem with the seek itself.

What you are probably running into is a fade-in problem: AVAssetReader appears to fade in the first 1024 samples it returns (possibly a bit more). I fixed it by starting the read 1024 samples before the position I actually want, and then skipping those 1024 samples.
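A minimal sketch of that workaround, written as changes to the question's test:sample:showChunks: method (the FADE_IN_SAMPLES constant and the assumed 2 bytes per frame for 16-bit mono PCM are my own additions, not from the original answer):

#define FADE_IN_SAMPLES 1024  // samples AVAssetReader appears to fade in

// Start reading FADE_IN_SAMPLES earlier than the position we actually want.
NSInteger paddedSample = sample*CHUNK - FADE_IN_SAMPLES;
if (paddedSample < 0) paddedSample = 0;  // never seek before the start of the file
assetReader.timeRange = CMTimeRangeMake(CMTimeMake(paddedSample, SAMPLE_RATE),
                                        kCMTimePositiveInfinity);

// ... read and copy the bytes into buf as before (making sure enough extra
// bytes were read), then skip the padding before averaging.
NSInteger bytesPerFrame = 2;  // assumes 16-bit mono PCM output
NSInteger skipBytes = (sample*CHUNK - paddedSample) * bytesPerFrame;
UInt8* usable = buf + skipBytes;  // data starting at the originally requested offset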