AudioKit: How to use the AKPlayer playAt method in Objective-C?

Date: 2018-12-25 17:31:04

Tags: objective-c audiokit

In an iOS app written in Objective-C, I want to start several AKPlayers in sync after a given short delay.

I found the following Swift code in AudioKit's source, in the file AKTiming.swift:

let bufferDuration = AKSettings.ioBufferDuration
let referenceTime = AudioKit.engine.outputNode.lastRenderTime ?? AVAudioTime.now()
let startTime = referenceTime + bufferDuration
for node in nodes {
    node.start(at: startTime)
}

How can I do something similar in Objective-C, with the buffer duration given as an NSTimeInterval parameter?

Unfortunately, Objective-C has no way to add things like referenceTime + bufferDuration when referenceTime is an AVAudioTime variable, and the now() method does not exist either.

Apple's documentation of the AVAudioTime class is very short and was not much help to me.

Can I use the static method hostTimeForSeconds to convert the NSTimeInterval into a hostTime, and then create an AVAudioTime instance with timeWithHostTime?
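
In other words, something along these lines is what I have in mind (untested; the 0.25 s delay is just a placeholder):

// Untested sketch of the idea; requires #import <mach/mach_time.h>.
NSTimeInterval delay = 0.25; // hypothetical short delay in seconds
uint64_t delayTicks = [AVAudioTime hostTimeForSeconds:delay];
AVAudioTime *startTime = [AVAudioTime timeWithHostTime:mach_absolute_time() + delayTicks];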

Thanks for your help!

Matthias

2 Answers:

Answer 0 (score: 0)

"Can I use the static method hostTimeForSeconds to convert the NSTimeInterval into a hostTime, and then create an AVAudioTime instance with timeWithHostTime?"

Yes!

If you want to handle the case where lastRenderTime is NULL, you will also need to #import <mach/mach_time.h> and use mach_absolute_time().

double bufferDuration = AKSettings.ioBufferDuration;

// Fall back to "now" when the engine has no render time yet (lastRenderTime is NULL).
AVAudioTime *referenceTime = AudioKit.engine.outputNode.lastRenderTime ?: [[AVAudioTime alloc] initWithHostTime:mach_absolute_time()];
uint64_t startHostTime = referenceTime.hostTime + [AVAudioTime hostTimeForSeconds:bufferDuration];
AVAudioTime *startTime = [[AVAudioTime alloc] initWithHostTime:startHostTime];

for (AKPlayer *node in nodes) {
    [node startAt:startTime];
}
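
Passing the same startTime to every player is what keeps them in sync: each one schedules itself against the same host-time deadline instead of starting as soon as it is told to.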

Answer 1 (score: 0)

"Unfortunately, Objective-C has no way to add things like referenceTime + bufferDuration when referenceTime is an AVAudioTime variable, and the now() method does not exist either."

They don't exist because they are not part of AVAudioTime; they are AudioKit extensions.

Take a look at their source code and you'll find:

/// An AVAudioTime with a valid hostTime representing now.
public static func now() -> AVAudioTime {
    return AVAudioTime(hostTime: mach_absolute_time())
}

/// Returns an AVAudioTime offset by seconds.
open func offset(seconds: Double) -> AVAudioTime {

    if isSampleTimeValid && isHostTimeValid {
        return AVAudioTime(hostTime: hostTime + seconds / ticksToSeconds,
                           sampleTime: sampleTime + AVAudioFramePosition(seconds * sampleRate),
                           atRate: sampleRate)
    } else if isHostTimeValid {
        return AVAudioTime(hostTime: hostTime + seconds / ticksToSeconds)
    } else if isSampleTimeValid {
        return AVAudioTime(sampleTime: sampleTime + AVAudioFramePosition(seconds * sampleRate),
                           atRate: sampleRate)
    }
    return self
}

public func + (left: AVAudioTime, right: Double) -> AVAudioTime {
    return left.offset(seconds: right)
}

You could also implement these extensions yourself. I don't think you can implement the + operator in Objective-C, so you'll just have to use the offset method instead. (Note: I haven't tested the following.)

#import <mach/mach_time.h>
#import <AVFoundation/AVFoundation.h>

// Seconds per host-time tick, derived from the mach timebase.
static double ticksToSeconds(void) {
    struct mach_timebase_info tinfo;
    mach_timebase_info(&tinfo);
    double timecon = (double)tinfo.numer / (double)tinfo.denom;
    return timecon * 0.000000001;
}

@interface AVAudioTime (Extensions)

+ (AVAudioTime *)now;
- (AVAudioTime *)offsetWithSeconds:(double)seconds;

@end

@implementation AVAudioTime (Extensions)

+ (AVAudioTime *)now {
    return [[AVAudioTime alloc] initWithHostTime:mach_absolute_time()];
}

- (AVAudioTime *)offsetWithSeconds:(double)seconds {
    if ([self isSampleTimeValid] && [self isHostTimeValid]) {
        return [[AVAudioTime alloc] initWithHostTime:self.hostTime + (uint64_t)(seconds / ticksToSeconds())
                                          sampleTime:self.sampleTime + (AVAudioFramePosition)(seconds * self.sampleRate)
                                              atRate:self.sampleRate];
    }
    else if ([self isHostTimeValid]) {
        return [[AVAudioTime alloc] initWithHostTime:self.hostTime + (uint64_t)(seconds / ticksToSeconds())];
    }
    else if ([self isSampleTimeValid]) {
        return [[AVAudioTime alloc] initWithSampleTime:self.sampleTime + (AVAudioFramePosition)(seconds * self.sampleRate)
                                                atRate:self.sampleRate];
    }
    return self;
}

@end
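
With the category in place, the Swift snippet from the question maps over almost line for line. A minimal sketch, assuming nodes is a collection of AKPlayer instances you have already configured:

double bufferDuration = AKSettings.ioBufferDuration;

// Use the engine's last render time as the reference, falling back to "now".
AVAudioTime *referenceTime = AudioKit.engine.outputNode.lastRenderTime ?: [AVAudioTime now];
AVAudioTime *startTime = [referenceTime offsetWithSeconds:bufferDuration];

// Every player gets the same deadline, so they start together.
for (AKPlayer *node in nodes) {
    [node startAt:startTime];
}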