Updating an AudioKit plot and displaying the full waveform from an AKAudioFile buffer

Date: 2018-08-21 14:35:46

Tags: ios audiokit ezaudio

I have a plot:

self.plot = EZAudioPlot()
self.plot?.frame = CGRect(x:0, y:0, width:Int(self.plotWidth!), height:Int(self.plotOutlet!.bounds.height))
self.plot?.plotType = EZPlotType.buffer
self.plot?.shouldFill = true
self.plot?.shouldMirror = true
self.plot?.shouldOptimizeForRealtimePlot = true
self.plot?.color = UIColor.white
self.plot?.backgroundColor = UIColor(red:0,green:0,blue:0,alpha:0)
self.plot?.gain = 1
self.plotOutlet?.addSubview(self.plot!)

and I want to update it with a buffer copied from an AKAudioFile:

self.plot?.updateBuffer(file.pcmBuffer.floatChannelData![0], withBufferSize: file.pcmBuffer.frameLength)

This says: pcmBuffer:265: error: could not read IntBuffer
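One way around the pointer gymnastics (a minimal sketch, not tested against this exact project; the helper name, the channel-0-only choice, and the assumption that EZAudioPlot is reachable through the AudioKit import are mine) is to copy the channel data into a plain [Float] first, since a mutable Swift array bridges to the UnsafeMutablePointer<Float> that updateBuffer(_:withBufferSize:) takes:

import AudioKit
import AVFoundation

// Hypothetical helper: copy channel 0 of the AKAudioFile's PCM buffer into a
// Swift array and hand that to the plot.
func updatePlot(_ plot: EZAudioPlot, from file: AKAudioFile) {
    let buffer = file.pcmBuffer                                  // AVAudioPCMBuffer
    guard let channels = buffer.floatChannelData else { return } // nil for non-float formats

    let frameCount = Int(buffer.frameLength)
    // Copy channel 0 into contiguous Swift-owned storage.
    var samples = [Float](UnsafeBufferPointer(start: channels[0], count: frameCount))

    // A mutable array bridges to UnsafeMutablePointer<Float> for the duration of the call.
    samples.withUnsafeMutableBufferPointer { ptr in
        plot.updateBuffer(ptr.baseAddress, withBufferSize: UInt32(frameCount))
    }
}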

I tried another approach, converting the buffer to EZAudioFloatData first, but I can't get the type conversion right:

let data = EZAudioFloatData(numberOfChannels: 2, buffers: file.pcmBuffer.floatChannelData, bufferSize: file.pcmBuffer.frameLength)

The compiler complains: Cannot convert value of type 'UnsafePointer&lt;UnsafeMutablePointer&lt;Float&gt;&gt;' to expected argument type 'UnsafeMutablePointer&lt;UnsafeMutablePointer&lt;Float&gt;?&gt;!'

What am I doing wrong? BTW: I know I could load the wav file with EZAudioFile() and then get the buffer with .getWaveformData(), but for several reasons I'd like to know whether the same thing can be done with an AKAudioFile.
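If the EZAudioFloatData route is preferred, the mismatch is purely in how the pointers are imported: floatChannelData arrives as UnsafePointer&lt;UnsafeMutablePointer&lt;Float&gt;&gt;, while the Objective-C float ** parameter is imported as UnsafeMutablePointer&lt;UnsafeMutablePointer&lt;Float&gt;?&gt;!. A sketch of bridging between the two (my own helper, under the assumption that EZAudioFloatData copies the sample data it is given) could look like:

import AudioKit
import AVFoundation

// Hypothetical bridging helper: repackage the buffer's channel pointers into the
// pointer-of-optional-pointers layout that the imported float ** parameter expects.
func makeFloatData(from buffer: AVAudioPCMBuffer) -> EZAudioFloatData? {
    guard let channelData = buffer.floatChannelData else { return nil }

    let channelCount = Int(buffer.format.channelCount)
    let channels = UnsafeMutablePointer<UnsafeMutablePointer<Float>?>.allocate(capacity: channelCount)
    defer { channels.deallocate() } // only frees the pointer array, not the samples

    for ch in 0..<channelCount {
        channels[ch] = channelData[ch] // same underlying sample pointers, re-wrapped
    }

    return EZAudioFloatData(numberOfChannels: Int32(channelCount),
                            buffers: channels,
                            bufferSize: buffer.frameLength)
}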

--- UPDATE:

I was able to add the waveform feature directly to the AKWaveTableDSPKernel.hpp file:

EZAudioFloatData* getWaveformData(UInt32 numberOfPoints) {

    if (numCh == 0 || current_size < numberOfPoints) {
        // prevent division by zero
        return nil;
    }

    EZAudioFloatData *waveformData;

    float **data = (float **)malloc(sizeof(float *) * numCh);
    for (int i = 0; i < numCh; i++)
    {
        data[i] = (float *)malloc(sizeof(float) * numberOfPoints);
    }

    // calculate the required number of frames per buffer
    SInt64 framesPerBuffer = ((SInt64)current_size / numberOfPoints);

    // read through file and calculate rms at each point
    for (SInt64 i = 0; i < numberOfPoints; i++) {

        for (int channel = 0; channel < numCh; channel++) {

            float channelData[framesPerBuffer];
            for (int frame = 0; frame < framesPerBuffer; frame++)
            {
                if (channel == 0) {
                    channelData[frame] = ftbl1->tbl[i * framesPerBuffer + frame];
                } else {
                    channelData[frame] = ftbl2->tbl[i * framesPerBuffer + frame];
                }
            }
            float rms = [EZAudioUtilities RMS:channelData length:(UInt32)framesPerBuffer];
            data[channel][i] = rms;
        }
    }

    waveformData = [EZAudioFloatData dataWithNumberOfChannels:numCh buffers:(float **)data bufferSize:(UInt32)numberOfPoints];

    // cleanup
    for (int i = 0; i < numCh; i++) {
        free(data[i]);
    }
    free(data);

    return waveformData;
}

It just needs the generic header bindings. I'll try to make a pull request with this feature soon.
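If the bindings land, the Swift side might end up looking roughly like this; every name here (the getWaveformData wrapper on the player node in particular) is hypothetical until the pull request exists:

// Purely hypothetical usage once the kernel method is exposed to Swift.
if let waveform = player.getWaveformData(512) {
    plot.shouldOptimizeForRealtimePlot = false
    plot.updateBuffer(waveform.buffers[0], withBufferSize: waveform.bufferSize)
}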
