I'm trying to do real-time signal processing with pyaudio: read sound from the microphone and simultaneously play back the processed audio. I don't know how to convert the ndarray back into a buffer so that the audio is output correctly. Here is the code, with comments:
import numpy as np
import pyaudio

CHUNK = 2048
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,
                channels=1,
                rate=44100,
                input=True,
                output=True,
                frames_per_buffer=CHUNK)

for i in range(100):
    data = stream.read(CHUNK)  # get data with 'bytes' type
    indata = np.frombuffer(data, dtype=np.int16) / 32768.0  # convert bytes to ndarray
    processed = audio_processing(indata)  # do some processing
    p_bytes = (processed * 32768.0).astype(np.int16)  # here is the problem
    stream.write(p_bytes)
How do I get stream.write() to correctly accept p_bytes? Thanks in advance.
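For what it's worth, the int16-bytes to float-ndarray round trip can be sketched with numpy alone (no audio hardware needed); the sample values here are made up for illustration, and `stream.write()` expects a bytes-like object, which `ndarray.tobytes()` produces:

```python
import numpy as np

# Simulated raw bytes, as stream.read() would return them (int16 PCM, mono)
raw = np.array([0, 16384, -16384, 32767], dtype=np.int16).tobytes()

# bytes -> float ndarray scaled to roughly [-1, 1)
indata = np.frombuffer(raw, dtype=np.int16) / 32768.0

# ... signal processing would go here; identity for this sketch ...
processed = indata

# float ndarray -> int16 ndarray -> bytes for stream.write()
out_bytes = (processed * 32768.0).astype(np.int16).tobytes()

assert out_bytes == raw  # lossless round trip for these samples
```

Since scaling by a power of two is exact in floating point, the round trip reproduces the original samples bit-for-bit in this sketch.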