I want to write a very basic application that passes audio from the microphone to the speakers. This is quite simple with PyAudio, as described at https://people.csail.mit.edu/hubert/pyaudio/:
import pyaudio
import time

def passthrough():
    WIDTH = 2
    CHANNELS = 1
    RATE = 44100
    p = pyaudio.PyAudio()

    # Echo every captured block straight back to the output device.
    def callback(in_data, frame_count, time_info, status):
        return (in_data, pyaudio.paContinue)

    stream = p.open(format=p.get_format_from_width(WIDTH),
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    output=True,
                    stream_callback=callback)
    stream.start_stream()
    while stream.is_active():
        time.sleep(0.1)
    stream.stop_stream()
    stream.close()
    p.terminate()
But now I am trying to mix a wave file into this stream whenever an event occurs, and that is where I am stuck. Playing a wave file by itself also seems easy enough:
import pyaudio
import wave
import time

def play_wave(wav_file):
    wf = wave.open(wav_file, 'rb')
    sample_width = wf.getsampwidth()
    channels = wf.getnchannels()
    rate = wf.getframerate()
    second = sample_width * channels * rate  # bytes per second (unused)

    def callback(in_data, frame_count, time_info, status):
        data = wf.readframes(frame_count)
        # Signal completion once the file runs out of frames.
        if len(data) < frame_count * sample_width * channels:
            return (data, pyaudio.paComplete)
        return (data, pyaudio.paContinue)

    p = pyaudio.PyAudio()
    stream = p.open(format=p.get_format_from_width(sample_width),
                    channels=channels,
                    rate=rate,
                    output=True,
                    stream_callback=callback)
    stream.start_stream()
    while stream.is_active():
        time.sleep(0.1)
    stream.stop_stream()
    stream.close()
    wf.close()
    p.terminate()
At the moment I have two problems.
Hopefully someone can shed some light into the dark basement I am sitting in right now.
Edit: Assume the wave file has the same number of channels and the same rate as the stream, so no conversion is necessary.
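The mixing step itself is not shown in the question, but under that same-format assumption one way to combine a microphone block with wav data is plain sample-wise addition with clipping. `mix_frames` below is a hypothetical helper sketched for illustration, not code from the post; it assumes 16-bit PCM:

```python
import numpy as np

def mix_frames(mic_bytes, wav_bytes):
    """Mix two 16-bit PCM buffers sample by sample, clipping to the int16 range."""
    mic = np.frombuffer(mic_bytes, dtype=np.int16).astype(np.int32)
    wav = np.frombuffer(wav_bytes, dtype=np.int16).astype(np.int32)
    # Pad the shorter buffer with silence (e.g. the last chunk of a wav file).
    n = max(len(mic), len(wav))
    mic = np.pad(mic, (0, n - len(mic)))
    wav = np.pad(wav, (0, n - len(wav)))
    # Sum in int32 so loud inputs don't wrap around, then clip back to int16.
    mixed = np.clip(mic + wav, -32768, 32767).astype(np.int16)
    return mixed.tobytes()
```

Inside the passthrough callback this would turn `return (in_data, pyaudio.paContinue)` into `return (mix_frames(in_data, wav_chunk), pyaudio.paContinue)` whenever a wav chunk is pending.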
Answer 0 (score: 0)
After moving the passthrough() function into a thread, it works as expected. When I tried this yesterday I simply messed up starting the thread (in
So here is the complete, working code.
import pyaudio
import wave
import threading
import time

class AudioPass(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        self.passthrough()

    def passthrough(self):
        WIDTH = 2
        CHANNELS = 1
        RATE = 44100
        p = pyaudio.PyAudio()

        # Echo every captured block straight back to the output device.
        def callback(in_data, frame_count, time_info, status):
            return (in_data, pyaudio.paContinue)

        stream = p.open(format=p.get_format_from_width(WIDTH),
                        channels=CHANNELS,
                        rate=RATE,
                        input=True,
                        output=True,
                        stream_callback=callback)
        stream.start_stream()
        while stream.is_active():
            time.sleep(0.1)
        stream.stop_stream()
        stream.close()
        p.terminate()

def play_wave(wav_file):
    wf = wave.open(wav_file, 'rb')
    sample_width = wf.getsampwidth()
    channels = wf.getnchannels()
    rate = wf.getframerate()
    second = sample_width * channels * rate  # bytes per second (unused)

    def callback(in_data, frame_count, time_info, status):
        data = wf.readframes(frame_count)
        # Signal completion once the file runs out of frames.
        if len(data) < frame_count * sample_width * channels:
            return (data, pyaudio.paComplete)
        return (data, pyaudio.paContinue)

    p = pyaudio.PyAudio()
    stream = p.open(format=p.get_format_from_width(sample_width),
                    channels=channels,
                    rate=rate,
                    output=True,
                    stream_callback=callback)
    stream.start_stream()
    while stream.is_active():
        time.sleep(0.1)
    stream.stop_stream()
    stream.close()
    wf.close()
    p.terminate()

thread = AudioPass()
thread.start()
play_wave('C:/bell.wav')
Later I will also try another approach, and if that works well too, I will post it here as an alternative. The threaded approach is nice because I can use different rates for the stream and the wav file.
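The post never shows that alternative, but one way a single-stream version might look (my sketch, not the author's) is to hand wav chunks to the passthrough callback through a thread-safe queue and mix them there. `wav_queue` and `mix_into` are hypothetical names, and the sketch assumes 16-bit PCM at a shared rate:

```python
import queue

import numpy as np

# Chunks of int16 PCM (one per callback block) are pushed here when an event fires.
wav_queue = queue.Queue()

def mix_into(in_data):
    """Return the mic block, with one queued wav chunk mixed in if available."""
    try:
        overlay = wav_queue.get_nowait()
    except queue.Empty:
        return in_data  # nothing pending: plain passthrough
    mic = np.frombuffer(in_data, dtype=np.int16).astype(np.int32)
    wav = np.frombuffer(overlay, dtype=np.int16).astype(np.int32)
    # Pad or trim the overlay to exactly one block, then clip back to int16.
    wav = np.pad(wav, (0, max(0, len(mic) - len(wav))))[:len(mic)]
    return np.clip(mic + wav, -32768, 32767).astype(np.int16).tobytes()
```

The passthrough callback then becomes `return (mix_into(in_data), pyaudio.paContinue)`, and play_wave() shrinks to a loop that pushes `wf.readframes(frame_count)` chunks into `wav_queue`.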
Answer 1 (score: 0)
A colleague came up with the following solution. It is a very raw approach, but it works and is helpful for understanding how pyaudio works.
import time
import pyaudio
import numpy

WIDTH = 2
CHANNELS = 1
RATE = 44100

p = pyaudio.PyAudio()

SINE_WAVE_FREQUENCY = 440.0  # In Hz
SINE_WAVE_DURATION = 5.0     # In seconds
SINE_WAVE_VOLUME = 0.5
# Precompute a 5-second 440 Hz tone as 32-bit float samples.
SINE_WAVE = (numpy.sin(2 * numpy.pi * numpy.arange(RATE * SINE_WAVE_DURATION) * SINE_WAVE_FREQUENCY / RATE)).astype(numpy.float32) * SINE_WAVE_VOLUME

def loopback(in_data, frame_count, time_info, status):
    return (in_data, pyaudio.paContinue)

stream = p.open(format=p.get_format_from_width(WIDTH),
                channels=CHANNELS,
                rate=RATE,
                input=True,
                output=True,
                stream_callback=loopback)
stream.start_stream()

def playsine():
    # A second, blocking output stream; write() expects raw bytes, hence tobytes().
    sinestream = p.open(format=pyaudio.paFloat32, channels=1, rate=RATE, output=True)
    sinestream.write(SINE_WAVE.tobytes())
    sinestream.stop_stream()
    sinestream.close()

while True:
    input("Press enter to play a sine wave")
    playsine()
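One detail worth spelling out (my reading of the blocking API, not something the answer states): Stream.write() takes raw bytes, so the float32 numpy buffer should be converted with tobytes() before writing. The buffer itself can be sanity-checked without any audio hardware:

```python
import numpy as np

RATE = 44100
SINE_WAVE_FREQUENCY = 440.0
SINE_WAVE_DURATION = 5.0
SINE_WAVE_VOLUME = 0.5

# Same construction as in the answer above.
sine = (np.sin(2 * np.pi * np.arange(RATE * SINE_WAVE_DURATION)
               * SINE_WAVE_FREQUENCY / RATE)).astype(np.float32) * SINE_WAVE_VOLUME

pcm = sine.tobytes()  # what the blocking write() call should receive

# 5 s at 44100 Hz gives 220500 samples; amplitude is bounded by the volume,
# and each float32 sample occupies 4 bytes in the raw buffer.
assert len(sine) == int(RATE * SINE_WAVE_DURATION)
assert float(abs(sine).max()) <= SINE_WAVE_VOLUME
assert len(pcm) == 4 * len(sine)
```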