Applying a CNN to a short-time Fourier transform?

Asked: 2019-05-23 21:39:49

Tags: python-3.x conv-neural-network fft

So I have code that returns the short-time Fourier transform spectrum of a .wav file. I want to be able to take, say, one millisecond of the spectrum and train a CNN on it.

I'm not quite sure how to implement this. I know how to format image data to feed into a CNN, and how to train the network, but I'm lost on how to take the FFT data and divide it into small time segments.

FFT code (sorry, it's long):


import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from skimage import util

rate, audio = wavfile.read('scale_a_lydian.wav')

audio = np.mean(audio, axis=1)  # mix stereo down to mono

N = audio.shape[0]
L = N / rate

M = 1024

# Audio is 44.1 kHz, i.e. ~44100 samples / second.
# The window takes 1024 samples, or ~0.023 seconds of audio (1024 / 44100),
# and shifts by 100 samples each time,
# so there end up being roughly (total_samples - 1024) / 100 total slices.

slices = util.view_as_windows(audio, window_shape=(M,), step=100) #slices overlap

win = np.hanning(M + 1)[:-1]  # periodic Hann window of length M
slices = slices * win         # window each 1024-sample slice (~0.023 s of audio)

slices = slices.T  # transpose so each column is one 1024-sample slice


spectrum = np.fft.fft(slices, axis=0)[:M // 2 + 1:-1]  # FFT each slice, then keep one (mirrored) half: for real input the two halves of the spectrum have equal magnitudes

spectrum = np.abs(spectrum) #take absolute value of slices

# spectrum has shape (frequency_bins, n_slices).
# Transpose to (n_slices, frequency_bins), take the first row (one slice),
# then transpose back to (frequency_bins, 1)
# (essentially get 0.01s of spectrum)

spectrum2 = spectrum.T
spectrum2 = spectrum2[:1]
spectrum2 = spectrum2.T
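To split the spectrogram into short, equal-length time chunks for a CNN (rather than taking just the first column), one possible approach is to group consecutive STFT columns. This is only a sketch: `chunk_spectrogram` and the example shapes are my own illustration, not part of the original code.

```python
import numpy as np

def chunk_spectrogram(spec, cols_per_chunk):
    """Split a (freq_bins, n_slices) spectrogram into non-overlapping
    chunks of `cols_per_chunk` columns each. Returns an array of shape
    (n_chunks, freq_bins, cols_per_chunk), ready for a CNN
    (add a trailing channel axis if the framework requires one)."""
    n_chunks = spec.shape[1] // cols_per_chunk
    spec = spec[:, :n_chunks * cols_per_chunk]  # drop the ragged tail
    return spec.reshape(spec.shape[0], n_chunks, cols_per_chunk).swapaxes(0, 1)

# Example: stand-in spectrogram with 510 frequency bins and 1000 slices.
fake = np.random.rand(510, 1000)
chunks = chunk_spectrogram(fake, 4)  # group 4 adjacent slices per training example
print(chunks.shape)                  # (250, 510, 4)
```

Because the hop between slices is `step` samples, each chunk of `k` columns spans roughly `(M + (k - 1) * step) / rate` seconds of audio, so `cols_per_chunk` can be chosen to match whatever segment length the network should see.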

The following outputs the FFT spectrum:

N = spectrum2.shape[0]
L = N / rate

f, ax = plt.subplots(figsize=(4.8, 2.4))

S = np.abs(spectrum2)
S = 20 * np.log10(S / np.max(S))

ax.imshow(S, origin='lower', cmap='viridis',
          extent=(0, L, 0, rate / 2 / 1000))
ax.axis('tight')
ax.set_ylabel('Frequency [kHz]')
ax.set_xlabel('Time [s]');
plt.show()
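As a cross-check of the manual windowing above (an aside, not part of the original code), `scipy.signal.stft` computes an equivalent magnitude spectrogram given the same window length and hop; a random array stands in for the loaded .wav data here:

```python
import numpy as np
from scipy import signal

rate = 44100
audio = np.random.randn(rate)  # stand-in for one second of loaded .wav data
M, step = 1024, 100

# Hann window, 1024-sample segments, hop of 100 samples (noverlap = M - step)
f, t, Zxx = signal.stft(audio, fs=rate, window='hann',
                        nperseg=M, noverlap=M - step, boundary=None)

S = np.abs(Zxx)       # magnitude spectrogram, shape (M // 2 + 1, n_frames)
print(S.shape[0])     # 513 frequency bins
print(t[1] - t[0])    # hop in seconds: step / rate
```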

(Feel free to correct any theoretical errors I've made in the comments.)

So as I understand it, I have a numpy array (spectrum) in which each column is a slice containing 510 samples (cut in half, since half of each FFT slice is redundant (useless?)), and each sample holds the list of frequencies?

EDIT: So the code above theoretically outputs 0.01 s of audio as a spectrum, which is exactly what I need. Is that true, or am I not thinking about this right?
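For reference, the timing arithmetic from the code comments can be checked directly (a sketch using the constants from the code above):

```python
rate = 44100  # sample rate of the .wav file
M = 1024      # window length in samples
step = 100    # hop size in samples

window_seconds = M / rate  # time span covered by one FFT slice
hop_seconds = step / rate  # time between the starts of adjacent slices

print(round(window_seconds, 4))  # 0.0232 s per slice
print(round(hop_seconds, 4))     # 0.0023 s between slices
```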

0 answers:

No answers yet.