How do I get the notes (frequency and time) from a spectrogram in Python?

Time: 2017-11-05 16:43:52

Tags: python frequency spectrogram

I am trying to create a program in Python into which I can upload a music file and get the notes (as on a piano) from that file. I have created a spectrogram; now how do I get the frequencies from it? And how do I fix the spectrogram (half of it is a mirror reflection of the other half)? I need something like this. Here is my code.

import numpy as np
from matplotlib import pyplot as plt
import scipy.io.wavfile as wav
from numpy.lib import stride_tricks

""" short-time Fourier transform of audio signal """
def stft(sig, frameSize, overlapFac=0.5, window=np.hanning):
    win = window(frameSize)
    hopSize = int(frameSize - np.floor(overlapFac * frameSize))

    # zeros at beginning (thus center of 1st window should be for sample nr. 0)
    samples = np.append(np.zeros(int(np.floor(frameSize / 2.0))), sig)
    # number of frames needed for windowing (must be an int for as_strided)
    cols = int(np.ceil((len(samples) - frameSize) / float(hopSize))) + 1
    # zeros at end (thus samples can be fully covered by frames)
    samples = np.append(samples, np.zeros(frameSize))

    frames = stride_tricks.as_strided(
        samples,
        shape=(int(cols), frameSize),
        strides=(samples.strides[0] * hopSize, samples.strides[0])
    ).copy()
    frames *= win

    return np.fft.rfft(frames)    

""" scale frequency axis logarithmically """    
def logscale_spec(spec, sr=44100, factor=20.):
    timebins, freqbins = np.shape(spec)

    scale = np.linspace(0, 1, freqbins) ** factor
    scale *= (freqbins-1)/max(scale)
    scale = np.unique(np.round(scale)).astype(int)  # integer bin edges, usable as indices

    # create spectrogram with new freq bins
    newspec = np.zeros([timebins, len(scale)], dtype=np.complex128)
    for i in range(0, len(scale)):
        if i == len(scale) - 1:
            newspec[:, i] = np.sum(spec[:, scale[i]:], axis=1)
        else:
            newspec[:, i] = np.sum(spec[:, scale[i]:scale[i+1]], axis=1)

    # list center freq of bins
    allfreqs = np.abs(np.fft.fftfreq(freqbins*2, 1./sr)[:freqbins+1])
    freqs = []
    for i in range(0, len(scale)):
        if i == len(scale)-1:
            freqs += [np.mean(allfreqs[scale[i]:])]
        else:
            freqs += [np.mean(allfreqs[scale[i]:scale[i+1]])]

    return newspec, freqs

""" plot spectrogram """
def plotstft(audiopath, binsize=2**10, plotpath=None, colormap="jet"):
    samplerate, samples = wav.read(audiopath)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)  # mix stereo down to mono so stft gets a 1-D signal
    s = stft(samples, binsize)

    sshow, freq = logscale_spec(s, factor=1.0, sr=samplerate)
    ims = 20.*np.log10(np.abs(sshow)/10e-6) # amplitude to decibel

    timebins, freqbins = np.shape(ims)

    plt.figure(figsize=(15, 7.5))
    plt.imshow(np.transpose(ims), origin="lower", aspect="auto", cmap=colormap, interpolation="none")
    plt.colorbar()

    plt.xlabel("time (s)")
    plt.ylabel("frequency (Hz)")
    plt.xlim([0, timebins-1])
    plt.ylim([0, freqbins])

    xlocs = np.float32(np.linspace(0, timebins-1, 5))
    plt.xticks(xlocs, ["%.02f" % l for l in ((xlocs*len(samples)/timebins)+(0.5*binsize))/samplerate])
    ylocs = np.int16(np.round(np.linspace(0, freqbins-1, 10)))
    plt.yticks(ylocs, ["%.02f" % freq[i] for i in ylocs])

    if plotpath:
        plt.savefig(plotpath, bbox_inches="tight")
    else:
        plt.show()

    plt.clf()

plotstft("Sound/piano2.wav")
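Once the STFT matrix exists, a first step toward reading frequencies out of it is to pick, for each time frame, the bin with the largest magnitude and convert its index to Hz. This is only a minimal sketch, not full note transcription; `dominant_frequencies` is a hypothetical helper, and the bin-to-Hz conversion assumes the linear-frequency output of `np.fft.rfft` (not the log-scaled spectrogram from `logscale_spec`):

```python
import numpy as np

def dominant_frequencies(spectrum, sample_rate, frame_size):
    """For each STFT frame, return the frequency (Hz) of the strongest bin.

    spectrum: complex array of shape (num_frames, frame_size // 2 + 1),
    as returned by np.fft.rfft over windowed frames.
    """
    magnitudes = np.abs(spectrum)
    peak_bins = magnitudes.argmax(axis=1)  # strongest bin per frame
    bin_width = sample_rate / frame_size   # Hz covered by one FFT bin
    return peak_bins * bin_width

# toy example: two frames of a pure 440 Hz tone, 44.1 kHz, 1024-sample frames
sr, n = 44100, 1024
t = np.arange(n) / sr
frame = np.sin(2 * np.pi * 440 * t) * np.hanning(n)
spec = np.fft.rfft(np.stack([frame, frame]))
print(dominant_frequencies(spec, sr, n))  # within one bin (~43 Hz) of 440
```

The resolution is limited to one bin width (`sample_rate / frame_size`, about 43 Hz here), so for low piano notes you would need longer frames or peak interpolation.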

1 answer:

Answer 0 (score: 0):

The audio transcription problem you describe is a well-known problem in the music information retrieval (MIR) research community. It is not an easy one to solve, and it has two aspects:

  • Pitch detection: detecting the pitch frequencies is often hard because of the harmonics that appear, and because notes are frequently slid into (so a C# may be detected instead of a C), as well as because of tuning differences.

  • Beat detection: audio performances are usually not played exactly in time, so finding the actual onsets can be tricky.
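As a concrete illustration of the first point, once a pitch frequency has been estimated it can be snapped to the nearest piano note using the MIDI convention (A4 = 440 Hz = MIDI note 69). A small sketch, assuming equal temperament; `freq_to_note` and `NOTE_NAMES` are illustrative names, not part of any library:

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq_hz):
    """Map a frequency in Hz to the nearest equal-tempered note name."""
    midi = int(round(69 + 12 * np.log2(freq_hz / 440.0)))  # MIDI note number
    octave = midi // 12 - 1                                # MIDI 60 -> C4
    return f"{NOTE_NAMES[midi % 12]}{octave}"

print(freq_to_note(440.0))   # A4
print(freq_to_note(261.63))  # C4
```

This rounding is exactly where the C-versus-C# ambiguity above bites: a note a quarter-tone sharp lands halfway between two MIDI numbers.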

A promising newer approach is to apply deep neural networks to this problem, for example:

Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392.

More information:

Poliner, G. E., Ellis, D. P., Ehmann, A. F., Gómez, E., Streich, S., & Ong, B. (2007). Melody transcription from music audio: Approaches and evaluation. IEEE Transactions on Audio, Speech, and Language Processing, 15(4), 1247-1256.
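As a rough sketch of the second point (onset detection), a classic baseline is spectral flux: sum the per-bin magnitude increases between consecutive STFT frames and mark local peaks above a threshold. This is an illustrative baseline only, not a production onset detector, and the adaptive threshold heuristic is an assumption:

```python
import numpy as np

def spectral_flux_onsets(spectrum, threshold=None):
    """Flag frames where the spectrum's energy jumps up, as candidate onsets.

    spectrum: STFT array of shape (num_frames, num_bins), real or complex.
    Returns an array of frame indices flagged as onsets.
    """
    mag = np.abs(spectrum)
    # positive spectral flux: per-bin magnitude increases, summed over bins
    flux = np.maximum(np.diff(mag, axis=0), 0).sum(axis=1)
    if threshold is None:
        threshold = flux.mean() + flux.std()  # crude adaptive threshold
    # keep frames whose flux exceeds the threshold and is a local peak
    onsets = [i for i in range(1, len(flux) - 1)
              if flux[i] > threshold
              and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]]
    return np.array(onsets) + 1  # +1 because flux[i] compares frame i+1 to frame i
```

Pairing the onset frames found here with the per-frame pitch estimates gives the (frequency, time) pairs the question asks for, at least for simple monophonic material.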