I am trying to form spectrograms from 8-minute-long dolphin acoustic recordings and save them as image files. The goal is to use the denoising and extraction algorithm described in this paper: https://pdfs.semanticscholar.org/9cbf/d5b23600d4976f27ea454329f183b1fe6166.pdf

The basic idea is that dolphin clicks almost always appear on a spectrogram as vertical broadband spikes, while whistles curve diagonally. The algorithm finds peak-intensity pixels and then examines the neighboring pixels: if a pixel's neighborhood is vertical, it is attenuated; if not, it is kept.
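For anyone trying to picture the neighborhood test, here is a minimal sketch of that idea — my own interpretation, not the paper's actual implementation; the percentile threshold, neighborhood size, and attenuation factor are all assumed values:

```python
import numpy as np

def attenuate_clicks(spec_mag, peak_percentile=95, ratio=3.0, atten=0.1):
    """Attenuate vertically oriented (broadband click) energy in a magnitude
    spectrogram of shape (freq_bins, time_bins).

    For each peak pixel, compare the mean energy of its vertical neighborhood
    (same time bin, adjacent frequencies) with its horizontal neighborhood
    (same frequency, adjacent time bins). Vertical-dominated peaks are scaled
    down; diagonal/horizontal structure (whistles) is left alone.
    """
    out = spec_mag.copy()
    thresh = np.percentile(spec_mag, peak_percentile)
    nf, nt = spec_mag.shape
    for f, t in np.argwhere(spec_mag > thresh):
        f0, f1 = max(f - 2, 0), min(f + 3, nf)
        t0, t1 = max(t - 2, 0), min(t + 3, nt)
        vert = spec_mag[f0:f1, t].mean()   # energy along the frequency axis
        horiz = spec_mag[f, t0:t1].mean()  # energy along the time axis
        if vert > ratio * horiz:           # neighborhood is vertical -> click
            out[f, t] *= atten
    return out
```

This operates on linear magnitudes rather than dB values so the ratio comparison stays meaningful.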
This code is derived from http://www.frank-zalkow.de/en/code-snippets/create-audio-spectrograms-with-python.html?ckattempt=1, because I like how easy it is to adjust the binsize factor in the plotstft def.

The code creates a spectrogram in which I can zoom in and find sufficiently clear signals, but when I look at the saved .png, the resolution is not good enough when zoomed in. Whistles typically last 0.4–1.3 s and each click about 0.1 s, so I need to be able to zoom in to a resolution that suits events of those durations.

Additionally, I am trying to figure out how to filter so that only frequencies >= 5000 Hz and <= 16000 Hz are kept, but I'm not sure what adjustments to make in the code.

I have already adjusted fig1.savefig('Spectrogram.png', dpi=500) to increase the dpi; at 1000 dpi I run into a memory error.

I have also increased binsize in def plotstft(audiopath, binsize=2**12, plotpath=None, colormap="Greys") to 2**12, which gives good resolution when I use the zoom feature on the figure.
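To size binsize against the 0.1 s clicks and 0.4–1.3 s whistles, it helps to compute the time covered by one spectrogram column explicitly (the 96 kHz sample rate below is just an assumed example value, not taken from the recording):

```python
def stft_resolution(samplerate, binsize, overlap=0.5):
    """Seconds per spectrogram column and Hz per frequency bin for an STFT
    with the given frame size and overlap fraction."""
    hop = binsize * (1.0 - overlap)  # samples advanced per column
    return hop / samplerate, samplerate / binsize

# assumed example: a 96 kHz recording with binsize = 2**12
dt, df = stft_resolution(96000, 2**12)
print(dt, df)  # ~0.0213 s per column, 23.4375 Hz per bin
```

At those values a 0.1 s click spans only ~5 columns, which is why saved images need roughly one output pixel per column to stay legible when zoomed.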
As for filtering the frequencies: I have been trying to figure out which array contains the frequencies themselves, so that I can remove the out-of-range frequencies from it, but I haven't had much luck.
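In this code the bin-center frequencies are the `freq` list returned by `logscale_spec`, and the second axis of `ims` (shape `(timebins, freqbins)`) lines up with it. One way to band-limit — a sketch with the 5–16 kHz limits from the question hard-coded as defaults — is a boolean mask over the frequency axis:

```python
import numpy as np

def band_limit(spec, freqs, fmin=5000.0, fmax=16000.0):
    """Keep only the columns of spec whose center frequency is in [fmin, fmax].

    spec  : array of shape (timebins, freqbins), e.g. `ims` in plotstft
    freqs : bin-center frequencies of length freqbins, e.g. `freq`
            as returned by logscale_spec
    """
    freqs = np.asarray(freqs)
    keep = (freqs >= fmin) & (freqs <= fmax)
    return spec[:, keep], freqs[keep]
```

In `plotstft` this could be applied right after `logscale_spec`/the dB conversion, e.g. `ims, freq = band_limit(ims, freq)`, with the ylim/yticks code then using the reduced `freq` list.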
import numpy as np
from matplotlib import pyplot as plt
import scipy.io.wavfile as wav
from numpy.lib import stride_tricks
def stft(sig, frameSize, overlapFac=0.5, window=np.hanning):
    win = window(frameSize)
    hopSize = int(frameSize - np.floor(overlapFac * frameSize))

    # zeros at beginning (thus center of 1st window should be for sample nr. 0)
    samples = np.append(np.zeros(int(np.floor(frameSize/2.0))), sig)
    # cols for windowing
    cols = np.ceil((len(samples) - frameSize) / float(hopSize)) + 1
    # zeros at end (thus samples can be fully covered by frames)
    samples = np.append(samples, np.zeros(frameSize))

    frames = stride_tricks.as_strided(samples, shape=(int(cols), frameSize),
                                      strides=(samples.strides[0]*hopSize, samples.strides[0])).copy()
    frames *= win

    return np.fft.rfft(frames)
def logscale_spec(spec, sr=44100, factor=20.):
    timebins, freqbins = np.shape(spec)

    scale = np.linspace(0, 1, freqbins) ** factor
    scale *= (freqbins-1)/max(scale)
    scale = np.unique(np.round(scale))

    # create spectrogram with new freq bins
    newspec = np.complex128(np.zeros([timebins, len(scale)]))
    for i in range(0, len(scale)):
        if i == len(scale)-1:
            newspec[:,i] = np.sum(spec[:,int(scale[i]):], axis=1)
        else:
            newspec[:,i] = np.sum(spec[:,int(scale[i]):int(scale[i+1])], axis=1)

    # list center freq of bins
    allfreqs = np.abs(np.fft.fftfreq(freqbins*2, 1./sr)[:freqbins+1])
    freqs = []
    fmin = 5000   # intended band limits -- currently unused
    fmax = 16000
    for i in range(0, len(scale)):
        if i == len(scale)-1:
            freqs += [np.mean(allfreqs[int(scale[i]):])]
        else:
            freqs += [np.mean(allfreqs[int(scale[i]):int(scale[i+1])])]

    return newspec, freqs
def plotstft(audiopath, binsize=2**12, plotpath=None, colormap="Greys"):
    samplerate, samples = wav.read(audiopath)
    s = stft(samples, binsize)
    sshow, freq = logscale_spec(s, factor=1.0, sr=samplerate)
    ims = 20.*np.log10(np.abs(sshow)/10e-6)  # amplitude to decibel

    timebins, freqbins = np.shape(ims)

    print("timebins: ", timebins)
    print("freqbins: ", freqbins)

    plt.figure(figsize=(15, 7.5))
    plt.imshow(np.transpose(ims), origin="lower", aspect="auto", cmap=colormap, interpolation="none")
    plt.colorbar()

    plt.xlabel("time (s)")
    plt.ylabel("frequency (hz)")
    plt.xlim([0, timebins-1])
    plt.ylim([0, freqbins])

    xlocs = np.float32(np.linspace(0, timebins-1, 5))
    plt.xticks(xlocs, ["%.02f" % l for l in ((xlocs*len(samples)/timebins)+(0.5*binsize))/samplerate])
    ylocs = np.int16(np.round(np.linspace(0, freqbins-1, 10)))
    plt.yticks(ylocs, ["%.02f" % freq[i] for i in ylocs])

    if plotpath:
        plt.savefig(plotpath, bbox_inches="tight")
    else:
        fig1 = plt.gcf()
        plt.show()
        fig1.savefig('Spectrogram.png', dpi=500)

    plt.clf()

    return ims
ims = plotstft('8min.wav')
Desired output: a high-resolution spectrogram image whose saved pixel resolution matches what I see when zoomed in on the interactive figure.
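One way to sidestep the dpi/memory trade-off entirely — an alternative approach, not part of the code above — is to skip the matplotlib figure and write the spectrogram array straight to a PNG with `matplotlib.pyplot.imsave`, which emits exactly one image pixel per time/frequency bin regardless of dpi (no axes or colorbar, just the raster):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

def save_native_png(ims, path, colormap="Greys"):
    """Write a spectrogram array of shape (timebins, freqbins) as a PNG with
    one image pixel per spectrogram bin -- no dpi scaling, no huge figure."""
    img = np.transpose(ims)  # put frequency on the vertical axis
    plt.imsave(path, img, origin="lower", cmap=colormap)

# toy spectrogram: 200 time bins x 100 frequency bins
ims = np.random.rand(200, 100)
save_native_png(ims, "spec_native.png")
```

Since the saved image keeps the full native resolution of `ims`, zooming into the PNG shows the same detail as zooming in the interactive figure window.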