I am analyzing spectrograms of .wav files. After finally getting the code to work, I ran into a small problem: after saving spectrograms for 700+ .wav files, I realized they all look basically the same! This isn't because they are the same audio file; it's because I don't know how to change the scale of the plot to something smaller (so I can make out the differences).
I have already tried to solve this by looking at this StackOverflow post: Changing plot scale by a factor in matplotlib
I will show the plots of two different .wav files below.
Believe it or not, these are two different .wav files, yet they look nearly identical. With the scale this broad, a computer especially has no way to pick up the differences between these two .wav files.
My code is below:

from scipy.io import wavfile
import numpy
import matplotlib.pyplot as plt

def individualWavToSpectrogram(myAudio, fileNameToSaveTo):
    print(myAudio)
    # Read file and get sampling freq [usually 44100 Hz] and sound object
    samplingFreq, mySound = wavfile.read(myAudio)
    # Check if wave file is 16 bit or 32 bit. 24 bit is not supported
    mySoundDataType = mySound.dtype
    # Convert the sound array to floating point values ranging from -1 to 1
    # (dividing by 2**15 assumes 16-bit samples)
    mySound = mySound / (2.**15)
    # Check sample points and sound channel: (5060, 2) for dual channel, (5060,) for mono
    mySoundShape = mySound.shape
    samplePoints = float(mySound.shape[0])
    # Get duration of sound file
    signalDuration = mySound.shape[0] / samplingFreq
    # If two channels, select only one; if one channel, index like a 1D array
    if len(mySound.shape) > 1:
        mySoundOneChannel = mySound[:, 0]
    else:
        mySoundOneChannel = mySound
    # Plotting the tone: represent sound by plotting pressure values against the time axis
    # Create an array of sample points in one dimension
    timeArray = numpy.arange(0, samplePoints, 1)
    timeArray = timeArray / samplingFreq
    # Scale to milliseconds
    timeArray = timeArray * 1000
    plt.rcParams['agg.path.chunksize'] = 100000
    # Plot the tone
    plt.plot(timeArray, mySoundOneChannel, color='Black')
    #plt.xlabel('Time (ms)')
    #plt.ylabel('Amplitude')
    print("trying to save")
    plt.savefig('/Users/BillyBobJoe/Desktop/' + fileNameToSaveTo + '.jpg')
    print("saved")
    #plt.show()
    #plt.close()
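One way to make per-file differences visible is to fit the axes to each file's own amplitude range instead of matplotlib's default scale. Below is a minimal sketch of that idea (the helper name `plot_scaled` and the 10% headroom factor are my own choices, not from the original code):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suited to batch-saving many plots
import matplotlib.pyplot as plt

def plot_scaled(signal, sampling_freq, out_path):
    """Plot a mono signal with the y-axis fitted to its own peak amplitude,
    so quiet files are not flattened by a fixed [-1, 1] scale."""
    time_ms = np.arange(len(signal)) / sampling_freq * 1000.0
    fig, ax = plt.subplots()
    ax.plot(time_ms, signal, color="black", linewidth=0.5)
    # Fit the y-axis to this file's actual peak (with 10% headroom);
    # the 1e-9 floor guards against an all-silence file
    peak = max(np.max(np.abs(signal)), 1e-9)
    ax.set_ylim(-1.1 * peak, 1.1 * peak)
    ax.set_xlim(0, time_ms[-1])
    ax.set_xlabel("Time (ms)")
    ax.set_ylabel("Amplitude")
    fig.savefig(out_path, dpi=150)
    plt.close(fig)  # free the figure so 700+ files don't exhaust memory
```

Closing each figure after saving matters here: with hundreds of files, leaving figures open would accumulate memory.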
How can I modify this code to increase the sensitivity of the graph, so that the differences between two .wav files become more pronounced?
Thanks!
[UPDATE]
I have tried using
plt.xlim((0, 16000))
but I need a way to change the scale of each unit, so that the graph fills in when I change the x-axis from 0 to 16000.

Answer 0: (score: 1)
If the question is how to limit the scale on the x-axis, say between 0 and 1000, you can do this:
plt.xlim((0, 1000))
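Building on that, here is a minimal sketch of how `xlim` and `ylim` combine to zoom into one region of a plot (the signal here is a made-up 440 Hz tone, just for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Hypothetical signal: 100 ms of a quiet 440 Hz tone at 44.1 kHz
fs = 44100
t = np.arange(int(0.1 * fs)) / fs * 1000.0  # time axis in milliseconds
y = 0.02 * np.sin(2 * np.pi * 440 * t / 1000.0)

plt.plot(t, y, color="black")
plt.xlim(0, 10)          # show only the first 10 ms
plt.ylim(-0.05, 0.05)    # zoom the amplitude axis as well
plt.xlabel("Time (ms)")
plt.ylabel("Amplitude")
plt.savefig("zoomed.jpg")
```

Zooming the y-axis (`ylim`) is what most directly addresses the "they all look the same" problem: quiet signals plotted on the default scale collapse into a thin band, and tightening `ylim` around the actual amplitude makes their shape visible.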