Librosa Keras Python neural network error: on_batch_end() is slow compared to the batch update

Date: 2019-01-31 04:30:31

Tags: python machine-learning keras neural-network librosa

I recently tried to run an experiment in which a neural network, written with Keras in the Python IDE IDLE, is used to analyze the GTZAN song dataset. I was trying to change the layers to see whether that had any effect on performance. My experiment is based on an article that describes the basics of the project in detail:

https://medium.com/@navdeepsingh_2336/identifying-the-genre-of-a-song-with-neural-networks-851db89c42f0

The program used in this experiment is shown below:

import librosa
import librosa.feature
import librosa.display
import glob
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils.np_utils import to_categorical

def display_mfcc(song):
    y, _ = librosa.load(song)
    mfcc = librosa.feature.mfcc(y)

    plt.figure(figsize=(10, 4))
    librosa.display.specshow(mfcc, x_axis='time', y_axis='mel')
    plt.colorbar()
    plt.title(song)
    plt.tight_layout()
    plt.show()


def extract_features_song(f):
    y, _ = librosa.load(f)

    mfcc = librosa.feature.mfcc(y)
    mfcc /= np.amax(np.absolute(mfcc))

    return np.ndarray.flatten(mfcc)[:25000]

def generate_features_and_labels():
    all_features = []
    all_labels = []
    genres = ['blues', 'classical', 'country', 'disco', 'hiphop',
              'jazz', 'metal', 'pop', 'reggae', 'rock']

    for genre in genres:
        sound_files = glob.glob('genres/'+genre+'/*.au')
        print('Processing %d songs in %s genre...' %
              (len(sound_files), genre))
        for f in sound_files:
            features = extract_features_song(f)
            all_features.append(features)
            all_labels.append(genre)

    # Map each genre name to an integer id, then one-hot encode the ids
    label_uniq_ids, label_row_ids = np.unique(all_labels, return_inverse=True)
    label_row_ids = label_row_ids.astype(np.int32, copy=False)
    onehot_labels = to_categorical(label_row_ids, len(label_uniq_ids))

    return np.stack(all_features), onehot_labels


features, labels = generate_features_and_labels()

print(np.shape(features))
print(np.shape(labels))

training_split = 0.8

alldata = np.column_stack((features, labels))

np.random.shuffle(alldata)
splitidx = int(len(alldata) * training_split)
train, test = alldata[:splitidx,:], alldata[splitidx:,:]

print(np.shape(train))
print(np.shape(test))

train_input = train[:,:-10]
train_labels = train[:,-10:]

test_input = test[:,:-10]
test_labels = test[:,-10:]

print(np.shape(train_input))
print(np.shape(train_labels))

model = Sequential([
    Dense(100, input_dim=np.shape(train_input)[1]),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
    ])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
print(model.summary())

model.fit(train_input, train_labels, epochs=10, batch_size=32,
          validation_split=0.2)

loss, acc = model.evaluate(test_input, test_labels, batch_size=32)

print("Done!")
print("Loss: %.4f, accuracy: %.4f" % (loss, acc))

Python started printing the expected output:

Using TensorFlow backend.
Processing 100 songs in blues genre...
Processing 100 songs in classical genre...
Processing 100 songs in country genre...
Processing 100 songs in disco genre...
Processing 100 songs in hiphop genre...
Processing 100 songs in jazz genre...
Processing 100 songs in metal genre...
Processing 100 songs in pop genre...
Processing 100 songs in reggae genre...
Processing 100 songs in rock genre...
(1000, 25000)
(1000, 10)
(800, 25010)
(200, 25010)
(800, 25000)
(800, 10)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 100)               2500100   
_________________________________________________________________
activation_1 (Activation)    (None, 100)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1010      
_________________________________________________________________
activation_2 (Activation)    (None, 10)                0         
=================================================================
Total params: 2,501,110
Trainable params: 2,501,110
Non-trainable params: 0
_________________________________________________________________

None
Train on 640 samples, validate on 160 samples
Epoch 1/10

32/640 [>.............................] - ETA: 7s - loss: 2.3115 - acc: 0.0625
64/640 [==>...........................] - ETA: 4s - loss: 3.3871 - acc: 0.1094
96/640 [===>..........................] - ETA: 3s - loss: 3.2331 - acc: 0.1562
128/640 [=====>........................] - ETA: 3s - loss: 2.9779 - acc: 0.1797
160/640 [======>.......................] - ETA: 2s - loss: 2.7778 - acc: 0.1938
192/640 [========>.....................] - ETA: 2s - loss: 2.6937 - acc: 0.2031
224/640 [=========>....................] - ETA: 2s - loss: 2.5870 - acc: 0.2232
256/640 [===========>..................] - ETA: 2s - loss: 2.5168 - acc: 0.2305
288/640 [============>.................] - ETA: 1s - loss: 2.5075 - acc: 0.2153

But this output was periodically interrupted, in between the epoch progress lines, by the following warning message:

Warning (from warnings module):
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/callbacks.py", line 122
    % delta_t_median)
UserWarning: Method on_batch_end() is slow compared to the batch update (0.102488). Check your callbacks.

I am not sure how to resolve this problem. Any help would be appreciated.
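The only idea I have come up with so far (untested, and only a guess) is that the per-batch progress bar Keras prints may itself be what is slow inside the IDLE console, since I am not using any custom callbacks. A minimal sketch of that guess, changing only standard fit() arguments and optionally silencing the warning:

import warnings

# Guess: the built-in progress bar (redrawn after every batch) is what makes
# on_batch_end() slow when printing to the IDLE console. verbose=2 prints a
# single summary line per epoch instead of a per-batch progress bar.
model.fit(train_input, train_labels, epochs=10, batch_size=32,
          validation_split=0.2, verbose=2)

# The warning does not stop training, so it could also simply be silenced:
warnings.filterwarnings('ignore', message='Method on_batch_end')

Would that just hide the symptom, or is it a reasonable workaround here?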

0 Answers:

No answers yet