I want to plot the output of this simple neural network:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)
model.test_on_batch(x_test, y_test)
model.metrics_names
I have already plotted the training and validation accuracy and loss:
import matplotlib.pyplot as plt

print(history.history.keys())
# "Accuracy"
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
Now I want to add and plot the accuracy on the test set, from model.test_on_batch(x_test, y_test), but from model.metrics_names I only get the same 'acc' value that I already use for plotting the training accuracy. How can I plot the test set accuracy?
Answer 0 (score: 6)
It is the same value because you are training on the test set, not on the training set. Don't do that; train on the training set instead:
history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)
should become:
history = model.fit(x_train, y_train, nb_epoch=10, validation_split=0.2, shuffle=True)
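If you then also want the test-set accuracy in the same figure, one option is to evaluate once after training and draw it as a horizontal reference line. This is only a minimal sketch, assuming x_test/y_test are a held-out test set, matplotlib is imported as above, and the history keys are 'acc'/'val_acc' as in the question:

# Evaluate on the held-out test set after training on the training set only
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)

plt.plot(history.history['acc'], label='train')
plt.plot(history.history['val_acc'], label='validation')
plt.axhline(test_acc, color='r', linestyle='--', label='test')  # single test-set value
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(loc='upper left')
plt.show()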
Answer 1 (score: 2)
import keras
from matplotlib import pyplot as plt

# model1 is assumed to be an already-compiled Keras model; train_x/train_y are the training data
history = model1.fit(train_x, train_y, validation_split=0.1, epochs=50, batch_size=4)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
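Note: in newer Keras/tf.keras versions the history keys are 'accuracy' and 'val_accuracy' rather than 'acc' and 'val_acc'. If the plots come up empty, check the keys first:

print(history.history.keys())  # e.g. dict_keys(['loss', 'acc', 'val_loss', 'val_acc'])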
Answer 2 (score: 0)
Validate the model on the test data as shown below, and then plot the accuracy and loss:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, nb_epoch=10, validation_data=(X_test, y_test), shuffle=True)
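With validation_data=(X_test, y_test), history.history records 'val_loss' and 'val_acc' computed on the test set after every epoch, so the same plotting code as above applies. A minimal sketch, assuming the 'acc'/'val_acc' key names used in the question:

plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()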
Answer 3 (score: 0)
Try:
import pandas as pd
from matplotlib import pyplot as plt

pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.show()
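This plots every series recorded in history.history (loss, accuracy, and their val_ counterparts) on one figure, so if the model was validated on the test data as in Answer 2, the val_ curves are the test-set values.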