I am new to machine learning and built a neural network for a regression output. I have ~95,000 training examples and ~24,000 test examples. I would like to know how to evaluate my model and obtain the training and test errors. How can I tell how accurate this regression model is? My y values range between 100 and 200, and X has 9 input features in the dataset.
Here is my code:
import pandas as pd
from keras.layers import Dense, Activation,Dropout
from keras.models import Sequential
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from matplotlib import pyplot
# Importing the dataset
dataset = pd.read_csv('data2csv.csv')
X = dataset.iloc[:,1:10].values
y = dataset.iloc[:, :1].values
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Carve the validation set out of the training portion (not out of the full data),
# so it does not overlap with the test set
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_val = sc.transform(X_val)   # scale the validation set with the training statistics
X_test = sc.transform(X_test)
# Initialising the ANN
model = Sequential()
# Adding the input layer and the first hidden layer
model.add(Dense(10, activation = 'relu', input_dim = 9))
# Adding the second hidden layer
model.add(Dense(units = 5, activation = 'sigmoid'))
model.add(Dropout(0.2))
# Adding the third hidden layer
model.add(Dense(units = 5, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(units = 5, activation = 'relu'))
model.add(Dense(units = 5, activation = 'relu'))
# Adding the output layer
model.add(Dense(units = 1))
#model.add(Dense(1))
# Compiling the ANN
model.compile(optimizer = 'adam', loss = 'mean_squared_error',metrics=['mae','mse','mape','cosine'])
# Fitting the ANN to the Training set
history=model.fit(X_train, y_train,validation_data=(X_val, y_val) ,batch_size = 1000, epochs = 100)
test_loss = model.evaluate(X_test,y_test)
loss = history.history['loss']
acc = history.history['mean_absolute_error']
val_loss = history.history['val_loss']
val_acc = history.history['val_mean_absolute_error']
mape_loss=history.history['mean_absolute_percentage_error']
cosine_los=history.history['cosine_proximity']
pyplot.plot(history.history['mean_squared_error'])
pyplot.plot(history.history['mean_absolute_error'])
pyplot.plot(history.history['mean_absolute_percentage_error'])
pyplot.plot(history.history['cosine_proximity'])
pyplot.show()
epochs = range(1, len(loss)+1)
plt.plot(epochs, loss, 'ro', label='Training loss')
plt.legend()
plt.show()
y_pred = model.predict(X_test)
plt.plot(y_test, color = 'red', label = 'Real data')
plt.plot(y_pred, color = 'blue', label = 'Predicted data')
plt.title('Prediction')
plt.legend()
plt.show()
My test loss after evaluating the model. Note that, as the code shows, the model is compiled with one loss and four additional metrics, so evaluate returns five values:
1) 84.69654303799824 2) 7.030169963975834 3) 84.69654303799824 4) 5.241855282313331 5) -0.9999999996023872
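For reference, the order of these numbers can be checked against model.metrics_names; a minimal sketch (assuming the model compiled above, where the first value is the mean_squared_error loss):
# Pair every value returned by evaluate() with its metric name
results = model.evaluate(X_test, y_test)
for name, value in zip(model.metrics_names, results):
    print(name, value)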
Answer 0 (score: 1)
To evaluate the model, you can use the evaluate method:
test_loss = model.evaluate(X_test, y_test)
It returns the loss on the given test data, computed with the same loss function you used during training (i.e. mean_squared_error).
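Because that loss is a mean squared error, a small follow-up sketch (assuming the evaluate call above) is to take its square root to get an RMSE in the same units as y, which is easier to interpret when y ranges roughly from 100 to 200:
import numpy as np
# evaluate() returns a single number if no extra metrics were compiled,
# otherwise a list whose first entry is the loss (MSE here)
test_mse = test_loss[0] if isinstance(test_loss, list) else test_loss
print('Test RMSE (same units as y):', np.sqrt(test_mse))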
Further, if you want the training loss at the end of each epoch, you can get it from the History object which is returned by the fit method:
history = model.fit(...)
loss = history.history['loss']
loss is a list containing the loss value at the end of each training epoch. If you passed validation data when fitting the model (i.e. model.fit(..., validation_data=(X_val, y_val))) or compiled any additional metric such as mean_absolute_error (i.e. model.compile(..., metrics=['mae'])), you can access their values as well:
acc = history.history['mae']
val_loss = history.history['val_loss']
val_acc = history.history['val_mae']
Bonus: plotting the training loss curve:
epochs = range(1, len(loss)+1)
plt.plot(epochs, loss, 'ro', label='Training loss')
plt.legend()
plt.show()
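A small extension of the same idea, assuming validation data was passed to fit as shown above, is to plot the training and validation loss together, which makes over- or under-fitting easier to spot:
# Training loss vs. validation loss per epoch
val_loss = history.history['val_loss']
plt.plot(epochs, loss, 'ro', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss (MSE)')
plt.legend()
plt.show()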
Answer 1 (score: 0)
To display the validation loss while training:
model.fit(X_train, y_train, batch_size = 1000, epochs = 100, validation_data = (X_val, y_val))
I don't think you can easily get an "accuracy" by plotting, because your input is 9-dimensional. You could plot the predicted y against each individual feature, just turn off the line connecting the points, i.e. plt.plot(x, y, 'k.'); note the 'k.', so no line is drawn. I am not sure how useful that is, though.
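As a complementary way to quantify how accurate the regression is (a sketch that goes beyond the answers above, using standard scikit-learn metrics), you can report the R² score and plot predicted against actual values; points near the diagonal mean good predictions:
from sklearn.metrics import r2_score

y_pred = model.predict(X_test)
print('R^2 on the test set:', r2_score(y_test, y_pred))

# Predicted vs. actual scatter; the dashed red line is the ideal y_pred == y_test
plt.scatter(y_test, y_pred, s=5, color='k')
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--')
plt.xlabel('Actual y')
plt.ylabel('Predicted y')
plt.show()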