My network produces the same output for every prediction. I have roughly 49,000 data samples in a pandas DataFrame. How can I fix this?
# Input data: X.as_matrix() => rows of 8-dimensional feature vectors
# One example: [1.50000000e+00, 3.00000000e+00, 6.00000000e+00, 2.40000000e+01, 9.50000000e+01, 3.00000000e+03, 5.00000000e+00, 1.50000000e+00]
import tensorflow as tf
import tflearn
with tf.Graph().as_default():
    net = tflearn.input_data([None, 8])
    net = tflearn.fully_connected(net, 20, activation='softmax', weights_init='normal', regularizer='L2', weight_decay=0.001)
    net = tflearn.fully_connected(net, 3, activation='softmax', weights_init='normal')
    sgd = tflearn.Adam(learning_rate=0.01)
    net = tflearn.regression(net, optimizer=sgd, loss='categorical_crossentropy')
    model = tflearn.DNN(net)
    model.fit(X.as_matrix(), Y, show_metric=True, batch_size=10, n_epoch=2, snapshot_epoch=False)
    print(model.predict([X.as_matrix()[1]]))
    print(model.predict([X.as_matrix()[2]]))
    print(model.predict([X.as_matrix()[3]]))
Result:
[0.6711940169334412,0.24268993735313416,0.08611597120761871]
[0.6711940169334412,0.24268993735313416,0.08611597120761871]
[0.6711940169334412,0.24268993735313416,0.08611597120761871]
Actual:
[ 0, 1, 0]
[ 1, 0, 0]
[ 0, 0, 1]
Answer 0 (score: 0)
Try using sigmoid or relu instead of softmax; I got better predictions with those two. Maybe you want sigmoid on the first layer and sigmoid on the second as well. Just play around with them and combine them until you get better predictions. You can also try other loss functions.
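As an illustration only, here is a minimal sketch of that suggestion applied to the network from the question, assuming X and Y are the same feature matrix and one-hot labels as above. The hidden layer's activation is switched from softmax to relu (sigmoid can be swapped in the same way), while the 3-unit output keeps softmax so it still pairs with categorical_crossentropy. Treat it as a starting point for experimentation, not a definitive fix.

import tensorflow as tf
import tflearn

with tf.Graph().as_default():
    net = tflearn.input_data([None, 8])
    # Hidden layer: relu (or sigmoid) instead of softmax, as suggested above
    net = tflearn.fully_connected(net, 20, activation='relu', weights_init='normal', regularizer='L2', weight_decay=0.001)
    # Output layer: softmax over the 3 classes, paired with categorical_crossentropy
    net = tflearn.fully_connected(net, 3, activation='softmax', weights_init='normal')
    adam = tflearn.Adam(learning_rate=0.01)
    net = tflearn.regression(net, optimizer=adam, loss='categorical_crossentropy')
    model = tflearn.DNN(net)
    # X and Y are assumed to come from the question's DataFrame
    model.fit(X.as_matrix(), Y, show_metric=True, batch_size=10, n_epoch=2, snapshot_epoch=False)
    print(model.predict([X.as_matrix()[1]]))

If the outputs still look identical, experimenting with the loss function (as suggested above) and with different activation combinations on both layers is the next thing to try.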