I'm new to TensorFlow, so I'm trying to get my hands dirty by working through a binary classification problem on Kaggle. I trained the model with a sigmoid function and get very good accuracy at test time, but when I try to export the predictions to a DataFrame for submission I get the error below. I've attached the code along with the predictions and output. Please suggest what I'm doing wrong; I suspect it has something to do with my sigmoid function. Thanks.
This is the output of the predictions... the expected values are 1s and 0s:
INFO:tensorflow:Restoring parameters from ./movie_review_variables
Prections are [[3.8743019e-07]
[9.9999821e-01]
[1.7650980e-01]
...
[9.9997473e-01]
[1.4901161e-07]
[7.0333481e-06]]
#Importing tensorflow
import tensorflow as tf
#defining hyperparameters
learning_rate = 0.01
training_epochs = 1000
batch_size = 100
num_labels = 2
num_features = 5000
train_size = 20000
#defining the placeholders and encoding the y placeholder
X = tf.placeholder(tf.float32, shape=[None, num_features])
Y = tf.placeholder(tf.int32, shape=[None])
y_oneHot = tf.one_hot(Y, 1)
#defining the model parameters -- weight and bias
W = tf.Variable(tf.zeros([num_features, 1]))
b = tf.Variable(tf.zeros([1]))
#defining the sigmoid model and setting up the learning algorithm
y_model = tf.nn.sigmoid(tf.add(tf.matmul(X, W), b))
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_model, labels=y_oneHot)
train_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
#defining operation to measure success rate
correct_prediction = tf.equal(tf.argmax(y_model, 1), tf.argmax(y_oneHot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#saving variables
saver = tf.train.Saver()
#executing the graph and saving the model variables
with tf.Session() as sess: #new session
    tf.global_variables_initializer().run()
    #Iteratively updating parameters batch by batch
    for step in range(training_epochs * train_size // batch_size):
        offset = (step * batch_size) % train_size
        batch_xs = x_train[offset:(offset + batch_size), :]
        batch_labels = y_train[offset:(offset + batch_size)]
        #run optimizer on batch
        err, _ = sess.run([cost, train_optimizer], feed_dict={X: batch_xs, Y: batch_labels})
        if step % 1000 == 0:
            print(step, err) #print ongoing result
    #Print final learned parameters
    w_val = sess.run(W)
    print('w', w_val)
    b_val = sess.run(b)
    print('b', b_val)
    print('Accuracy', accuracy.eval(feed_dict={X: x_test, Y: y_test}))
    save_path = saver.save(sess, './movie_review_variables')
    print('Model saved in path {}'.format(save_path))
#creating csv file for kaggle submission
with tf.Session() as sess:
    saver.restore(sess, './movie_review_variables')
    predictions = sess.run(y_model, feed_dict={X: test_data_features})
    subm2 = pd.DataFrame(data={'id': test['id'], 'sentiment': predictions})
    subm2.to_csv('subm2nlp.csv', index=False, quoting=3)
    print("I am done predicting")
INFO:tensorflow:Restoring parameters from ./movie_review_variables
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-85-fd74ed82109c> in <module>()
5 # print('Prections are {}'.format(predictions))
6
----> 7 subm2 = pd.DataFrame(data={'id':test['id'], 'sentiment':predictions})
8 subm2.to_csv('subm2nlp.csv', index=False, quoting=3)
9 print("I am done predicting")
Exception: Data must be 1-dimensional
Answer 0 (score: 0)
You can look at the definition of the sigmoid function: it always produces a continuous output. If you want to discretize the output, you need to decide on a threshold, above which you set the prediction to 1 and below which you set it to 0.
pred = tf.math.greater(y_model, tf.constant(0.5))
However, you have to be careful about choosing an appropriate threshold, because your model is not guaranteed to be well calibrated in terms of probabilities. You can pick a suitable threshold based on which value discriminates best on some held-out validation set.
It is also important that this step is used for evaluation only, since you won't be able to backpropagate your loss signal through this operation.
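For instance, one rough way to pick that threshold (a sketch only; val_probs and y_val are hypothetical hold-out arrays, not variables from the question) is to sweep candidate values and keep the one with the best validation accuracy:
import numpy as np

# val_probs: sigmoid outputs on a held-out set, e.g. sess.run(y_model, feed_dict={X: x_val}).ravel()
# y_val:     the true 0/1 labels for that set (both names are hypothetical)
def pick_threshold(val_probs, y_val):
    thresholds = np.linspace(0.05, 0.95, 19)  # candidate cut-offs
    accs = [np.mean((val_probs > t).astype(int) == y_val) for t in thresholds]
    return thresholds[int(np.argmax(accs))]   # threshold with the best validation accuracy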
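Applied to the export step in the question, the thresholded op can be run instead of the raw sigmoid output; flattening the (N, 1) result to one dimension also avoids the pandas "Data must be 1-dimensional" error shown above. A minimal sketch (test_data_features and test['id'] are taken from the question's code):
pred = tf.math.greater(y_model, tf.constant(0.5))  # boolean tensor of shape (N, 1)
with tf.Session() as sess:
    saver.restore(sess, './movie_review_variables')
    hard_preds = sess.run(pred, feed_dict={X: test_data_features})
    labels = hard_preds.astype(int).ravel()        # 0/1 ints, flattened to 1-D for pandas
    subm2 = pd.DataFrame(data={'id': test['id'], 'sentiment': labels})
    subm2.to_csv('subm2nlp.csv', index=False, quoting=3)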
Answer 1 (score: 0)
You need to set some threshold for the sigmoid output, e.g. split the outputs into bins with a spacing of 0.5:
>>> import numpy as np
>>> x = np.linspace(0, 10, 20)
>>> x
array([ 0. , 0.52631579, 1.05263158, 1.57894737, 2.10526316,
2.63157895, 3.15789474, 3.68421053, 4.21052632, 4.73684211,
5.26315789, 5.78947368, 6.31578947, 6.84210526, 7.36842105,
7.89473684, 8.42105263, 8.94736842, 9.47368421, 10. ])
>>> q = 0.5 # The spacing between two discrete points
>>> y = q * np.round(x/q)
>>> y
array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5.5,
6. , 6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5, 10. ])
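For the binary case in the question this amounts to rounding the (N, 1) sigmoid outputs to the nearest of 0 and 1 and flattening them before building the submission frame, e.g. (a sketch using the predictions array from the question):
import numpy as np

labels = np.round(predictions).astype(int).ravel()  # 0/1 labels, 1-D as pandas expects
# labels can then be passed as the 'sentiment' column of the submission DataFrame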