I ran all the examples on the TensorFlow pages using the MNIST database. Now I'm trying to run my own example, and I really don't get it.
Say I have this CSV table: it has 5000 rows, the last column of each row is the label, and the rest of the row consists of several features.
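A few rows might look like this, for example (made-up values, just to show the layout: feature columns first, label in the last column):

0.12, 0.87, 0.05, 0.44, 3
0.91, 0.33, 0.76, 0.10, 17
0.28, 0.64, 0.52, 0.81, 0

Now to my first concrete example. I want to train a NN on this data, and here is what I have so far: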
import tensorflow as tf
import numpy as np
import csv
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# read training data
Training_file = open('onetest.csv', 'r', newline='')
reader = csv.reader(Training_file)
row = next(reader)
number_of_rows = 2431
x = tf.placeholder('float',[None,len(row[:-1])])
w = tf.Variable(tf.zeros([len(row[:-1]),25]))
b = tf.Variable(tf.zeros([25]))
model = tf.add(tf.matmul(x,w),b)
y_ = tf.placeholder('float',[25,None])
y = tf.nn.softmax(model)
cross_entropy= -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
index =1
batch_xs =[]
batch_ys= []
for row in reader:
    batch_xs.append(row[:-1])
    batch_ys.append(row[-1])
    print(len(batch_xs),len(batch_ys))
    index +=1
    if index%10==0:
        sess.run(train_step,feed_dict={x:batch_xs, y_:batch_ys})
        correct_prediction = tf.equal(tf.arg_max(y,1),tf.arg_max(y_,1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction,"float"))
        batch_xs.clear();
        batch_ys.clear();
This is the error I get:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-4dbaa38c4d9c> in <module>()
29 index +=1
30 if index%10==0:
---> 31 sess.run(train_step,feed_dict={x:batch_xs, y_:batch_ys})
32 correct_prediction = tf.equal(tf.arg_max(y,1),tf.arg_max(y_,1))
33 accuracy = tf.reduce_mean(tf.cast(correct_prediction,"float"))
c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
765 try:
766 result = self._run(None, fetches, feed_dict, options_ptr,
--> 767 run_metadata_ptr)
768 if run_metadata:
769 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
942 'Cannot feed value of shape %r for Tensor %r, '
943 'which has shape %r'
--> 944 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
945 if not self.graph.is_feedable(subfeed_t):
946 raise ValueError('Tensor %s may not be fed.' % subfeed_t)
ValueError: Cannot feed value of shape (9,) for Tensor 'Placeholder_17:0', which has shape '(25, ?)'
I've changed the value of the index, but that didn't fix it, so I think I'm misunderstanding something. Any explanation would be appreciated.
Answer 0 (score: 1)
In the line y_ = tf.placeholder('float',[25,None]), y_ is defined as a placeholder for data with 25 rows (and any number of columns). Then, because of the line if index%10==0: in your code, batch_ys is fed as a flat list of only 9 labels (hence the shape (9,) in the error message), which does not match that placeholder, and that is why you get this error.
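A minimal sketch of one way to make the shapes line up, assuming the labels in the last CSV column are integer class indices 0 to 24 (the concrete values below are made up):

import numpy as np
import tensorflow as tf

num_classes = 25                                   # matches the 25 output columns of w
y_ = tf.placeholder('float', [None, num_classes])  # one row per example, one column per class

# a hypothetical batch of integer labels read from the last CSV column
labels = [3, 17, 0, 24, 8, 11, 2, 5, 20]
batch_ys = np.eye(num_classes)[labels]             # one-hot encode -> shape (9, 25)

With this layout batch_ys always has shape (batch_size, 25), which matches the placeholder for any batch size, so sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) no longer complains about the shape.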
Answer 1 (score: 0)
So I figured it out. Here is the code; it might help someone out there:
import tensorflow as tf
import numpy as np
import csv
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# read training data
Training_file = open('onetest.csv', 'r', newline='')
reader = csv.reader(Training_file)
row = next(reader)
x = tf.placeholder('float',[None,len(row[:-1])])
w = tf.Variable(tf.zeros([len(row[:-1]),25]))
b = tf.Variable(tf.zeros([25]))
model = tf.add(tf.matmul(x,w),b)
y_ = tf.placeholder('float',[None,25])  # changed from [25,None] in the question: batch dimension first
y = tf.nn.softmax(model)
print(y)
cross_entropy= -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
index =0
batch_xs =[]
ys= []
for row in reader:
    batch_xs.append(row[:-1])
    ys.append(row[-1])
    index +=1
    if index%25==0:
        # every 25 rows, reshape the collected labels and run one training step
        batch_ys = np.reshape(ys,(1,25))
        sess.run(train_step,feed_dict={x:batch_xs,y_:batch_ys})
        correct_prediction = tf.equal(tf.arg_max(y,1),tf.arg_max(y_,1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction,"float"))
        print(sess.run(accuracy,feed_dict={x:batch_xs,y_:batch_ys}))
        batch_xs.clear()
        ys.clear()