I am new to TensorFlow and am having some difficulties with it. I am trying to do a simple classification job with a softmax model, similar to the MNIST example.
I tried to create batches of data and pass them into the run method. My first approach was to use
sess.run(train_step, feed_dict={x: feature_batch, y_: labels_batch})
which led to an error saying that tensors cannot be passed into feed_dict.
After some research, I found that I should instead use

feat, lab = sess.run([feature_batch, label_batch])
sess.run(train_step, feed_dict={x: feat, y_: lab})
After trying that, my script never terminates the computation, but it also does not print any errors.
Does anyone have a hint as to why it is not working?
The whole file looks like this:
def input_pipeline(filename='dataset.csv', batch_size=30, num_epochs=None):
    filename_queue = tf.train.string_input_producer([filename], num_epochs=num_epochs, shuffle=True)
    features, labels = read_from_cvs(filename_queue)

    min_after_dequeue = 10000
    capacity = min_after_dequeue + 3 * batch_size
    feature_batch, label_batch = tf.train.shuffle_batch(
        [features, labels], batch_size=batch_size, capacity=capacity,
        min_after_dequeue=min_after_dequeue)
    return feature_batch, label_batch
def tensorflow():
    x = tf.placeholder(tf.float32, [None, num_attributes])
    W = tf.Variable(tf.zeros([num_attributes, num_types]))
    b = tf.Variable(tf.zeros([num_types]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    y_ = tf.placeholder(tf.float32, [None, num_types])

    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    feature_batch, label_batch = input_pipeline()
    for _ in range(1200):
        feat, lab = sess.run([feature_batch, label_batch])
        sess.run(train_step, feed_dict={x: feat, y_: lab})

    coord.request_stop()
    coord.join(threads)

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    #print(sess.run(accuracy, feed_dict={x: feature_batch, y_: label_batch}))
Answer 0 (score: 2)
You can use the tensors directly in the model definition. For example:
def tensorflow():
    x, y_ = input_pipeline()
    W = tf.Variable(tf.zeros([num_attributes, num_types]))
    b = tf.Variable(tf.zeros([num_types]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)

    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    for _ in range(1200):
        sess.run(train_step)
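With this wiring, the accuracy evaluation that is commented out at the end of the question also becomes a plain sess.run() call, because x and y_ are filled by the queue runners rather than by feed_dict. A minimal sketch, reusing the names from the question and assuming it is placed after the training loop inside tensorflow():

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy))  # evaluated on the next batch from the pipeline

    coord.request_stop()  # shut down the background threads cleanly
    coord.join(threads)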
Alternatively, you can use placeholders in tf.train.shuffle_batch. For example:
#...omit
features_placeholder = tf.placeholder(...)
labels_placeholder = tf.placeholder(...)
x, y_ = tf.train.shuffle_batch(
    [features_placeholder, labels_placeholder], batch_size=batch_size, capacity=capacity,
    min_after_dequeue=min_after_dequeue)
W = tf.Variable(tf.zeros([num_attributes, num_types]))
b = tf.Variable(tf.zeros([num_types]))
#...omit

for _ in range(1200):
    sess.run(train_step, feed_dict={features_placeholder: ..., labels_placeholder: ...})
Answer 1 (score: 2)
I suspect the problem is the order of these two lines:
threads = tf.train.start_queue_runners(coord=coord)
feature_batch, label_batch = input_pipeline()
The call to tf.train.start_queue_runners() starts background threads for all of the input pipeline stages that have been defined up to that point. The call to input_pipeline() creates two new input pipeline stages (in the calls to tf.train.string_input_producer() and tf.train.shuffle_batch()). This means that the background threads for the two new stages are never started, and the program hangs.
The solution is to reverse the order of these lines:
feature_batch, label_batch = input_pipeline()
threads = tf.train.start_queue_runners(coord=coord)
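Applied to the tensorflow() function from the question, the fix amounts to moving the input_pipeline() call above start_queue_runners(); a sketch of the corrected section, with the rest of the function unchanged:

    coord = tf.train.Coordinator()
    feature_batch, label_batch = input_pipeline()         # define the pipeline stages first
    threads = tf.train.start_queue_runners(coord=coord)   # then start their background threads

    for _ in range(1200):
        feat, lab = sess.run([feature_batch, label_batch])
        sess.run(train_step, feed_dict={x: feat, y_: lab})

    coord.request_stop()
    coord.join(threads)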