How do I swap 'mnist.train.next_batch' for 'tf.train.batch' in TensorFlow?

Asked: 2017-07-21 02:36:16

Tags: python tensorflow neural-network deep-learning

tensorflow 1.2

I started learning TensorFlow with the MNIST dataset that ships with tf.

I then changed the dataset, and I ran into a problem swapping 'batch_xs, batch_ys = mnist.train.next_batch(batch_size)' for 'batch_xs, batch_ys = tf.train.batch([X, Y], batch_size=batch_size)'.

I would like to know how to apply minibatching in TensorFlow.

Error message:

TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles.

train_data.shape, train_labels.shape # numpy
# ((10000, 20, 20, 3), (10000, 2))


X = tf.placeholder(tf.float32, [10000, 20, 20, 3])
Y = tf.placeholder(tf.float32, [10000, 10])

W1 = tf.Variable(tf.random_normal([4, 4, 3, 32], stddev=0.01))

L1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding = 'SAME')
L1 = tf.nn.relu(L1)
L1 = tf.nn.max_pool(L1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
L1 = tf.reshape(L1, [-1, 10 * 10 * 32])

W2 = tf.get_variable('W2', shape=[10 * 10 * 32, 10], initializer=tf.contrib.layers.xavier_initializer())
b = tf.Variable(tf.random_normal([10]))

hypothesis = tf.matmul(L1, W2) + b

learning_rate = 0.001
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=hypothesis, labels = Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# hyper parameters
learning_rate = 0.001
training_epochs = 5
batch_size = 100

for epoch in range(training_epochs):
    avg_cost = 0
    total_batch = int(10000 / batch_size)
    for i in range(total_batch):
        # batch_xs, batch_ys = tf.train.batch([X, Y], batch_size)
        batch_xs, batch_ys = tf.train.batch([X, Y], batch_size=batch_size)
        feed_dict = {X: batch_xs, Y: batch_ys}
        c, _ = sess.run([cost, optimizer], feed_dict=feed_dict)
        avg_cost += c / total_batch
    print('Epoch: ', '%04d' % (epoch + 1), 'cost: ', '{:.9f}'.format(avg_cost))


2 Answers:

Answer 0 (score: 0)

batch_xs, batch_ys = tf.train.batch([X, Y], batch_size=batch_size) is still part of the graph-construction phase; by itself it only adds ops to the graph. Nothing actually happens until you call sess.run(...) on the resulting tensors.

You can go through the TensorFlow docs and tutorials for how to apply minibatching; a sketch of the usual in-memory approach follows.
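A minimal sketch of that approach, using the numpy arrays train_data and train_labels from the question, and assuming the placeholders X and Y are declared with a first dimension of None (the question fixes it at 10000, which would also reject a 100-example feed) and that cost and optimizer are defined as above:

# Sketch: slice the numpy arrays into minibatches and feed the slices.
import tensorflow as tf

batch_size = 100
training_epochs = 5
num_examples = train_data.shape[0]  # 10000 in the question

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epochs):
        avg_cost = 0
        total_batch = num_examples // batch_size
        for i in range(total_batch):
            start = i * batch_size
            end = start + batch_size
            batch_xs = train_data[start:end]    # numpy ndarray: a valid feed value
            batch_ys = train_labels[start:end]
            c, _ = sess.run([cost, optimizer],
                            feed_dict={X: batch_xs, Y: batch_ys})
            avg_cost += c / total_batch
        print('Epoch:', '%04d' % (epoch + 1), 'cost:', '{:.9f}'.format(avg_cost))

Each slice is a numpy ndarray, which is exactly the kind of feed value the error message asks for.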

Answer 1 (score: 0)

You should not call graph-building TensorFlow APIs inside your training loop, as you do with batch_xs, batch_ys = tf.train.batch([X, Y], batch_size=batch_size) in your code.

That call makes batch_xs and batch_ys tf.Tensor objects, which cannot be passed as feed values. What you can do is build batch_xs, batch_ys = tf.train.batch([X, Y], batch_size=batch_size) once as part of the graph, evaluate batch_xs and batch_ys with sess.run, and then feed the resulting numpy arrays, as in the sketch below.
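A minimal sketch of that queue-based pattern, with assumed variable names taken from the question; tf.train.slice_input_producer (not mentioned in the question) is one common way in TensorFlow 1.x to turn in-memory numpy arrays into the per-example tensors that tf.train.batch expects:

# Sketch: build the batching ops once, outside the training loop.
single_x, single_y = tf.train.slice_input_producer(
    [train_data, train_labels], num_epochs=training_epochs, shuffle=True)
batch_xs_op, batch_ys_op = tf.train.batch(
    [single_x, single_y], batch_size=batch_size)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())  # needed for the num_epochs counter
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            # Evaluate the batch tensors first ...
            xs, ys = sess.run([batch_xs_op, batch_ys_op])
            # ... then feed the resulting numpy arrays to the placeholders.
            c, _ = sess.run([cost, optimizer], feed_dict={X: xs, Y: ys})
    except tf.errors.OutOfRangeError:
        pass  # the input producer ran out of epochs
    finally:
        coord.request_stop()
        coord.join(threads)

With this layout the batching ops are built once, the queue runners fill them, and only plain numpy arrays ever reach feed_dict, which resolves the TypeError. Again, this assumes the placeholders were declared with a first dimension of None so they accept 100-example batches.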