'numpy.dtype' object has no attribute 'base_dtype'

Date: 2019-04-10 06:16:22

Tags: tensorflow

I am new to TensorFlow, and this is what my code currently looks like:

import tensorflow as tf
import tensorflow.contrib.learn as learn
mnist = learn.datasets.mnist.read_data_sets('MNIST-data',one_hot=True)
import numpy as np
M = tf.Variable(tf.zeros([784,10]))
B = tf.Variable(tf.zeros([10]))
image_holder = tf.placeholder(tf.float32,[None,784])
label_holder = tf.placeholder(tf.float32,[None,10])
predicted_value = tf.add(tf.matmul(image_holder,M),B)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predicted_value, labels=label_holder))
learning_rate = 0.01
num_epochs = 1000
batch_size = 100
num_batches = int(mnist.train.num_examples/batch_size)
init = tf.global_variables_initializer()
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
with tf.Session() as sess:
    sess.run(init)
    for _ in range(num_epochs):
        for each_batch in range(num_batches):
            current_image, current_image_label = mnist.train.next_batch(batch_size)
            optimizer_value, loss = sess.run([optimizer, loss], feed_dict={image_holder: current_image, label_holder: current_image_label})
        print("The loss value is {} \n".format(loss))

But the problem I am running into is this strange error, which says:

'numpy.dtype' object has no attribute 'base_dtype'

I cannot figure out what is wrong with code that I thought was absolutely correct. Any help with this issue?

2 Answers:

Answer 0 (score: 1):

First, a few comments:

  • Variable initialization should always come at the end of graph construction.
  • The optimizer and the train op should be separate; it is not strictly necessary, but it is good practice.
  • Also, when running sess.run(variable), make sure you do not assign the result back to the same name. That is, make sure you are not doing variable = sess.run(variable), because that overwrites the graph op with its fetched value (see the sketch after this list).
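
Here is a minimal, self-contained sketch of that last pitfall (TF 1.x graph mode; the toy tensors are illustrative, not from the question):

import tensorflow as tf

x = tf.constant(3.0)
loss = tf.square(x)  # `loss` names an op in the graph

with tf.Session() as sess:
    loss = sess.run(loss)  # BAD: `loss` is now a numpy float32, not an op
    # sess.run(loss)       # a second run would fail, because `loss` is no
                           # longer a graph element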

The error here is the last one. So once fixed, the code might look like this:

M = tf.Variable(tf.zeros([784, 10]), dtype=tf.float32)
B = tf.Variable(tf.zeros([10]), dtype=tf.float32)

image_holder = tf.placeholder(tf.float32, [None, 784])
label_holder = tf.placeholder(tf.float32, [None, 10])
predicted_value = tf.add(tf.matmul(image_holder, M), B)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predicted_value, labels=label_holder))
learning_rate = 0.01
num_epochs = 1000
batch_size = 100
num_batches = int(mnist.train.num_examples / batch_size)

# Keep the optimizer and the train op separate
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
# Initialize variables at the end of graph construction
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for _ in range(num_epochs):
        for each_batch in range(num_batches):
            current_image, current_image_label = mnist.train.next_batch(batch_size)
            # Fetch into `loss_value` so the `loss` op is never overwritten
            optimizer_value, loss_value = sess.run([train_op, loss],
                                                   feed_dict={image_holder: current_image,
                                                              label_holder: current_image_label})
        print("The loss value is {} \n".format(loss_value))

Hope this helps.

Answer 1 (score: 0):

To be more explicit: the first time you execute sess.run([optimizer, loss]), you simply overwrite the node loss with its fetched value. So on the second pass of the for loop, the session sees a numpy value in place of the original loss op, which is what raises the 'numpy.dtype' object has no attribute 'base_dtype' error.
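
Concretely, fetching into a different Python name keeps the op intact across iterations. A minimal runnable sketch (toy constants standing in for the MNIST graph; assumes TF 1.x):

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

with tf.Session() as sess:
    for _ in range(3):
        # `loss_value` holds the numpy result; `loss` still names the graph op
        loss_value = sess.run(loss)
    print(loss_value)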