TensorFlow shape is incorrect

Time: 2017-05-30 03:52:27

Tags: machine-learning tensorflow deep-learning shape reshape

I've been experimenting with TensorFlow, but I keep getting errors about the shape of my data. I took my code from this YouTube tutorial: https://www.youtube.com/watch?v=PwAGxqrXSCs&list=PLQVvvaa0QuDfKTOs3Keq_kaG2P55YRn5v&index=47

My training data looks like this:

enc0 = np.array([[[1,2,3,4],[0,1,0,1],[-33,0,0,0],[1,1,1,1]],[[2,3,3,2],[0,0,0,0],[9,0,0,0],[0,0,0,1]]]) # shape (2,4,4)
ms0 = np.array([[1,6],[2,7]]) # shape (2,2)

My error is:

ValueError: Dimension size must be evenly divisible by 10 but is 4 for 'gradients/Reshape_grad/Reshape' (op: 'Reshape') with input shapes: [1,4], [2].

I think the error is happening because of the following lines:

x = tf.placeholder('float',[None,16])
y = tf.placeholder('float',[4])

enc = enc0.reshape([-1,16])
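As a quick sanity check (just a sketch, using enc0 and ms0 as defined above), printing the shapes shows which array lines up with which placeholder:

import numpy as np

enc0 = np.array([[[1,2,3,4],[0,1,0,1],[-33,0,0,0],[1,1,1,1]],
                 [[2,3,3,2],[0,0,0,0],[9,0,0,0],[0,0,0,1]]])
ms0 = np.array([[1,6],[2,7]])

print(enc0.reshape([-1,16]).shape)  # (2, 16) -- matches x: [None, 16]
print(ms0.shape)                    # (2, 2)  -- does not line up with y: [4]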

My full code is:

import numpy as np
import tensorflow as tf

enc0 = np.array([[[1,2,3,4],[0,1,0,1],[-33,0,0,0],[1,1,1,1]],[[2,3,3,2],[0,0,0,0],[9,0,0,0],[0,0,0,1]]])
ms0 = np.array([[1,6],[2,7]])

n_nodes_hl1 = 500 # hidden layer 1
n_nodes_hl2 = 500
n_nodes_hl3 = 500

n_classes = 10
batch_size = 100 # load 100 features at a time


x = tf.placeholder('float',[None,16]) 
y = tf.placeholder('float',[4])

enc = enc0.reshape([-1,16])
ms = ms0


def neuralNet(data):
    hl_1 = {'weights':tf.Variable(tf.random_normal([16, n_nodes_hl1])),
            'biases':tf.Variable(tf.random_normal([n_nodes_hl1]))}

    hl_2 = {'weights':tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
            'biases':tf.Variable(tf.random_normal([n_nodes_hl2]))}

    hl_3 = {'weights':tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
            'biases':tf.Variable(tf.random_normal([n_nodes_hl3]))}

    output_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
            'biases':tf.Variable(tf.random_normal([n_classes]))}

    l1 = tf.add(tf.matmul(data, hl_1['weights']), hl_1['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hl_2['weights']), hl_2['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2, hl_3['weights']), hl_3['biases'])
    l3 = tf.nn.relu(l3)

    ol = tf.matmul(l3, output_layer['weights']) + output_layer['biases']

    return ol


def train(x):
    prediction = neuralNet(x)
    print(prediction)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost) # learning rate = 0.001

    # cycles of feed forward and backprop
    num_epochs = 15

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(num_epochs):
            epoch_loss = 0
            for _ in range(int(enc.shape[0])):
                epoch_x,epoch_y = enc,ms
                _,c = sess.run([optimizer,cost],feed_dict={x:epoch_x,y:epoch_y})
                epoch_loss += c
            print('Epoch', epoch + 1, 'completed out of', num_epochs, '\nLoss:', epoch_loss, '\n')

        correct = tf.equal(tf.argmax(prediction,1),tf.argmax(y,1))
        accuracy = tf.reduce_mean(tf.cast(correct,'float'))

        print('Accuracy', accuracy.eval({x: enc, y: ms}))


train(x)

Any help with this error would be greatly appreciated.

1 Answer:

Answer 0 (score: 1)

The reason is that your network produces n_classes predictions (with n_classes equal to 10), while you are comparing them against only 4 values in the y placeholder. It should be enough to use

y = tf.placeholder('float', [10])

and then actually feed 10 values per example to that placeholder.
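For concreteness, here is a minimal sketch of the shape agreement softmax_cross_entropy_with_logits expects: logits and labels should both be [batch_size, n_classes]. One common setup is a [None, n_classes] placeholder so a batch of label rows can be fed. The dense layer below stands in for your network's output layer, and the one-hot labels are purely hypothetical, since the post doesn't say how the values in ms0 map to the 10 classes; the point is only that the label width has to match n_classes.

import numpy as np
import tensorflow as tf

n_classes = 10

x = tf.placeholder('float', [None, 16])
y = tf.placeholder('float', [None, n_classes])   # label rows as wide as n_classes

logits = tf.layers.dense(x, n_classes)           # stand-in for the network's output layer
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

# Hypothetical one-hot labels of depth 10, one row per training example.
labels = np.eye(n_classes)[[1, 2]]               # shape (2, 10)
features = np.random.rand(2, 16).astype('float32')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(cost, feed_dict={x: features, y: labels}))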