Error when running sigmoid cost function

Asked: 2019-07-31 11:36:43

Tags: python tensorflow deep-learning

I have a training set with 1000 rows that I am training with TensorFlow, and I am also trying to split it into mini-batches of size 32. While training I get the error below:

InvalidArgumentError: Incompatible shapes: [1000] vs. [32]   [[{{node logistic_loss_1/mul}}]]

In contrast, if I do not split the training data into mini-batches at all, or if I use a single mini-batch of size 1000, the code works fine.

I define the weights as tf.Variable objects and run a TensorFlow session. See the code below.


def sigmoid_cost(z,Y):

    print("Entered Cost")
    z = tf.squeeze(z)
    Y = tf.cast(Y_train,tf.float64)

    logits = tf.transpose(z)
    labels = (Y)

    print(logits.shape)
    print(labels.shape)

    return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=labels,logits=logits))


def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
          num_epochs = 1500, minibatch_size = 32, print_cost = True):

    hidden_layer = 4

    m,n = X_train.shape
    n_y = Y_train.shape[0]

    X = tf.placeholder(tf.float64,shape=(None,n), name="X")    
    Y = tf.placeholder(tf.float64,shape=(None),name="Y")     
    parameters = init_params(n)

    z4, parameters = fwd_model(X,parameters)
    cost = sigmoid_cost(z4,Y)
    num_minibatch = m/minibatch_size
    print("Getting Minibatches")
    num_minibatch = tf.cast(num_minibatch,tf.int32)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
    print("Gradient Defination Done")

    init = tf.global_variables_initializer()
    init_op = tf.initialize_all_variables()

    with tf.Session() as sess:
        sess.run(init)
        sess.run(init_op)
        for epoch in range(0,num_epochs):
            minibatches = []
            minibatches = minibatch(X_train,Y_train,minibatch_size)
            minibatch_cost = 0

            for i in range (0,len(minibatches)):
                (X_m,Y_m) = minibatches[i]
                Y_m = np.squeeze(Y_m)
                print("Minibatch %d X shape Y Shape ",i, X_m.shape,Y_m.shape)
                _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: X_m, Y: Y_m})
                print("Mini Batch Cost is ",minibatch_cost)
            epoch_cost = minibatch_cost/num_minibatch

            if print_cost == True and epoch % 100 == 0:
                print ("Cost after epoch %i: %f" % (epoch, epoch_cost))

    print(epoch_cost)
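
For completeness, the minibatch helper is a standard shuffle-and-slice split that returns a list of (X_m, Y_m) tuples; the following is only a minimal sketch of how it is assumed to work, not the exact original code:

import numpy as np

def minibatch(X, Y, batch_size):
    # Shuffle the rows, then slice into consecutive batches of batch_size
    # (the last batch may be smaller if m is not divisible by batch_size).
    m = X.shape[0]
    permutation = np.random.permutation(m)
    X_shuffled = X[permutation]
    Y_shuffled = Y[permutation]
    return [(X_shuffled[i:i + batch_size], Y_shuffled[i:i + batch_size])
            for i in range(0, m, batch_size)]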

For some reason, when the cost function runs, the sizes of the X and Y batches are being taken as 32 and 1000 (or vice versa). Any help would be appreciated.

1 Answer:

Answer 0 (score: 0)

I think you are getting the error because of the line Y = tf.cast(Y_train, tf.float64) in the sigmoid_cost function. Here Y_train has 1000 rows, but the loss function expects 32 rows (your mini-batch size).

It should be Y = tf.cast(Y, tf.float64). In fact, the cast is not needed here at all, since Y is already of type tf.float64. Check the following line:

Y = tf.placeholder(tf.float64,shape=(None),name="Y")

That is also why your code works fine when you use a single mini-batch of size 1000 (the full Y_train data).
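
A minimal corrected sketch of sigmoid_cost, keeping the same logic as your function but using the Y argument that is fed from the placeholder:

def sigmoid_cost(z, Y):
    # z: logits from the forward pass; Y: labels placeholder (already tf.float64)
    z = tf.squeeze(z)
    logits = tf.transpose(z)
    labels = Y  # use the placeholder, not Y_train, so the shape follows the fed mini-batch
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))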