TensorFlow sigmoid function not working

Time: 2018-08-21 16:20:42

Tags: tensorflow neural-network deep-learning sigmoid

I'm new to TensorFlow and neural networks.

In the code below, everything works fine. However, as soon as I uncomment #X21 = tf.sigmoid(X21) to apply the tf.sigmoid function, I get the strange result that all of my predictions equal 1. Why does that happen?

Note that the house prices I am predicting are in the thousands.
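For context on the symptom: the logistic sigmoid squashes any real input into the interval (0, 1) and saturates at 1 for even moderately large inputs, so pre-activations on the scale of house prices will all map to 1. A minimal NumPy sketch (my own helper, not the poster's graph):

```python
import numpy as np

def sigmoid(x):
    # logistic function: maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Pre-activations ranging from small to house-price scale (thousands)
z = np.array([-5.0, 0.0, 5.0, 50.0, 5000.0])
print(sigmoid(z))  # everything above ~40 is indistinguishable from 1.0
```

This is why a sigmoid on the output layer only makes sense for targets already scaled into (0, 1), e.g. probabilities, not raw prices.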

# Set model weights (cast to float64 to match train_X)
b1_1 = tf.cast(tf.Variable(np.random.randn(), name="bias1_1"), tf.float64)
b1_2 = tf.cast(tf.Variable(np.random.randn(), name="bias1_2"), tf.float64)
b2_1 = tf.cast(tf.Variable(np.random.randn(), name="bias2_1"), tf.float64)

W1_1 = tf.cast(tf.Variable(np.random.randn(train_X.shape[1], 1), name="weight1_1"), tf.float64)
W1_2 = tf.cast(tf.Variable(np.random.randn(train_X.shape[1], 1), name="weight1_2"), tf.float64)
W2_1 = tf.cast(tf.Variable(np.random.randn(2, 1), name="weight2_1"), tf.float64)

# Hidden layer: two linear units, each of shape (47, 1)
X11 = tf.add(tf.matmul(train_X, W1_1), b1_1)
X12 = tf.add(tf.matmul(train_X, W1_2), b1_2)
# Output layer: stack the two hidden units into shape (47, 2), then project to (47, 1)
X21 = tf.add(tf.matmul(tf.squeeze(tf.transpose(tf.stack((X11, X12)))), W2_1), b2_1)
#X21 = tf.sigmoid(X21)  # uncommenting this makes every prediction equal 1

# Placeholder for the targets; always fed (47 training examples, 1 output each)
Y = tf.placeholder('float64', shape=[47, 1])

cost = (tf.reduce_sum(tf.pow(X21-Y, 2))/(2*n_samples))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        sess.run(optimizer, feed_dict={Y: train_Y})
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            c = sess.run(cost, feed_dict={Y: train_Y})
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c))

    print("Optimization Finished!")

    training_cost = sess.run(cost, feed_dict={Y: train_Y})
    print("Training cost=", training_cost)
    line = sess.run(X21, feed_dict={Y: train_Y})
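A related point worth checking when debugging this: once the sigmoid saturates, its derivative s·(1−s) is effectively zero, so gradient descent can no longer move the predictions away from 1. A small NumPy sketch of the plain logistic derivative (my own illustration, not part of the posted graph):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # derivative of the logistic function: s * (1 - s)
    s = sigmoid(x)
    return s * (1.0 - s)

for z in (0.0, 5.0, 20.0):
    # gradient shrinks rapidly as |z| grows; at z=20 it is ~2e-9
    print(z, sigmoid_grad(z))
```

With targets in the thousands and unnormalized inputs, the pre-activations are far into this flat region, so both the outputs and the gradients pin near their limits.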

0 Answers:

No answers yet