How can I do simple logistic regression in TensorFlow?

Asked: 2016-05-04 14:52:12

Tags: python tensorflow

My input data for one step is a numpy array of 36 floats:

[-0.712982    1.14461327 -0.46141151 -0.39443004 -0.44848472 -0.65676075
  0.56058383 -0.61031222  0.43211082 -0.74852234  1.28183317  0.79719085
 -0.28156522  0.16901374 -0.73715878  0.69877005 -0.40633941  0.01085454
 -0.33675554 -0.37056464 -0.43088505  0.3327457  -0.15905562  0.72995877
  0.56962079  0.10286932  0.25698286  0.89823145 -0.12923111  0.3219386
  0.10118762  1.29127014 -0.22283298  0.75640506  0.79971719  0.60000002]

Part of my code:

X = tf.placeholder(tf.float32, (36))
Y = tf.placeholder(tf.float32)

# Create Model

# Set model weights
W = tf.Variable(tf.zeros([36]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

# Construct model
activation = tf.add(tf.matmul(X, W), b)

In this case tf.matmul does not work (ValueError: Shape (36,) must have rank 2). What do I need to change so that the activation comes out as a single float?
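
For reference, here is a minimal sketch (not part of the original question) of what the error message is asking for: tf.matmul requires rank-2 tensors, so the input could be treated as a 1x36 row vector and the weights as a 36x1 column vector, giving a 1x1 result. The placeholder and weight shapes below are my assumptions, not code from the post:

import tensorflow as tf

# Hypothetical rank-2 version: X as a 1x36 row vector, W as a 36x1 column vector.
X = tf.placeholder(tf.float32, [1, 36])
W = tf.Variable(tf.zeros([36, 1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

# matmul of (1, 36) x (36, 1) yields shape (1, 1), i.e. effectively one float.
activation = tf.add(tf.matmul(X, W), b)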

1 Answer:

Answer 0 (score: 1):

Just use:

activation = tf.add(tf.mul(X, W), b)
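
Applied to the 36-element input from the question, this could look like the sketch below. Note that tf.mul is element-wise, so the extra tf.reduce_sum that collapses the 36 products into a single value is my assumption, not something stated in the answer:

import tensorflow as tf

X = tf.placeholder(tf.float32, [36])
W = tf.Variable(tf.zeros([36]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

# tf.mul multiplies element-wise; summing the 36 products gives the dot
# product, so the activation is a single value (the reduce_sum is an
# assumption added here, not part of the answer above).
activation = tf.add(tf.reduce_sum(tf.mul(X, W)), b)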

Take a look at the simple linear regression example (and the others) in https://github.com/nlintz/TensorFlow-Tutorials/blob/master/1_linear_regression.py:

import tensorflow as tf
import numpy as np

trX = np.linspace(-1, 1, 101)
trY = 2 * trX + np.random.randn(*trX.shape) * 0.33 # create a y value which is approximately linear but with some random noise

X = tf.placeholder("float") # create symbolic variables
Y = tf.placeholder("float")

w = tf.Variable(0.0, name="weights") # create a shared variable (like theano.shared) for the weight matrix
y_model = tf.mul(X, w)
cost = tf.square(Y - y_model) # use square error for cost function
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost) # construct an optimizer to minimize cost and fit line to my data

# Launch the graph in a session
with tf.Session() as sess:
    # you need to initialize variables (in this case just the variable w)
    tf.initialize_all_variables().run()

    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})

    print(sess.run(w))  # It should be something around 2
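
To tie this back to the question title, here is a rough sketch (my own adaptation, not part of the answer) of how the same pattern might be extended to logistic regression on a length-36 input, using the same era of the TensorFlow API; the choice of loss, learning rate, and the dummy sample are assumptions:

import tensorflow as tf
import numpy as np

X = tf.placeholder(tf.float32, [36])   # one 36-float sample per step
Y = tf.placeholder(tf.float32)         # binary label, 0.0 or 1.0

W = tf.Variable(tf.zeros([36]), name="weight")
b = tf.Variable(0.0, name="bias")

# Dot product via element-wise multiply plus sum, then a sigmoid for the probability.
logit = tf.add(tf.reduce_sum(tf.mul(X, W)), b)
prediction = tf.sigmoid(logit)

# Cross-entropy loss for a single sample (loss and learning rate are assumptions).
cost = tf.nn.sigmoid_cross_entropy_with_logits(logit, Y)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    sample = np.random.randn(36).astype(np.float32)  # stand-in for the array above
    sess.run(train_op, feed_dict={X: sample, Y: 1.0})
    print(sess.run(prediction, feed_dict={X: sample}))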