TensorFlow logistic regression gives different outputs

Date: 2016-12-30 15:21:07

Tags: python numpy machine-learning tensorflow

Hi, I am trying to do logistic regression with TensorFlow (sorry if my code looks silly). I have written the cost function once in NumPy and once in TensorFlow, and I get different results from the same starting weights. Can someone help me?

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_blobs


DataSize = 1000

data, y = make_blobs(n_samples=1000, centers=2, n_features=2,
                     random_state=1, center_box=(-5.0, 5.0))
plt.scatter(data[:, 0], data[:, 1])
plt.show(block=False)
x = np.linspace(-1, 5, 1000)

b = np.ones([1, 1])
W = np.ones([2, 1])
asd = W * x.T + b
pred = np.dot(data, W) + b
plt.plot(x, asd[0])
plt.show(block=False)

result = 1 / (1 + np.exp(-pred))  # sigmoid
s = np.log(result)

J = -(y.T.dot(s) + (1 - y).T.dot(1 - s)) / 1000
print("cost in numpy", J)


with tf.variable_scope("scopi", reuse=True):

    X = tf.placeholder(tf.float32)
    Y = tf.placeholder(tf.float32)
    b = tf.Variable(tf.ones((1, 1)), name="bias")
    W = tf.Variable(tf.ones((1, 2)), name="weights")

    ypred = W * X + b
    hx = tf.reduce_sum(tf.sigmoid(ypred), reduction_indices=1)

    # cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
    J = -tf.reduce_sum(tf.mul(tf.transpose(Y), hx)
                       + tf.mul(tf.transpose(1 - Y), (1 - hx))) / 1000

    opti = tf.train.AdamOptimizer(0.1).minimize(J)

with tf.Session() as session:
    session.run(tf.initialize_all_variables())

    h = session.run(J, feed_dict={X: data, Y: y})
    print("cost in tensorflow", h)

    # epoch = 100
    # for i in range(epoch):
    #     for j in range(DataSize):
    #         session.run(opti, feed_dict={X: data[j], Y: y[j]})
    #
    #     if i % 10 == 0:
    #         a = session.run(J, feed_dict={X: data, Y: y})
    #         print("cost ", a)
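One concrete difference between the two versions: in the NumPy code `W` has shape `(2, 1)` and `pred = np.dot(data, W)` is a matrix product, whereas in the TensorFlow graph `W` has shape `(1, 2)` and `ypred = W * X + b` is an elementwise broadcast, which the `reduce_sum` then collapses per row (summing sigmoids, rather than taking the sigmoid of the summed logit). A minimal NumPy sketch, using a hypothetical 4x2 stand-in for `data`, shows how the shapes diverge:

```python
import numpy as np

# Hypothetical 4x2 stand-in for `data`, just to compare the two pipelines.
data = np.arange(8.0).reshape(4, 2)
b = 1.0

W_np = np.ones((2, 1))        # NumPy version: column vector of weights
pred_np = data @ W_np + b     # matrix product -> one logit per sample, shape (4, 1)

W_tf = np.ones((1, 2))        # TF version: W * X broadcasts elementwise
pred_tf = W_tf * data + b     # NOT a dot product -> shape (4, 2)

print(pred_np.shape, pred_tf.shape)  # (4, 1) (4, 2)
```

Summing `sigmoid(pred_tf)` over axis 1, as the graph does, is not the same quantity as `sigmoid(pred_np)`, so the two pipelines feed different values into the cost.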

Sample costs from the two cost functions:

('cost in numpy', array([ 2.37780175])) ('cost in tensorflow', 0.073667422)
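The gap is also consistent with the two snippets computing different formulas: the NumPy version plugs `s = log(sigma)` into `(1 - s)` where the standard cross-entropy uses `log(1 - sigma)`, and the TensorFlow version applies no `log` at all. A small NumPy check on made-up scores (hypothetical values, not the blobs above) shows the three expressions disagree:

```python
import numpy as np

# Hypothetical toy logits and labels, just to compare the cost expressions.
rng = np.random.default_rng(0)
pred = rng.normal(size=5)            # raw scores, standing in for W.x + b
y = np.array([0., 1., 1., 0., 1.])
sig = 1.0 / (1.0 + np.exp(-pred))    # sigmoid

# Textbook binary cross-entropy: log(sigma) and log(1 - sigma).
J_textbook = -(y @ np.log(sig) + (1 - y) @ np.log(1 - sig)) / len(y)

# What the NumPy snippet computes: s = log(sigma), then (1 - s).
s = np.log(sig)
J_numpy_snippet = -(y @ s + (1 - y) @ (1 - s)) / len(y)

# What the TensorFlow graph computes: the sigmoids themselves, no log.
J_tf_snippet = -(y @ sig + (1 - y) @ (1 - sig)) / len(y)

print(J_textbook, J_numpy_snippet, J_tf_snippet)  # three different numbers
```

Since `log(sigma) < 0`, the term `1 - log(sigma)` is always greater than 1 while `log(1 - sigma)` is always negative, so the two "costs" cannot agree except by coincidence.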

1 Answer:

Answer 0: (score: 0)

You are initializing the weights to random values in this line:

session.run(tf.initialize_all_variables())

After that line, you can set the values with something like:

session.run(tf.assign(b, tf.ones((1, 2))))