How do I feed two arrays to session.run() in TensorFlow?

Asked: 2016-02-19 01:42:41

Tags: python machine-learning tensorflow

I'm trying to get into TensorFlow by making some changes to the beginner examples.

I'm trying to combine Implementing a Neural Network from Scratch with Deep MNIST for Experts.

I get the data with X, y = sklearn.datasets.make_moons(50, noise=0.20). Basically, this line gives a 2-D X (the point coordinates) and a 1-D y of 0/1 class labels.

x = tf.placeholder(tf.float32, shape=[50,2])
y_ = tf.placeholder(tf.float32, shape=[50,2])
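
A quick note on shapes: make_moons returns X with shape (50, 2) but integer labels y with shape (50,), while the y_ placeholder above expects shape [50, 2], so the labels need to be one-hot encoded before feeding. A minimal NumPy sketch (the random labels here just stand in for the make_moons output):

```python
import numpy as np

# Stand-in for the 0/1 labels returned by make_moons.
y = np.random.randint(0, 2, size=50)

# One-hot encode: class 0 -> [1, 0], class 1 -> [0, 1].
y_onehot = np.eye(2, dtype=np.float32)[y]

print(y_onehot.shape)  # (50, 2)
```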

The network structure is the same as in Deep MNIST for Experts. What differs is the session.run call:

sess.run(train_step, feed_dict={x:X, y_:y})

But this raises

_ValueError: setting an array element with a sequence._

Can anyone give me a hint about this problem? Here is the code:

import numpy as np
import matplotlib
import tensorflow as tf
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import sklearn.linear_model
sess = tf.InteractiveSession()
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
np.random.seed(0)
X, y = sklearn.datasets.make_moons(50, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, y)
batch_xs = np.vstack([np.expand_dims(k,0) for k in X])
x = tf.placeholder(tf.float32, shape=[50,2])
y_ = tf.placeholder(tf.float32, shape=[50,2])
W = tf.Variable(tf.zeros([2,2]))
b = tf.Variable(tf.zeros([2]))
a = np.arange(100).reshape((50, 2))
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
sess.run(tf.initialize_all_variables())
for i in range(20000):
    sess.run(train_step, feed_dict={x:X, y_:y})

After struggling with TensorFlow, here is the working code:

# Package imports
import numpy as np
import matplotlib
import tensorflow as tf
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import sklearn.linear_model

rng = np.random

input_dim = 2
output_dim = 2
hidden_dim = 3

np.random.seed(0)
Train_X, Train_Y = sklearn.datasets.make_moons(200, noise=0.20)
Train_X = np.reshape(Train_X, (-1,2))
Train_YY = []  
for i in Train_Y:       #making Train_Y a 2-D list
    if i == 1:
        Train_YY.append([1,0])
    else:
        Train_YY.append([0,1])
print Train_YY
X = tf.placeholder("float",shape=[None,input_dim])
Y = tf.placeholder("float")
W1 = tf.Variable(tf.random_normal([input_dim, hidden_dim], stddev=0.35),
                      name="weights")
b1 = tf.Variable(tf.zeros([1,hidden_dim]), name="bias1")
a1 = tf.tanh(tf.add(tf.matmul(X,W1),b1))
W2 = tf.Variable(tf.random_normal([hidden_dim,output_dim]), name="weight2")
b2 = tf.Variable(tf.zeros([1,output_dim]), name="bias2")
a2 = tf.add(tf.matmul(a1, W2), b2)
output=tf.nn.softmax(a2)
correct_prediction = tf.equal(tf.argmax(output,1), tf.argmax(Y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
cross_entropy = -tf.reduce_sum(Y*tf.log(output))
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for i in range(20000):
        # for (a,d) in zip(Train_X, Train_Y):
        training_cost = sess.run(optimizer, feed_dict={X:Train_X, Y:Train_YY})
        if i%1000 == 0:
            # print "Training cost=", training_cost, "W1=", W1.eval(), "b1=", b1.eval(),"W2=", W2.eval(), "b2=", b2.eval()
            # print output.eval({X:Train_X, Y:Train_YY})
            # print cross_entropy.eval({X:Train_X, Y:Train_YY})
            print "Accuracy = ", accuracy.eval({X:Train_X, Y:Train_YY}) 
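
As an aside (not part of the original post), the label-conversion loop above can be vectorized with NumPy. Note that the loop maps class 1 to [1, 0] and class 0 to [0, 1], i.e. the reverse of the usual convention, so the identity matrix is indexed with 1 - label:

```python
import numpy as np

Train_Y = np.array([0, 1, 1, 0])  # hypothetical labels from make_moons

# class 1 -> [1, 0], class 0 -> [0, 1], matching the loop above
Train_YY = np.eye(2, dtype=np.float32)[1 - Train_Y]

print(Train_YY.tolist())  # [[0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
```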

1 Answer

Answer 0 (score: 2)

The problem arises because you redefine y on the following line:

y = tf.nn.softmax(tf.matmul(x,W) + b)

TensorFlow then raises an error, because feeding y_: y would feed one tensor with another tensor, which is not possible (and, even if it were, this particular feed would create a circular dependency!).
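
The error message itself comes from NumPy: session.run tries to build a float array from whatever is fed, and a Tensor (or any other non-numeric object) cannot be squeezed into a float slot. A hypothetical minimal reproduction of the same ValueError in plain NumPy:

```python
import numpy as np

a = np.zeros(3, dtype=np.float32)
err = None
try:
    a[0] = [1.0, 2.0]  # a sequence cannot fill a single float element
except ValueError as e:
    err = e

print(err)  # setting an array element with a sequence...
```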

The solution is to rename the softmax and cross-entropy operations so that y is no longer clobbered:

y_softmax = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y_softmax))