This is my first attempt at TensorFlow - I'm trying to learn a linear regressor that takes a 10-dimensional input vector X and produces a scalar output Y. Specifically, I'm trying the gradient-descent solution as opposed to the closed-form one.
I got the following error and I'm not sure what I'm doing wrong. Any pointers in the right direction would be greatly appreciated!
PS C:\Users\Dave\Documents\School\Deep Learning\Assignment_1> python test1.py
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cudnn64_5.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library nvcuda.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library curand64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.86
pciBusID 0000:01:00.0
Total memory: 8.00GiB
Free memory: 6.63GiB
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:906] DMA: 0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0: Y
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
Traceback (most recent call last):
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 10
    return fn(*args)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 10
    status, run_metadata)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", n_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [10000,10] vs. [10000]
    [[Node: sub = Sub[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Add, _recv_Placeholder_1_0/_7)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test1.py", line 43, in <module>
    c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 76
    run_metadata_ptr)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 96
    feed_dict_string, options, run_metadata)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 10
    target_list, options, run_metadata)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 10
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [10000,10] vs. [10000]
    [[Node: sub = Sub[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Add, _recv_Placeholder_1_0/_7)]]

Caused by op 'sub', defined at:
  File "test1.py", line 25, in <module>
    cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\math_ops.py", line 814,
    return func(x, y, name=name)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line
    result = _op_def_lib.apply_op("Sub", x=x, y=y, name=name)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py
    op_def=op_def)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 224
    original_op=self._default_original_op, op_def=op_def)
  File "C:\Users\Dave\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 112
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Incompatible shapes: [10000,10] vs. [10000]
    [[Node: sub = Sub[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Add, _recv_Placeholder_1_0/_7)]]
Here is my code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

rng = np.random

#from IPython import get_ipython
#get_ipython().run_line_magic('matplotlib', 'inline')

learning_rate = 0.01
training_epochs = 1000
display_step = 50

train_X = np.loadtxt('data.txt', usecols=[0,1,2,3,4,5,6,7,8,9])
train_Y = np.loadtxt('data.txt', usecols=[10])
n_samples = train_X.shape[0]

X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
W = tf.Variable(rng.randn(), name = "weight")
b = tf.Variable(rng.randn(), name = "bias")

#build the model
pred = tf.add(tf.mul(X, W), b)

#mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)

#gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

#initialize the variables
init = tf.global_variables_initializer()

#launch the graph
with tf.Session() as sess:
    sess.run(init)

    #fit training data
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict = {X: x, Y: y})

        #display logs
        if (epoch+1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            # print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f".format(c), \
            #     "W=", sess.run(W), "b=" sess.run(b)

    #print "Optimization done"
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    #print "Training cost=", training_cost, "W=", sess.run(W), "b=" sess.run(b), '\n'

    #display graphically
    plt.plot(train_X, train_Y, 'ro', label = 'Orig data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label = 'Fitted Line')
    plt.legend()
    plt.show()
Answer 0 (score: 1)
I don't know TensorFlow and I'm not sure what exactly is going on in your code, so I'm making an educated guess based on numpy's behaviour. I was going to add this as a comment, but it got too long.
When you load the training data, train_X has shape (10000,10), because it has 10 columns (it's a 2d array), while train_Y has shape (10000,), because it's a single column (it's a 1d array). These two shapes can't be broadcast together, so Y and pred in pred-Y have incompatible shapes. You either need to transpose train_X, or convert train_Y into an array of shape (10000,1), to make them compatible. The former can be done by passing unpack=True to np.loadtxt; the latter would look like train_Y = train_Y[:,None] (at least I suspect that unpack=True won't help in this case, but it's worth a try anyway).
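To make the two options concrete, here's a small numpy-only sketch (the file name data.txt and the column layout are taken from your code, the 10000 rows from the error message; I haven't tested this against the TensorFlow part):

import numpy as np

# Option 1: load train_X transposed; with unpack=True, np.loadtxt returns
# the transposed array, so train_X has shape (10, 10000) instead of (10000, 10).
train_X = np.loadtxt('data.txt', usecols=[0,1,2,3,4,5,6,7,8,9], unpack=True)
train_Y = np.loadtxt('data.txt', usecols=[10])                     # shape (10000,)

# Option 2: keep train_X as (10000, 10) and give train_Y a trailing axis,
# so that (10000, 10) and (10000, 1) broadcast against each other.
train_X = np.loadtxt('data.txt', usecols=[0,1,2,3,4,5,6,7,8,9])    # shape (10000, 10)
train_Y = np.loadtxt('data.txt', usecols=[10])[:, None]            # shape (10000, 1)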
But if you transpose the array, you need to make sure your training loop still works. At the moment your (10000,10)-shaped array is equivalent to a list of 10000 lists of length 10, and your (10000,)-shaped array to a single list of length 10000; these zip together nicely. If you transpose, say, train_X to make the broadcasting work, then you need to modify this loop:
for (x, y) in zip(train_X.T, train_Y):
    sess.run(optimizer, feed_dict = {X: x, Y: y})
In hindsight: what I mean is that you then need to transpose train_X back, to make sure that the first dimensions of train_X and train_Y match up for zipping.
The transposition may also affect later steps, such as the plotting. If you get any strange output or errors there, you'll need to transpose back again. Or, even better: only transpose for the TensorFlow-specific operations (but I'm not familiar with that part, so I don't know whether and how this can be done in an idiomatic way).
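As a rough, untested sketch of that last idea, reusing the names from your script: leave train_X and train_Y exactly as loaded, so the zip loop and the plotting keep working, and reshape only where the full arrays are fed into the graph, for example at the call that raised the error:

# untested guess: reshape train_Y only in the feed_dict, so pred ([10000, 10])
# and Y ([10000, 1]) can broadcast inside the graph
c = sess.run(cost, feed_dict={X: train_X, Y: train_Y[:, None]})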