I am trying to do multiple linear regression, but I am running into a problem. Namely, I get the following error:
ValueError: Cannot feed value of shape (0, 31399, 50) for Tensor 'Placeholder_22:0', which has shape '(1, 50)'
I tried X = tf.compat.v1.placeholder(tf.float32, shape=[None, 50]), but that also produces an error.
import tensorflow as tf
import numpy as np
import pandas as pd
import csv
from math import sin
a = []
A = 0.01
i = A
cnt = i
while i<=3.14:
    q = []
    q.append(cnt)
    for j in range(2,101,2):
        #ins = round(i**j,j)
        q.append(i**j)
    q.append(sin(i))
    a.append(q)
    print(q)
    #print('##',a)
    i += A
    cnt += A
with open('sinL.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for i in a:
        writer.writerow(i)
init = tf.compat.v1.global_variables_initializer()
data = pd.read_csv('sinL.csv', sep=',')
xy = np.array(data, dtype=np.float32)
x_data = xy[:, 1:-1]
y_data = xy[:, [-1]]
X = tf.compat.v1.placeholder(tf.float32, shape=[None, 50])
Y = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.random.normal([50,1], mean=0.01, stddev=0.01), name="weight")
b = tf.Variable(tf.random.normal([1]), name="bias")
hypothesis = tf.matmul(X, W) + b
cost = tf.reduce_mean(tf.square(hypothesis - Y))
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=1e-10)
train = optimizer.minimize(cost)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(11):
        cost_, hypo_, _ = sess.run([cost, hypothesis, train], feed_dict={X: x_data, Y: y_data})
        if step%10==0:
            print(step, cost_, hypo_)
Printed result:
0 370825000000.0 [[-1.66055486e-01]
[-1.66046411e-01]
[-1.66033715e-01]
[-1.66017383e-01]
...
[ 2.69337825e+06]
[ 3.70506525e+06]
[ 5.08823150e+06]]
10 nan [[nan]
[nan]
[nan]
[nan]
...
[nan]
[nan]
[nan]]
Answer (score: 0):
I would generally recommend not writing TensorFlow 1.x-style code at all and instead migrating to TensorFlow 2.x with the built-in Keras API, since it tends to be easier to work with.
That said, the error you are seeing occurs because you defined the input X with shape (1, 50) (or (None, 50)), while the value you are actually feeding is 3-D, and its first dimension has size 0, so the array contains no data at all. I think something is going wrong when the data is loaded into x_data and y_data. So verify that x_data and y_data contain valid data and that _x and _y are 2-D arrays, and then it should work.
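As a rough illustration of both suggestions, here is a minimal sketch assuming the sinL.csv file and column layout from the question: it first checks that the loaded arrays are non-empty 2-D matrices, then fits the same linear model with the tf.keras API instead of placeholders. The header=None argument is my own assumption (csv.writer in the question writes no header row), and the learning rate is simply copied from the question.
import numpy as np
import pandas as pd
import tensorflow as tf

# Load the CSV produced by the question's script; csv.writer wrote no header row,
# so header=None keeps the first data row from being treated as column names.
data = pd.read_csv('sinL.csv', sep=',', header=None)
xy = np.array(data, dtype=np.float32)
x_data = xy[:, 1:-1]   # the 50 power-of-i feature columns
y_data = xy[:, [-1]]   # the sin(i) target column

# 1) Sanity checks: both arrays should be 2-D and non-empty before training.
print(x_data.shape, y_data.shape)   # expected: (n_rows, 50) and (n_rows, 1)
assert x_data.ndim == 2 and x_data.shape[0] > 0, "x_data is empty or not 2-D"
assert y_data.ndim == 2 and y_data.shape[0] > 0, "y_data is empty or not 2-D"

# 2) TF 2.x / Keras equivalent of the placeholder graph:
#    a single Dense(1) layer computes exactly X @ W + b.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(x_data.shape[1],)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-10), loss='mse')
model.fit(x_data, y_data, epochs=10, verbose=1)
Note that the i**j feature columns grow extremely large (the highest powers overflow float32), so even with a correct input shape you will likely still need to rescale or regenerate the features before the loss stops diverging to NaN.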