My network learns on the first batch and everything looks fine, then on the second batch it suddenly stops with a TypeError! So why does the first batch work, and why does it break right after it? Annoying error... Here are the details:
I have built a CNN that tries to predict 124 features for each image. The images are 61×72 pixels, and the output vector of numbers is 124×1. The images are float matrices with values between -1 and 1. The values I am trying to predict are in a CSV file, where each row describes one image. When I load the data for the training process, I process and reshape each row, and also get the pictures the network is learning from. When I run the program, however, I get the following error on the second batch:
" TypeError:Fetch参数2.7674865e + 09具有无效类型,必须是字符串或Tensor。 (无法将float32转换为Tensor或Operation。)"
Can you help me pinpoint the problem? Here is my code:
import tensorflow as tf
import numpy as np

data_in = np.loadtxt(open("images.csv"), delimiter=',', dtype=np.float32)
data_out = np.loadtxt(open("outputmix-124.csv"),
                      delimiter=',', dtype=np.float32)

x_train = data_in[0:6000, :]
x_test = data_in[6000:10000, :]
y_train = data_out[0:6000, :]
y_test = data_out[6000:10000, :]

batch = 600
epochs = 10
n = x_test.shape[1]   # 4392
m = x_train.shape[0]  # 6000
d = y_test.shape[1]   # 124
l = y_test.shape[0]   # 4000

trainX = tf.placeholder(tf.float32, [batch, n], name="X")
trainY = tf.placeholder(tf.float32, [batch, d])

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def maxpool2d(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                          padding='SAME')

def convolutional_neural_network(x):
    weights = {'W_c1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
               'W_c2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
               'W_fc': tf.Variable(tf.random_normal([18 * 16 * 64, 1024])),
               'out': tf.Variable(tf.random_normal([1024, d]))}
    biases = {'b_c1': tf.Variable(tf.random_normal([32])),
              'b_c2': tf.Variable(tf.random_normal([64])),
              'b_fc': tf.Variable(tf.random_normal([1024])),
              'out': tf.Variable(tf.random_normal([d]))}
    x = tf.reshape(x, shape=[-1, 61, 72, 1])
    conv1 = tf.nn.relu(conv2d(x, weights['W_c1']) + biases['b_c1'])
    conv1 = maxpool2d(conv1)
    conv2 = tf.nn.relu(conv2d(conv1, weights['W_c2']) + biases['b_c2'])
    conv2 = maxpool2d(conv2)
    fc = tf.reshape(conv2, [-1, 18 * 16 * 64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, keep_rate)
    output = tf.matmul(fc, weights['out']) + biases['out']
    return output

def train_neural_network(x):
    prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.pow(prediction - trainY, 2))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(epochs):
            epoch_loss = 0
            for i in np.linspace(0, m - batch, m / batch, dtype=np.int32):
                x = x_train[i:i + batch, :]
                y = y_train[i:i + batch, :]
                sess.run(optimizer, feed_dict={trainX: x, trainY: y})
                cost = sess.run(cost, feed_dict={trainX: x, trainY: y})
                print("Epoch=", '%04d' % (epoch + 1), "loss=",
                      " {:.9f}".format(cost))
                epoch_loss += cost
            print('Epoch', epoch, 'completed out of', epochs, 'loss:',
                  epoch_loss)

train_neural_network(trainX)
Answer 0 (score: 0)
This is a fairly typical mistake, and the problem is with the variable cost. First, in the second line of the function train_neural_network() you assign it the tensor that computes the loss:
cost = tf.reduce_mean(tf.pow(prediction - trainY, 2))
Then, when you run the training step and the loss computation, you do this, and this is where things go wrong:
cost = sess.run(cost, feed_dict={trainX: x, trainY: y})
Because you assign the value of the loss to cost, it is now a plain float instead of a Tensor. On the next iteration, sess.run() gets that float instead of a tensor as its fetch argument and raises the error above.
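The failure mode is easy to reproduce in isolation. Here is a minimal toy sketch of my own (not from your code) using the same TF 1.x graph API; the first sess.run() succeeds, and the second raises the same TypeError because the name loss has been rebound to a float:

import tensorflow as tf

x = tf.placeholder(tf.float32, [])
loss = tf.square(x)  # loss is a Tensor here

with tf.Session() as sess:
    loss = sess.run(loss, feed_dict={x: 3.0})  # rebinds loss to a numpy float32
    loss = sess.run(loss, feed_dict={x: 3.0})  # TypeError: Fetch argument 9.0 ...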
Use something like cost_val to store the value of the loss, and keep cost for the tensor. You of course also need to update the line that prints the value, so I changed these three lines:
cost_val = sess.run(cost, feed_dict={trainX: x, trainY: y})
print("Epoch=", '%04d' % (epoch + 1), "loss=", " {:.9f}".format(cost_val))
epoch_loss += cost_val
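As a side note, the loop runs the graph twice per batch, once for the training op and once for the loss. You can fetch both in a single call, which is the usual idiom; this is an optional tidy-up on top of the fix, not part of it:

_, cost_val = sess.run([optimizer, cost], feed_dict={trainX: x, trainY: y})

sess.run() with a list of fetches returns a list of results; the training op itself yields None, hence the underscore.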
I am posting the full revised version here (tested code; note that I generate random test data instead of loading files, so this is a self-contained, runnable example, but you will need to change it back to load your actual data; I also defined keep_rate = 0.8, which was missing from the posted code):
import tensorflow as tf
import numpy as np

keep_rate = 0.8  # dropout keep probability; missing from the posted code

# data_in = np.loadtxt(open("images.csv"), delimiter=',', dtype=np.float32)
# data_out = np.loadtxt(open("outputmix-124.csv"),
#                       delimiter=',', dtype=np.float32)
data_in = np.random.normal(size=(10000, 4392))
data_out = np.random.normal(size=(10000, 124))

x_train = data_in[0:6000, :]
x_test = data_in[6000:10000, :]
y_train = data_out[0:6000, :]
y_test = data_out[6000:10000, :]

batch = 600
epochs = 10
n = x_test.shape[1]   # 4392
m = x_train.shape[0]  # 6000
d = y_test.shape[1]   # 124
l = y_test.shape[0]   # 4000

trainX = tf.placeholder(tf.float32, [batch, n], name="X")
trainY = tf.placeholder(tf.float32, [batch, d])

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def maxpool2d(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                          padding='SAME')

def convolutional_neural_network(x):
    weights = {'W_c1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
               'W_c2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
               'W_fc': tf.Variable(tf.random_normal([18 * 16 * 64, 1024])),
               'out': tf.Variable(tf.random_normal([1024, d]))}
    biases = {'b_c1': tf.Variable(tf.random_normal([32])),
              'b_c2': tf.Variable(tf.random_normal([64])),
              'b_fc': tf.Variable(tf.random_normal([1024])),
              'out': tf.Variable(tf.random_normal([d]))}
    x = tf.reshape(x, shape=[-1, 61, 72, 1])
    conv1 = tf.nn.relu(conv2d(x, weights['W_c1']) + biases['b_c1'])
    conv1 = maxpool2d(conv1)
    conv2 = tf.nn.relu(conv2d(conv1, weights['W_c2']) + biases['b_c2'])
    conv2 = maxpool2d(conv2)
    fc = tf.reshape(conv2, [-1, 18 * 16 * 64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, keep_rate)
    output = tf.matmul(fc, weights['out']) + biases['out']
    return output

def train_neural_network(x):
    prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.pow(prediction - trainY, 2))  # cost stays a Tensor
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(epochs):
            epoch_loss = 0
            # integer division so np.linspace gets an integer sample count
            for i in np.linspace(0, m - batch, m // batch, dtype=np.int32):
                x = x_train[i:i + batch, :]
                y = y_train[i:i + batch, :]
                sess.run(optimizer, feed_dict={trainX: x, trainY: y})
                # fetch the loss value into a separate name; do not overwrite cost
                cost_val = sess.run(cost, feed_dict={trainX: x, trainY: y})
                print("Epoch=", '%04d' % (epoch + 1), "loss=",
                      " {:.9f}".format(cost_val))
                epoch_loss += cost_val
            print('Epoch', epoch, 'completed out of', epochs, 'loss:',
                  epoch_loss)

train_neural_network(trainX)
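One last caveat in case you run this on a newer installation: the code uses the TF 1.x graph API (tf.placeholder, tf.Session), which is not available in the default TensorFlow 2 namespace. As far as I know it should still run through the compatibility layer, along these lines:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()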