I am new to TensorFlow and neural networks. I played around with MNIST a bit, but now I want to build my own network using my own images. I created black images with white dots, and I want to train the network to count the dots.
My problem is getting my image data into TensorFlow. I googled a lot, found some information, and worked it into my own code, but it does not do what I expected. So do you have a hint on how to get my image data into TensorFlow?
Traceback (most recent call last):
  File "C:...", line 490, in apply_op
    preferred_dtype=default_dtype)
  File "C:...", line 741, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:...", line 614, in _TensorTensorConversionFunction
    % (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype
float32: 'Tensor("Variable_2/read:0", shape=(5000, 100), dtype=float32)'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "...", line 83, in <module>
    trainnetwork(x)
  File "...", line 74, in trainnetwork
    prediction = neuralnetworkmodel(x)
  File "...", line 69, in neuralnetworkmodel
    output = tf.matmul(11, output_layer['weights']) + output_layer['biases']
  File "...", line 1816, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "C:...", line 1217, in _mat_mul
    transpose_b=transpose_b, name=name)
  File "C:...", line 526, in apply_op
    inferred_from[input_arg.type_attr]))
TypeError: Input 'b' of 'MatMul' Op has type float32 that does not match type int32 of argument 'a'.
import numpy as np
import glob
import scipy.ndimage
import tensorflow as tf
n_nodes_hl = 5000
n_classes = 101
x=tf.placeholder(tf.float32, [None, 22400])
y=tf.placeholder(tf.int32,[None, n_classes])
label_dataset = []
img_dataset = []
for image_file_name in glob.glob("C:\\Users\\Thorsten\\Desktop\\InteSystem\\Seg\\Baum???_sw_*.png"):
print("loading ... ", image_file_name)
# filename for the correct label
label = int(image_file_name[-6:-4])
# load image data from png files into an array
img_array = scipy.ndimage.imread(image_file_name, flatten=True)
img_dataset.append(img_array)
    # One Hot Encoding (see the NumPy sketch below)
    label += 1
    i = 0
    label_data = []
    while i < 101:
        if i == label:
            label_data.extend([1])
        else:
            label_data.extend([0])
        i += 1
    label_dataset.append(label_data)
label_dataset = np.asarray((label_dataset), dtype=float)
img_dataset = np.asarray((img_dataset), dtype=float)
# feed_x = {x: img_dataset}
# feed_y = {y: label_dataset}
# print(feed_x)
# print(feed_y)
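As an aside, the one-hot loop above can be written as a single NumPy indexing expression. A minimal sketch, assuming integer labels in the range 0..100 (the names labels and one_hot are illustrative, not from the code above):
import numpy as np
labels = np.array([3, 17, 0])   # example integer labels
one_hot = np.eye(101)[labels]   # shape (3, 101): row k has a 1.0 at column labels[k]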
def neuralnetworkmodel(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([22400, n_nodes_hl])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl]))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)
    output = tf.matmul(l1, output_layer['weights']) + output_layer['biases']
    return output
def trainnetwork(x):
    prediction = neuralnetworkmodel(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.GradientDescentOptimizer(.5).minimize(cost)
    runs = 10
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run([optimizer, cost], feed_dict={x: img_dataset, y: label_dataset})
trainnetwork(x)
Answer (score: 1)
It looks like the problem is in the line
output = tf.matmul (11,output_layer['weights']) + output_layer['biases']
Replace it with
output = tf.matmul (l1,output_layer['weights']) + output_layer['biases']
and the particular error you're seeing should go away.
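For context: tf.matmul(11, ...) makes TensorFlow convert the Python literal 11 into an int32 constant tensor, and it then tries, and fails, to convert the float32 weights to int32 as well, which produces exactly the ValueError and TypeError chain in the traceback. A minimal sketch of the broken and fixed calls, assuming TF 1.x as in the question (the names here mirror the question's code but are illustrative):
import tensorflow as tf
w = tf.Variable(tf.random_normal([5000, 100]))  # float32 weights
# tf.matmul(11, w)  # raises TypeError: 'b' is float32, does not match int32 'a'
l1 = tf.placeholder(tf.float32, [None, 5000])   # float32 activations
output = tf.matmul(l1, w)                       # dtypes agree, so the op builds
More generally, dtype-mismatch errors from MatMul in TF 1.x are resolved either by passing the tensor you actually meant (as here) or by an explicit tf.cast.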