I have two disease classes, A and B. My training data contains 28 images covering both classes. I have created a resize function using OpenCV:
def resize_cv(x, width, height):
    new_image = cv.resize(x, (width, height))
    return new_image
X is a list containing the 28 images.
xx = []
for i in X:
    xx.append(resize_cv(i, 196, 196))  # resizing happens here
print("__Resized the images__")
def scaling(X):
    new = []
    for i in X:
        for j in i:
            new.append(j / 255)
            break
    return new
def label_encode(y):
    from sklearn.preprocessing import LabelBinarizer
    ff = LabelBinarizer()
    return ff.fit_transform(y)
X=scaling(xx)
y=label_encode(y)
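For reference, a quick sketch (with dummy labels standing in for my real ones) of what LabelBinarizer returns for two classes:

```python
from sklearn.preprocessing import LabelBinarizer

# Dummy labels standing in for the real disease classes A and B.
y_demo = ["A", "B", "A", "B"]
encoded = LabelBinarizer().fit_transform(y_demo)
print(encoded.shape)  # (4, 1) -- two classes collapse to a single 0/1 column
```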
Now I split the data into training and test sets and set up the batch step size.
X_train, X_test, y_train, y_test = split_data(X, y, 0.2)

# creating smaller batches
step_size = 7
steps = len(X_train)
remaining = steps % step_size
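This slicing walks the training set in chunks of step_size and drops the remainder; a quick sketch (the sizes 22 and 7 are assumed from my data above) shows the batch boundaries it produces:

```python
# Sketch of the batch indices; 22 training samples and step 7 are assumed.
step_size = 7
steps = 22
remaining = steps % step_size  # 22 % 7 == 1, so one sample is dropped
batches = [(j, j + step_size) for j in range(0, steps - remaining, step_size)]
print(batches)  # [(0, 7), (7, 14), (14, 21)]
```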
Now, moving on to the neural network, this is the architecture I have built:
layer_conv1 = create_convolutional_layer(input=x,
                                         num_input_channels=num_channels,
                                         conv_filter_size=filter_size_conv1,
                                         num_filters=num_filters_conv1,
                                         name="conv1")
layer_conv1_1 = create_convolutional_layer(input=layer_conv1,
                                           num_input_channels=num_filters_conv1,
                                           conv_filter_size=filter_size_conv1,
                                           num_filters=num_filters_conv1,
                                           name="conv2")
layer_conv1_1_1 = create_convolutional_layer(input=layer_conv1_1,
                                             num_input_channels=num_filters_conv1,
                                             conv_filter_size=filter_size_conv1,
                                             num_filters=num_filters_conv1,
                                             name="conv3")
max_pool_1 = maxpool2d(layer_conv1_1_1, 2, name="maxpool_1")
drop_out_1 = dropout(max_pool_1, name="dropout_1")
flatten_layer = create_flatten_layer(drop_out_3)
layer_fc2 = create_fc_layer(input=flatten_layer,
                            num_inputs=fc_layer_size,
                            num_outputs=num_classes,
                            use_relu=True)
y_pred = tf.nn.softmax(layer_fc2, name="y_pred")
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=y_pred))

# Defining objective
train = tf.train.AdamOptimizer(learning_rate=0.00001).minimize(cost)
print ("_____Neural Network Architecture Created Succefully_____")
epochs=10
matches = tf.equal(tf.argmax(y_pred,axis=1),tf.argmax(y,axis=1))
acc = tf.reduce_mean(tf.cast(matches,tf.float32))
#Initializing weights
init = tf.global_variables_initializer()
with tf.Session() as sess:
    # writing output to the logs for tensorboard
    writer = tf.summary.FileWriter("./logs", sess.graph)
    sess.run(init)
    for i in range(epochs):
        # creating smaller batches
        for j in range(0, steps - remaining, step_size):
            sess.run([acc, train, cost],
                     feed_dict={x: X_train[j:j + step_size], y: y_train[j:j + step_size]})
Error traceback:
Traceback (most recent call last):
  File "/home/centura/gitlab/moles_model/moles_model/modelversion1.py", line 313, in <module>
    sess.run([acc,train,cost],feed_dict={x:X_train,y:y_train})
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1111, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (22, 196, 3) for Tensor 'x:0', which has shape '(?, 196, 196, 3)'
I have checked the dimensions of the images in the X list: every image has shape (196, 196, 3). But when I check the images in X_train, each one has shape (196, 3). I cannot figure out where the missing 196 went.
I am using tensorflow-gpu==1.9.0, Python 3.6, and the PyCharm IDE.
Answer (score: 0):
The answer turned out to be simple: the change away from shape (196, 196, 3) is caused by the extra for loop in the scaling function.
Instead of this code:
def scaling(X):
    new = []
    for i in X:
        for j in i:
            new.append(j / 255)
            break
    return new
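A quick check with dummy arrays (shapes assumed to match my data) shows what this version actually produces: the inner loop iterates over the rows of each image, so only the first (196, 3) row of every image is kept.

```python
import numpy as np

def scaling(X):
    new = []
    for i in X:
        for j in i:              # j iterates over the ROWS of each image
            new.append(j / 255)  # each appended item is one (196, 3) row
            break                # only the first row is ever kept
    return new

# Three dummy "images" with the same shape as mine.
X = [np.zeros((196, 196, 3)) for _ in range(3)]
out = scaling(X)
print(len(out), out[0].shape)  # 3 (196, 3) -- one row per image, a dimension lost
```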
I should have avoided the second loop; the function should look like this:
def scaling(X):
    new = []
    for i in X:
        new.append(i / 255)
    return new
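With the corrected function, the full image shape survives scaling, as a quick sketch with dummy data confirms:

```python
import numpy as np

def scaling(X):
    new = []
    for i in X:
        new.append(i / 255)  # scale the whole image array at once
    return new

# Two dummy 196x196 RGB images standing in for the real ones.
X = [np.full((196, 196, 3), 255, dtype=np.uint8) for _ in range(2)]
scaled = scaling(X)
print(len(scaled), scaled[0].shape)  # 2 (196, 196, 3) -- shape preserved
```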