I am new to CNNs and cannot figure out how to solve this problem. In this code I am training on a set of images to obtain masks from a convolutional network. The images are grayscale, with shape (200, 200). I cannot work out where I went wrong; every time I run the code, the error appears on a different input. Any help would be appreciated.
Here is the generated log:
Creating training images...
Saving to .npy files done.
Creating test images...
Saving to .npy files done.
------------------------------
Loading and preprocessing train data...
------------------------------
------------------------------
Creating and compiling model...
------------------------------
C:/Users/Asus/Desktop/training.py:101: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(25, (3, 3), activation="relu", padding="same", data_format="channels_last")`
conv2 = Conv2D(25, (3, 3), activation='relu', padding='same',dim_ordering="th")(inputs)
C:/Users/Asus/Desktop/training.py:102: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(25, (3, 3), activation="relu", padding="same", data_format="channels_first")`
conv2 = Conv2D(25, (3, 3), activation='relu', padding='same',dim_ordering="th")(conv2)
C:/Users/Asus/Desktop/training.py:103: UserWarning: Update your `MaxPooling2D` call to the Keras 2 API: `MaxPooling2D(pool_size=(2, 2), data_format="channels_last")`
pool2 = MaxPooling2D(pool_size=(2, 2), dim_ordering="tf")(conv2)
C:/Users/Asus/Desktop/training.py:105: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(50, (3, 3), activation="relu", padding="same", data_format="channels_first")`
conv3 = Conv2D(50, (3, 3), activation='relu', padding='same',dim_ordering="th")(pool2)
C:/Users/Asus/Desktop/training.py:106: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(50, (3, 3), activation="relu", padding="same", data_format="channels_first")`
conv3 = Conv2D(50, (3, 3), activation='relu', padding='same',dim_ordering="th")(conv3)
C:/Users/Asus/Desktop/training.py:107: UserWarning: Update your `MaxPooling2D` call to the Keras 2 API: `MaxPooling2D(pool_size=(2, 2), data_format="channels_last")`
pool3 = MaxPooling2D(pool_size=(2, 2),dim_ordering="tf")(conv3)
C:/Users/Asus/Desktop/training.py:109: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(100, (3, 3), activation="relu", padding="same", data_format="channels_first")`
conv4 = Conv2D(100, (3, 3), activation='relu', padding='same',dim_ordering="th")(pool3)
C:/Users/Asus/Desktop/training.py:110: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(100, (3, 3), activation="relu", padding="same", data_format="channels_first")`
conv4 = Conv2D(100, (3, 3), activation='relu', padding='same',dim_ordering="th")(conv4)
C:/Users/Asus/Desktop/training.py:111: UserWarning: Update your `MaxPooling2D` call to the Keras 2 API: `MaxPooling2D(pool_size=(2, 2), data_format="channels_last")`
pool4 = MaxPooling2D(pool_size=(2, 2), dim_ordering="tf")(conv4)
C:/Users/Asus/Desktop/training.py:113: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(200, (3, 3), activation="relu", padding="same", data_format="channels_first")`
conv5 = Conv2D(200, (3, 3), activation='relu', padding='same',dim_ordering="th")(pool4)
C:/Users/Asus/Desktop/training.py:114: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(200, (3, 3), activation="relu", padding="same", data_format="channels_first")`
conv5 = Conv2D(200, (3, 3), activation='relu', padding='same',dim_ordering="th")(conv5)
C:/Users/Asus/Desktop/training.py:116: UserWarning: Update your `Conv2DTranspose` call to the Keras 2 API: `Conv2DTranspose(200, (2, 2), strides=(2, 2), padding="same", data_format="channels_first")`
up6 = concatenate([Conv2DTranspose(200, (2, 2), strides=(2, 2), padding='same',dim_ordering="th")(conv5), conv4], axis=3)
Traceback (most recent call last):
File "<ipython-input-25-4b34507d9da0>", line 1, in <module>
runfile('C:/Users/Asus/Desktop/training.py', wdir='C:/Users/Asus/Desktop')
File "C:\Users\Asus\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "C:\Users\Asus\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Asus/Desktop/training.py", line 205, in <module>
train_and_predict()
File "C:/Users/Asus/Desktop/training.py", line 163, in train_and_predict
model = get_unet()
File "C:/Users/Asus/Desktop/training.py", line 116, in get_unet
up6 = concatenate([Conv2DTranspose(200, (2, 2), strides=(2, 2), padding='same',dim_ordering="th")(conv5), conv4], axis=3)
File "C:\Users\Asus\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\keras\layers\merge.py", line 641, in concatenate
return Concatenate(axis=axis, **kwargs)(inputs)
File "C:\Users\Asus\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\keras\engine\topology.py", line 594, in __call__
self.build(input_shapes)
File "C:\Users\Asus\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\keras\layers\merge.py", line 354, in build
'Got inputs shapes: %s' % (input_shape))
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 200, 50, 50), (None, 100, 50, 25)]
Here is my code:
#load dataset
import h5py
h5f = h5py.File('liver_augmented_dataset.h5', 'r')
X = h5f['ct_scans'][:]
Y = h5f['seg_mask'][:]
h5f.close()
X_ax = X[1310:2500]
Y_ax = Y[1310:2500]
X_t=X[2501:2619]
Y_t=Y[2501:2619]
image_rows = 200
image_cols = 200
def get_unet():
    inputs = Input(shape=(img_rows, img_cols, 1))
    # conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
    # conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    # pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(25, (3, 3), activation='relu', padding='same', dim_ordering="tf")(inputs)
    conv2 = Conv2D(25, (3, 3), activation='relu', padding='same', dim_ordering="tf")(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2), dim_ordering="tf")(conv2)
    conv3 = Conv2D(50, (3, 3), activation='relu', padding='same', dim_ordering="tf")(pool2)
    conv3 = Conv2D(50, (3, 3), activation='relu', padding='same', dim_ordering="tf")(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2), dim_ordering="tf")(conv3)
    conv4 = Conv2D(100, (3, 3), activation='relu', padding='same', dim_ordering="tf")(pool3)
    conv4 = Conv2D(100, (3, 3), activation='relu', padding='same', dim_ordering="tf")(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2), dim_ordering="tf")(conv4)
    conv5 = Conv2D(200, (3, 3), activation='relu', padding='same', dim_ordering="tf")(pool4)
    conv5 = Conv2D(200, (3, 3), activation='relu', padding='same', dim_ordering="tf")(conv5)
    up6 = concatenate([Conv2DTranspose(200, (2, 2), strides=(2, 2), padding='same', dim_ordering="tf")(conv5), conv4], axis=3)
    conv6 = Conv2D(100, (3, 3), activation='relu', padding='same', dim_ordering="tf")(up6)
    conv6 = Conv2D(100, (3, 3), activation='relu', padding='same', dim_ordering="tf")(conv6)
    up7 = concatenate([Conv2DTranspose(100, (2, 2), strides=(2, 2), padding='same', dim_ordering="tf")(conv6), conv3], axis=3)
    conv7 = Conv2D(50, (3, 3), activation='relu', padding='same', dim_ordering="tf")(up7)
    conv7 = Conv2D(50, (3, 3), activation='relu', padding='same', dim_ordering="tf")(conv7)
    up8 = concatenate([Conv2DTranspose(50, (2, 2), strides=(2, 2), padding='same', dim_ordering="tf")(conv7), conv2], axis=3)
    conv8 = Conv2D(25, (3, 3), activation='relu', padding='same', dim_ordering="tf")(up8)
    conv8 = Conv2D(25, (3, 3), activation='relu', padding='same', dim_ordering="tf")(conv8)
    #
    # up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
    # conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)
    # conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)
    conv10 = Conv2D(1, (1, 1), activation='sigmoid')(conv8)
    model = Model(inputs=[inputs], outputs=[conv10])
    model.compile(optimizer=Adam(lr=1e-5), loss=dice_coef_loss, metrics=[dice_coef])
    return model
Answer 0 (score: 0)
I was able to compile the model successfully. I could not reproduce the Concatenate error mentioned in the log.
The other thing to check is that the input you feed to the model must be reshaped to 4 dimensions. As with the reshape issue you mentioned, (1190, 200, 200) should be converted to (1190, 200, 200, 1), where the '1' is the number of bands.
So basically, for grayscale images you should add an extra dimension and convert each image to (img_rows, img_cols, bands).
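For illustration, a minimal sketch of that reshape (the array here is a dummy stand-in for the (1190, 200, 200) grayscale stack; the shapes and names are assumptions, so adapt them to your data):

import numpy as np

X_ax = np.zeros((1190, 200, 200), dtype=np.float32)   # dummy stand-in for the loaded CT slices
X_ax = X_ax[..., np.newaxis]                           # (1190, 200, 200) -> (1190, 200, 200, 1)
print(X_ax.shape)                                      # (1190, 200, 200, 1)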
Answer 1 (score: 0)
I ran into the same situation with grayscale images; reshaping the images solves it by adding an extra dimension for the grayscale channel:
train_images_reshape = train_images.reshape(no_images_train, h,w,1)
test_images_reshape = test_images.reshape(no_images_test, h,w,1)
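As a side note (not part of the answer above): no_images_train, h and w are the answerer's placeholder names and can be taken from the array's own shape, and np.expand_dims gives the same result. A self-contained sketch with dummy data, under those assumptions:

import numpy as np

train_images = np.zeros((100, 200, 200), dtype=np.float32)       # hypothetical grayscale batch
no_images_train, h, w = train_images.shape
train_images_reshape = train_images.reshape(no_images_train, h, w, 1)
# Equivalent one-liner:
train_images_reshape = np.expand_dims(train_images, axis=-1)
print(train_images_reshape.shape)                                 # (100, 200, 200, 1)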
Answer 2 (score: 0)
Keras needs the extra dimension to specify the channels.
The format is (no_of_images, height, width, n_channels), with n_channels = 1 for grayscale images and 3 for RGB.
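As a minimal illustration of that channels-last layout (not taken from the answer; the layer sizes simply mirror the first block of the U-Net posted in the question), an input of shape (200, 200, 1) keeps its channels in the last axis through Conv2D and MaxPooling2D:

from keras.layers import Input, Conv2D, MaxPooling2D
from keras.models import Model

inputs = Input(shape=(200, 200, 1))   # (height, width, n_channels), channels last
x = Conv2D(25, (3, 3), activation='relu', padding='same',
           data_format='channels_last')(inputs)
x = MaxPooling2D(pool_size=(2, 2), data_format='channels_last')(x)
model = Model(inputs=inputs, outputs=x)
model.summary()   # pooling output shape: (None, 100, 100, 25) -- channels stay in the last axis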