Negative dimension size caused by subtracting 5 from 1 for 'conv3d_1/convolution' (op: 'Conv3D') with input shapes

Time: 2021-04-23 02:24:58

Tags: tensorflow keras deep-learning conv-neural-network

I am trying to train data with a 3D CNN, and I used the following code:

# image specification
img_rows,img_cols,img_depth=16,16,15
# CNN Training parameters

batch_size = 2
nb_classes = 6
nb_epoch =50

# number of convolutional filters to use at each layer
nb_filters = [32, 32]

# level of pooling to perform at each layer (POOL x POOL)
nb_pool = [3, 3]

# level of convolution to perform at each layer (CONV x CONV)
nb_conv = [5,5]

Define the model:

model = Sequential()
model.add(Convolution3D(nb_filters[0],nb_conv[0], nb_conv[0],nb_conv[0], input_shape=(1, img_rows, img_cols, patch_size), activation='relu'))

model.add(MaxPooling3D(pool_size=(nb_pool[0], nb_pool[0], nb_pool[0])))

model.add(Dropout(0.5))

model.add(Flatten())

model.add(Dense(128, init='normal', activation='relu'))

model.add(Dropout(0.5))

model.add(Dense(nb_classes,init='normal'))

model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='RMSprop')

When I run it, I get this error:

ValueError: Negative dimension size caused by subtracting 5 from 1 for 'conv3d_1/convolution' (op: 'Conv3D') with input shapes: [?,1,16,16,15], [5,5,5,15,32].

Can someone suggest a solution?

1 answer:

Answer 0: (score: 0)

There is a problem in the model structure. In the code below, the commented-out lines mark the problematic parts of your original code and the new lines show what to add. The error itself comes from the input shape: with the default channels_last data format, input_shape=(1, img_rows, img_cols, patch_size) is read as a volume of depth 1, height 16 and width 16 with 15 channels, so a 5x5x5 kernel cannot slide along a dimension of size 1 (1 - 5 + 1 < 0, hence the negative dimension).
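As a quick check (a minimal sketch, assuming TensorFlow 2.x), you can print the backend's default data format to confirm where the channel axis is expected:

import tensorflow as tf

# With 'channels_last' (the default), the last entry of input_shape is the
# channel axis, so (1, 16, 16, 15) is read as a 1x16x16 volume with 15 channels.
print(tf.keras.backend.image_data_format())  # typically 'channels_last'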

...the complete code follows here for further processing:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution3D, MaxPooling3D, Dropout, Flatten, Dense, Activation

# image specification
img_rows,img_cols,img_depth=16,16,15
# CNN Training parameters

batch_size = 2
nb_classes = 6
nb_epoch =50

# number of convolutional filters to use at each layer
nb_filters = [32, 32]

# level of pooling to perform at each layer (POOL x POOL)
nb_pool = [3, 3]

# level of convolution to perform at each layer (CONV x CONV)
nb_conv = [5,5]

# Also define the kernel size explicitly...
kernel_size_definition=(3,3,3)

model = Sequential()
#model.add(Convolution3D(nb_filters[0],nb_conv[0], nb_conv[0],nb_conv[0], input_shape=(1, img_rows, img_cols, patch_size), activation='relu'))
model.add(Convolution3D(nb_filters[0],kernel_size=kernel_size_definition, input_shape=(img_rows, img_cols, img_depth,1), activation='relu'))
model.add(MaxPooling3D(pool_size=(nb_pool[0], nb_pool[0], nb_pool[0])))

# Should a second Conv3D/MaxPooling3D block (nb_filters[1], nb_pool[1]) be added here, too?

model.add(Dropout(0.5))

model.add(Flatten())

#model.add(Dense(128, init='normal', activation='relu'))
model.add(Dense(128, activation='relu'))

model.add(Dropout(0.5))

#model.add(Dense(nb_classes,init='normal'))

#model.add(Activation('softmax'))

model.add(Dense(nb_classes))

model.compile(loss='categorical_crossentropy', optimizer='RMSprop')

#Let's probe the model...
test_input=tf.ones((1,img_rows, img_cols, img_depth,1))

# ...with an input of this shape...
print(test_input.shape)

test_result=model(test_input)

#And see the corresponding output...
print(test_result.shape)
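
Once the model builds, the training data has to carry the same trailing channel axis. The following is a minimal sketch (not from the original answer); X_train and y_train are hypothetical placeholder arrays standing in for the real data:

import numpy as np
from tensorflow.keras.utils import to_categorical

# Placeholder volumes of shape (img_rows, img_cols, img_depth); add the channel
# axis so each sample matches input_shape=(img_rows, img_cols, img_depth, 1).
X_train = np.random.rand(10, img_rows, img_cols, img_depth).astype('float32')
X_train = np.expand_dims(X_train, axis=-1)            # -> (10, 16, 16, 15, 1)

# Integer class labels one-hot encoded for categorical_crossentropy.
y_train = np.random.randint(0, nb_classes, size=10)
y_train = to_categorical(y_train, num_classes=nb_classes)

# Note: the final Dense layer above has no softmax, so before real training
# either give it activation='softmax' or use a loss configured for logits.
model.fit(X_train, y_train, batch_size=batch_size, epochs=nb_epoch)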