Cannot compile CNN-LSTM model for image classification

Date: 2020-07-10 06:59:24

Tags: python lstm cnn

I plan to use a CNN + LSTM to classify images into 4 classes.

I am not very familiar with how to combine a CNN and an LSTM.

When I try to compile the CNN + LSTM I get the error: You must compile your model before using it.

The dataset is a series of medical images. Using the CNN alone I get roughly 70% accuracy (on a small sample of about 300 images), so I decided to add an LSTM to see whether the accuracy improves.

from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (LSTM, Dense, Embedding, Dropout, Conv2D, BatchNormalization, Activation,
                                     MaxPooling2D, Flatten, TimeDistributed, SpatialDropout1D)

train_datagen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2, zoom_range=0.2, rotation_range=45,
                                   horizontal_flip=True, vertical_flip=True, validation_split=.2)
validation_datagen = ImageDataGenerator(rescale=1. / 255, validation_split=.2)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(directory=r'', target_size=(224, 224), color_mode="rgb",
                                                    batch_size=32, class_mode='categorical', shuffle=True, seed=42)
validation_generator = validation_datagen.flow_from_directory(directory=r'', target_size=(224, 224), color_mode="rgb",
                                                              batch_size=32, class_mode='categorical', shuffle=True,
                                                              seed=42)
test_generator = test_datagen.flow_from_directory(directory=r'', target_size=(224, 224), color_mode="rgb",
                                                  batch_size=1, class_mode=None, shuffle=False, seed=42)

num_classes = 4
input_shape = (224, 224, 3)

# input image dimensions
img_rows, img_cols = 224, 224

model = Sequential()
# define CNN model
model.add(TimeDistributed(Conv2D(32, (3, 3), padding='same', input_shape=input_shape)))
model.add(TimeDistributed(BatchNormalization()))
model.add(TimeDistributed(Activation('relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))

model.add(TimeDistributed(Conv2D(64, (3, 3))))
model.add(TimeDistributed(BatchNormalization()))
model.add(TimeDistributed(Activation('relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Dropout(0.25)))

model.add(TimeDistributed(Flatten()))
model.add(TimeDistributed(Dense(256)))
model.add(TimeDistributed(BatchNormalization()))
model.add(TimeDistributed(Activation('relu')))
model.add(TimeDistributed(Dropout(0.25)))

# define LSTM model
model.add(LSTM(100, input_shape=(5, 1), return_sequences=True))
model.add(LSTM(Embedding(8192, 256)))
model.add(LSTM(SpatialDropout1D(0.3)))
model.add(LSTM(256, dropout=0.3, recurrent_dropout=0.3))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(5, activation='softmax'))

model.compile(loss=keras.losss.categorical_crossentropy, optimizer=RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0),
              metrics=['accuracy'])

STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
STEP_SIZE_VALID = validation_generator.n // validation_generator.batch_size
model.fit_generator(generator=train_generator, steps_per_epoch=50, validation_data=validation_generator,
                    validation_steps=STEP_SIZE_VALID, epochs=30)

1 answer:

Answer 0 (score: 0)

You just need to move input_shape=input_shape out of the Conv2D layer and pass it to the TimeDistributed wrapper instead, i.e.

model.add(TimeDistributed(Conv2D(32, (3, 3), padding='same'), input_shape=input_shape))
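
For context, here is a minimal sketch of that fix applied to the first convolutional block. The 5-frame sequence length and the name seq_input_shape are illustrative assumptions, since TimeDistributed expects its input to carry a time dimension, i.e. a shape of the form (time, height, width, channels):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, Activation,
                                     MaxPooling2D, TimeDistributed)

# Assumed sequence length of 5 frames per sample (illustrative only);
# TimeDistributed applies the wrapped layer to each frame independently.
seq_input_shape = (5, 224, 224, 3)

model = Sequential()
# input_shape goes on the TimeDistributed wrapper, not on the inner Conv2D
model.add(TimeDistributed(Conv2D(32, (3, 3), padding='same'),
                          input_shape=seq_input_shape))
model.add(TimeDistributed(BatchNormalization()))
model.add(TimeDistributed(Activation('relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.summary()  # the model now builds, so compile() and fit() can follow

Note that flow_from_directory yields batches of single images (4D tensors), so the data pipeline would also need to group images into sequences before a TimeDistributed/LSTM model can consume them; that part is outside the scope of this fix.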