model.fit_generator() returns an error: Invalid Argument

Time: 2020-04-16 02:24:09

Tags: tensorflow image-processing keras conv-neural-network sequential

Below is the code I use to train some hand gestures. The training data directories are 'E:\build\set_1\training\palm\seq_01', 'E:\build\set_1\training\palm\seq_02', and so on.
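For context, flow_from_directory treats each immediate subdirectory of the directory you pass as one class and collects images recursively inside each class folder (so the seq_01/seq_02 subfolders are fine). Below is a minimal check of that layout, assuming the same TRAINING_DIR as in the code further down; the two gesture folder names other than palm are placeholders:

import os

TRAINING_DIR = 'E:/build/set_1/training/'

# Each immediate subfolder becomes one class label; with Dense(3) at the output
# there should be exactly three of them, for example:
#   E:/build/set_1/training/
#       palm/          (images can sit in seq_01/, seq_02/, ...)
#       <gesture_2>/   (placeholder name)
#       <gesture_3>/   (placeholder name)
print(sorted(os.listdir(TRAINING_DIR)))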

I get the following error on the last few lines. I have tried both of the lines shown there, but each of them raises an Invalid Argument error. I am running this code in a Jupyter notebook.

import tensorflow as tf
from tensorflow import keras
# Use the image preprocessing utilities bundled with tf.keras rather than the
# standalone keras_preprocessing package, so everything comes from one Keras.
from tensorflow.keras.preprocessing.image import ImageDataGenerator


# Backslashes in a normal string literal are escape sequences (\t, \b, ...),
# so use a raw string for Windows paths.
path = r'E:\build\set_1\training'

training_datagen = ImageDataGenerator(rescale = 1./255)

TRAINING_DIR = 'E:/build/set_1/training/'
train_generator = training_datagen.flow_from_directory(
    TRAINING_DIR,
    target_size = (150,150),
    class_mode= 'categorical',
    batch_size=64
)


VALIDATION_DIR = "E:/build/set_1/test/"
validation_datagen = ImageDataGenerator(rescale = 1./255)

validation_generator = validation_datagen.flow_from_directory(
    VALIDATION_DIR,
    target_size=(150,150),
    class_mode='categorical',
    batch_size=64
)

model = tf.keras.models.Sequential([
    # Note the input shape is the desired image size: 150x150 with 3 color channels
    # This is the first convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The second convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # The third convolution
    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # The fourth convolution
    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # Flatten the results to feed into a DNN
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    # 512 neuron hidden layer
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])

model.summary()

model.compile(loss='categorical_crossentropy',optimizer = 'rmsprop',
              metrics= ['accuracy'])


# First attempt: fit_generator (deprecated in TF 2.x in favour of fit)
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // validation_generator.batch_size
)

# Second attempt: fit, which accepts generators directly
history = model.fit(train_generator, epochs=25, validation_data=validation_generator, verbose=1)
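A minimal sanity check, assuming the generators above have been created, that can help narrow down an InvalidArgumentError before training: confirm both generators actually found three classes and that a single batch has the shape the model expects.

# Run after train_generator / validation_generator are created.
print(train_generator.class_indices)       # should map exactly 3 folder names to 0..2
print(train_generator.samples, validation_generator.samples)

x_batch, y_batch = next(train_generator)   # pull one batch
print(x_batch.shape, y_batch.shape)        # expect (64, 150, 150, 3) and (64, 3)

# If this prints 0 (fewer than 64 training images), steps_per_epoch=0 is
# worth ruling out as the culprit before blaming fit_generator itself.
print(train_generator.samples // train_generator.batch_size)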

0 Answers:

There are no answers yet.