So I compiled a model with the following code:
from tensorflow.keras.optimizers import Adam

def train(model, train_generator, test_generator):
    optimizer = Adam(lr=0.0001, decay=1e-6)
    model.compile(optimizer=optimizer,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    # 28709 and 7178 are the train/test sample counts;
    # BATCH_SIZE is defined elsewhere.
    history = model.fit_generator(train_generator,
                                  epochs=100,
                                  steps_per_epoch=28709 // BATCH_SIZE,
                                  validation_steps=7178 // BATCH_SIZE,
                                  validation_data=test_generator)
    return history
And this is what I get:
Epoch 1/100
895/897 [============================>.] - ETA: 0s - loss: 1.6074 - accuracy: 0.3578WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 224 batches). You may need to use the repeat() function when building your dataset.
897/897 [==============================] - 12s 13ms/step - loss: 1.6068 - accuracy: 0.3581 - val_loss: 1.4521 - val_accuracy: 0.4432
Epoch 2/100
897/897 [==============================] - 10s 11ms/step - loss: 1.3438 - accuracy: 0.4825
Epoch 3/100
897/897 [==============================] - 10s 11ms/step - loss: 1.2086 - accuracy: 0.5401
Epoch 4/100
897/897 [==============================] - 10s 11ms/step - loss: 1.1010 - accuracy: 0.5804
Epoch 5/100
897/897 [==============================] - 10s 11ms/step - loss: 1.0069 - accuracy: 0.6204
At the end of each epoch I don't see val_loss (except after the first one).
What is missing in my code?
Does it make any difference that I'm running it in Google Colab? Because I do get val_loss on my PC!
Thanks!
Answer 0 (score: 0)
Try reducing the batch_size, i.e. the number of steps per epoch. The warning clearly says your input ran out of data: test_generator cannot supply validation_steps batches for every epoch, so once it is exhausted during the first epoch's validation, validation is silently skipped for the rest of training.
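For what it's worth, the warning's repeat() suggestion points at one concrete fix: make sure the validation data can serve validation_steps batches every single epoch. A minimal sketch, assuming the test data is available as NumPy arrays (x_test and y_test are hypothetical names here; BATCH_SIZE as in the question):

import tensorflow as tf

# Hypothetical stand-ins for the real test data. A tf.data.Dataset
# that repeats indefinitely can always serve validation_steps
# batches, so validation runs at the end of every epoch.
test_dataset = (tf.data.Dataset
                .from_tensor_slices((x_test, y_test))
                .batch(BATCH_SIZE)
                .repeat())

model.fit(train_generator,
          epochs=100,
          steps_per_epoch=28709 // BATCH_SIZE,
          validation_data=test_dataset,
          validation_steps=7178 // BATCH_SIZE)

Alternatively, if test_generator comes from ImageDataGenerator.flow_from_directory, it already loops forever on its own, and the Colab-vs-PC difference is likely just a TensorFlow version difference in how generator exhaustion is handled.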