Validation accuracy not shown after the first epoch when using transfer learning with InceptionV3

Date: 2020-08-29 07:55:17

Tags: python tensorflow keras tensorflow2.0

I am trying to build an image classifier that distinguishes images into three classes: pumps, turbines, and PCBs. I am using transfer learning from Inception V3.

Below is my code for initializing InceptionV3:

import os

from tensorflow.keras import layers
from tensorflow.keras import Model
!wget --no-check-certificate \
    https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
    -O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
  
from tensorflow.keras.applications.inception_v3 import InceptionV3

local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'

pre_trained_model = InceptionV3(input_shape = (150, 150, 3), 
                                include_top = False, 
                                weights = None)

pre_trained_model.load_weights(local_weights_file)

for layer in pre_trained_model.layers:
  layer.trainable = False
  
# pre_trained_model.summary()

last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output

Next, I attach a DNN classifier head to the pre-trained model:

from tensorflow.keras.optimizers import RMSprop

# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)                  
# Add a final softmax layer with 3 units for the 3 classes
x = layers.Dense(3, activation='softmax')(x)

model = Model(pre_trained_model.input, x)

model.compile(optimizer = RMSprop(lr=0.0001), 
              loss = 'categorical_crossentropy', 
              metrics = ['accuracy'])

I feed in the images using ImageDataGenerator and train the model as follows:

history = model.fit(
            train_generator,
            validation_data = validation_generator,
            steps_per_epoch = 100,
            epochs = 20,
            validation_steps = 50,
            verbose = 2)
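The ImageDataGenerator setup itself is not shown above; a minimal sketch of how the two generators are typically built with flow_from_directory (the directory paths, batch size, and rescaling here are assumptions, not the question's actual code):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1]; paths and batch size are assumed
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        '/tmp/train',               # assumed: one sub-folder per class (pump, turbine, pcb)
        target_size=(150, 150),     # must match the model's input_shape
        batch_size=20,              # assumed
        class_mode='categorical')   # three classes -> one-hot labels for categorical_crossentropy

validation_generator = validation_datagen.flow_from_directory(
        '/tmp/validation',          # assumed
        target_size=(150, 150),
        batch_size=20,
        class_mode='categorical')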

However, the validation accuracy is not printed/generated after the first epoch:

Epoch 1/20
/usr/local/lib/python3.6/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data.  Expecting to read 4 bytes but only got 0. 
  warnings.warn(str(msg))
/usr/local/lib/python3.6/dist-packages/PIL/Image.py:932: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
  "Palette images with Transparency expressed in bytes should be "
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 50 batches). You may need to use the repeat() function when building your dataset.
100/100 - 43s - loss: 0.1186 - accuracy: 0.9620 - val_loss: 11.7513 - val_accuracy: 0.3267
Epoch 2/20
100/100 - 41s - loss: 0.1299 - accuracy: 0.9630
Epoch 3/20
100/100 - 39s - loss: 0.0688 - accuracy: 0.9840
Epoch 4/20
100/100 - 39s - loss: 0.0826 - accuracy: 0.9785
Epoch 5/20
100/100 - 39s - loss: 0.0909 - accuracy: 0.9810
Epoch 6/20
100/100 - 39s - loss: 0.0523 - accuracy: 0.9845
Epoch 7/20
100/100 - 38s - loss: 0.0976 - accuracy: 0.9835
Epoch 8/20
100/100 - 39s - loss: 0.0802 - accuracy: 0.9795
Epoch 9/20
100/100 - 39s - loss: 0.0612 - accuracy: 0.9860
Epoch 10/20
100/100 - 40s - loss: 0.0729 - accuracy: 0.9825
Epoch 11/20
100/100 - 39s - loss: 0.0601 - accuracy: 0.9870
Epoch 12/20
100/100 - 39s - loss: 0.0976 - accuracy: 0.9840
Epoch 13/20
100/100 - 39s - loss: 0.0591 - accuracy: 0.9815
Epoch 14/20

I don't understand what is preventing the validation accuracy from being printed/generated. I also get an error if I plot a graph of training accuracy vs. validation accuracy, with the following message:

ValueError: x and y must have same first dimension, but have shapes (20,) and (1,)
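The plotting code is not shown in the question; presumably it is something along the following lines, and it fails because history.history['val_accuracy'] holds only one entry (validation ran only in epoch 1) while history.history['accuracy'] holds one entry per epoch (a sketch, variable names assumed):

import matplotlib.pyplot as plt

acc = history.history['accuracy']          # length 20: one value per epoch
val_acc = history.history['val_accuracy']  # length 1: validation only ran in epoch 1
epochs = range(len(acc))

plt.plot(epochs, acc, label='Training accuracy')
plt.plot(epochs, val_acc, label='Validation accuracy')  # raises the ValueError above
plt.legend()
plt.show()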

What am I missing here?

2 Answers:

Answer 0: (score: 1)

It finally worked; posting my changes here in case anyone runs into a similar problem.

I changed the 'weights' argument of InceptionV3 from None to 'imagenet', and computed steps_per_epoch and validation_steps as follows:

steps_per_epoch = np.ceil(no_of_training_images / batch_size)

validation_steps = np.ceil(no_of_validation_images / batch_size)
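Put together, a sketch of how these values could be plugged back into model.fit. If the generators come from flow_from_directory they expose a .samples attribute, so the image counts need not be hard-coded; the batch_size value here is an assumption:

import numpy as np

batch_size = 20  # assumed; must match the generators' batch_size

# Read the image counts from the generators themselves
steps_per_epoch = int(np.ceil(train_generator.samples / batch_size))
validation_steps = int(np.ceil(validation_generator.samples / batch_size))

history = model.fit(
            train_generator,
            validation_data = validation_generator,
            steps_per_epoch = steps_per_epoch,
            epochs = 20,
            validation_steps = validation_steps,
            verbose = 2)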

Answer 1: (score: 0)

As you can see in the warning: WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 50 batches). You may need to use the repeat() function when building your dataset.
To make sure you have at least steps_per_epoch * epochs batches, set steps_per_epoch to:

steps_per_epoch = X_train.shape[0]//batch_size
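Since the question feeds data through ImageDataGenerator rather than NumPy arrays, the equivalent computation with the generators' own sample counts would be a sketch like:

# assuming flow_from_directory generators, which expose a .samples attribute
steps_per_epoch = train_generator.samples // batch_size
validation_steps = validation_generator.samples // batch_size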