With TensorFlow 2.0, the expert-style code gives worse training results than the beginner-style code

Date: 2019-04-13 16:16:43

Tags: tensorflow tensorflow2.0

In the TensorFlow 2.0.0 alpha tutorials, I saw two quickstart templates: one for beginners that trains with model.fit, and one for experts that uses tf.GradientTape() with an optimizer directly.

I tried both versions on the same data with the same network, but only the beginner code trains well.

Input: 240 images of size [224, 224]

Network: VGG19 from keras.applications

Number of classes: 24

Model:

vgg = VGG19(input_shape=(224, 224, 3), include_top=False, pooling='avg', weights='imagenet')
vgg.trainable = True
for layer in vgg.layers[:17]:
    layer.trainable = False

model = models.Sequential([vgg, Dense(24, activation='softmax')])

Beginner code:

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=total_epochs, batch_size=4)
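With 240 samples and batch_size=4, fit runs 60 steps per epoch, which matches the "x/60" progress bar in the beginner results below. A quick sanity check of that arithmetic in plain Python (the sample and batch counts are taken from the question):

```python
import math

samples = 240    # 240 images of size [224, 224]
batch_size = 4   # as passed to model.fit

steps_per_epoch = math.ceil(samples / batch_size)
print(steps_per_epoch)  # 60, matching the progress bar
```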

Expert code:

loss_object = tf.keras.losses.CategoricalCrossentropy()
optimizer = tf.keras.optimizers.RMSprop(base_learning_rate)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')


@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    # gradients = [tf.clip_by_norm(g, 5) for g in gradients]
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, predictions)


x, y = read_data('Images')
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(cast).repeat(total_epochs).shuffle(300).batch(4)
for epoch in range(total_epochs):
    for image, label in dataset:
        train_step(image, label)

    template = 'Epoch {}, Loss: {}, Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100))
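One thing worth noting about the pipeline above: dataset.repeat(total_epochs) bakes every repetition into the dataset itself, so a single iteration of the outer for-epoch loop already consumes total_epochs passes over the data, and the outer loop then re-runs that repeated dataset every epoch. A sketch of the resulting step counts (total_epochs is assumed to be 10 here for illustration; only the 240 samples and batch size 4 come from the question):

```python
# Hypothetical total_epochs for illustration; samples and batch_size
# are the values from the question.
samples, batch_size, total_epochs = 240, 4, 10

batches_per_pass = samples // batch_size                    # 60 batches per pass over the data
# repeat(total_epochs) makes one "dataset" hold total_epochs passes,
# so one outer-loop epoch iterates all of them:
batches_per_outer_epoch = batches_per_pass * total_epochs   # 600
# and the outer loop itself runs total_epochs times:
total_train_steps = batches_per_outer_epoch * total_epochs  # 6000 calls to train_step
print(batches_per_outer_epoch, total_train_steps)
```

Separately, train_loss and train_accuracy are never reset with reset_states(), so the printed per-epoch values are running averages over all steps so far rather than per-epoch numbers.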

Beginner results:

1/60 [..............................] - ETA: 14s - loss: 0.0631 - accuracy: 1.0000
2/60 [>.............................] - ETA: 14s - loss: 0.0440 - accuracy: 1.0000
3/60 [>.............................] - ETA: 13s - loss: 0.0340 - accuracy: 1.0000
4/60 [=>............................] - ETA: 13s - loss: 0.0274 - accuracy: 1.0000
5/60 [=>............................] - ETA: 13s - loss: 0.0307 - accuracy: 1.0000
6/60 [==>...........................] - ETA: 13s - loss: 0.0279 - accuracy: 1.0000
7/60 [==>...........................] - ETA: 13s - loss: 0.0261 - accuracy: 1.0000
8/60 [===>..........................] - ETA: 12s - loss: 0.0315 - accuracy: 1.0000
9/60 [===>..........................] - ETA: 12s - loss: 0.0293 - accuracy: 1.0000
10/60 [====>.........................] - ETA: 12s - loss: 0.0272 - accuracy: 1.0000
11/60 [====>.........................] - ETA: 12s - loss: 0.0300 - accuracy: 1.0000
12/60 [=====>........................] - ETA: 11s - loss: 0.0446 - accuracy: 1.0000
13/60 [=====>........................] - ETA: 11s - loss: 0.0416 - accuracy: 1.0000
14/60 [======>.......................] - ETA: 11s - loss: 0.0440 - accuracy: 1.0000
15/60 [======>.......................] - ETA: 11s - loss: 0.0439 - accuracy: 1.0000

Expert results:

Epoch 1, Loss: 4.321621, Accuracy: 3.225641
Epoch 2, Loss: 3.671219, Accuracy: 4.160391
Epoch 3, Loss: 3.451912, Accuracy: 3.561214

0 answers