When I train on the Stanford Dogs dataset from {@ {3}} below, my training and validation accuracy are very low. Can someone tell me what went wrong and how to improve the validation accuracy? Thanks.
Here is my code:
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os

image_generator = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest",
    validation_split=0.1)
BATCH_SIZE = 32
STEPS_PER_EPOCH = np.ceil((20580*0.8)/BATCH_SIZE)
train_generator = image_generator.flow_from_directory(
    r"\119698_791828_bundle_archive\images\Images",
    target_size=(224, 224),
    batch_size=BATCH_SIZE,
    shuffle=True,
    class_mode='categorical',
    subset='training')
validation_generator = image_generator.flow_from_directory(
    r"\119698_791828_bundle_archive\images\Images",
    target_size=(224, 224),
    batch_size=BATCH_SIZE,
    shuffle=True,
    class_mode='categorical',
    subset='validation')
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(120, activation='softmax')
])
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=['acc'])
model.fit(train_generator, validation_data=validation_generator,
          steps_per_epoch=STEPS_PER_EPOCH, epochs=15)
Output:
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 515.0 steps, validate for 63 steps
Epoch 1/15
515/515 [==============================] - 502s 975ms/step - loss: 4.7840 - acc: 0.0166 - val_loss: 4.9392 - val_acc: 0.0154
Epoch 2/15
515/515 [==============================] - 268s 521ms/step - loss: 4.4950 - acc: 0.0354 - val_loss: 4.4074 - val_acc: 0.0423
Epoch 3/15
515/515 [==============================] - 314s 610ms/step - loss: 4.3337 - acc: 0.0550 - val_loss: 4.3654 - val_acc: 0.0562
Epoch 4/15
515/515 [==============================] - 266s 516ms/step - loss: 4.2299 - acc: 0.0658 - val_loss: 4.2559 - val_acc: 0.0627
Epoch 5/15
515/515 [==============================] - 231s 448ms/step - loss: 4.1500 - acc: 0.0743 - val_loss: 4.2295 - val_acc: 0.0732
Epoch 6/15
515/515 [==============================] - 232s 451ms/step - loss: 4.1103 - acc: 0.0815 - val_loss: 4.1339 - val_acc: 0.0881
Epoch 7/15
515/515 [==============================] - 229s 444ms/step - loss: 4.0634 - acc: 0.0860 - val_loss: 4.1033 - val_acc: 0.0841
Epoch 8/15
515/515 [==============================] - 233s 453ms/step - loss: 4.0332 - acc: 0.0914 - val_loss: 4.0654 - val_acc: 0.0986
Epoch 9/15
515/515 [==============================] - 237s 460ms/step - loss: 3.9903 - acc: 0.0911 - val_loss: 4.1224 - val_acc: 0.0876
Epoch 10/15
515/515 [==============================] - 249s 483ms/step - loss: 3.9787 - acc: 0.0985 - val_loss: 4.0670 - val_acc: 0.1050
Epoch 11/15
515/515 [==============================] - 250s 486ms/step - loss: 3.9668 - acc: 0.1014 - val_loss: 4.1024 - val_acc: 0.0836
Epoch 12/15
515/515 [==============================] - 453s 879ms/step - loss: 3.9535 - acc: 0.0999 - val_loss: 3.9681 - val_acc: 0.1025
Epoch 13/15
515/515 [==============================] - 375s 729ms/step - loss: 3.9728 - acc: 0.1033 - val_loss: 4.0681 - val_acc: 0.0996
Epoch 14/15
515/515 [==============================] - 530s 1s/step - loss: 3.9487 - acc: 0.1024 - val_loss: 3.9612 - val_acc: 0.1025
Epoch 15/15
515/515 [==============================] - 382s 741ms/step - loss: 3.9396 - acc: 0.1058 - val_loss: 3.9932 - val_acc: 0.1045
Here is the model summary:
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_6 (Conv2D) (None, 222, 222, 16) 448
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 111, 111, 16) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 109, 109, 32) 4640
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 54, 54, 32) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 52, 52, 64) 18496
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 26, 26, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 43264) 0
_________________________________________________________________
dense_4 (Dense) (None, 1024) 44303360
_________________________________________________________________
dropout_2 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_5 (Dense) (None, 120) 123000
=================================================================
Total params: 44,449,944
Trainable params: 44,449,944
Non-trainable params: 0
Also, how do I get rid of the warnings when training the model? Thanks, everyone.
Edit: I trained the model for another 15 epochs, but the loss barely decreases and sometimes does not change at all:
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 515.0 steps, validate for 63 steps
Epoch 1/15
515/515 [==============================] - 553s 1s/step - loss: 3.9552 - acc: 0.1045 - val_loss: 3.9564 - val_acc: 0.1075
Epoch 2/15
515/515 [==============================] - 266s 516ms/step - loss: 3.9427 - acc: 0.1017 - val_loss: 4.0370 - val_acc: 0.0921
Epoch 3/15
515/515 [==============================] - 266s 517ms/step - loss: 3.9321 - acc: 0.1054 - val_loss: 3.9974 - val_acc: 0.0921
Epoch 4/15
515/515 [==============================] - 289s 560ms/step - loss: 3.9282 - acc: 0.1077 - val_loss: 4.0145 - val_acc: 0.0961
Epoch 5/15
515/515 [==============================] - 334s 648ms/step - loss: 3.9279 - acc: 0.1049 - val_loss: 4.1821 - val_acc: 0.0811
Epoch 6/15
515/515 [==============================] - 387s 752ms/step - loss: 3.9530 - acc: 0.1079 - val_loss: 4.0147 - val_acc: 0.0971
Epoch 7/15
515/515 [==============================] - 408s 792ms/step - loss: 3.9587 - acc: 0.1035 - val_loss: 4.0351 - val_acc: 0.0966
Epoch 8/15
515/515 [==============================] - 246s 477ms/step - loss: 3.9525 - acc: 0.0999 - val_loss: 3.9847 - val_acc: 0.0946
Epoch 9/15
515/515 [==============================] - 254s 494ms/step - loss: 3.9628 - acc: 0.1030 - val_loss: 4.0428 - val_acc: 0.1025
Epoch 10/15
515/515 [==============================] - 237s 460ms/step - loss: 3.9671 - acc: 0.1047 - val_loss: 4.2874 - val_acc: 0.0951
Epoch 11/15
515/515 [==============================] - 228s 444ms/step - loss: 3.9597 - acc: 0.1032 - val_loss: 4.4911 - val_acc: 0.0971
Epoch 12/15
515/515 [==============================] - 248s 481ms/step - loss: 3.9674 - acc: 0.1052 - val_loss: 4.0222 - val_acc: 0.0966
Epoch 13/15
515/515 [==============================] - 255s 496ms/step - loss: 3.9799 - acc: 0.0986 - val_loss: 4.1341 - val_acc: 0.0836
Epoch 14/15
515/515 [==============================] - 255s 495ms/step - loss: 3.9978 - acc: 0.0968 - val_loss: 4.2690 - val_acc: 0.0762
Epoch 15/15
515/515 [==============================] - 254s 493ms/step - loss: 3.9963 - acc: 0.0990 - val_loss: 4.1857 - val_acc: 0.0772
Answer 0 (score: 0)
I believe your model is not complex enough. I strongly suggest doing transfer learning with the MobileNet model. Your model has 44 million parameters, so it is computationally expensive; MobileNet has only about 4 million parameters and is therefore much faster. Documentation for using MobileNet is [here][1]. I also recommend using an adjustable learning rate; the Keras callback ReduceLROnPlateau provides an easy way to do this, and its documentation is [here][2]. Set it up to monitor the validation loss and reduce the learning rate by a factor of 0.8 if it fails to improve. I also recommend the Keras callback ModelCheckpoint, documented [here][3]. Set it up to monitor the validation loss and save the model with the lowest loss, then use that model to make predictions on the test set. Finally, I recommend the Adamax optimizer, documented [here][4], with an initial learning rate of .004. The code below shows the setup using MobileNet.
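(The answer's original code snippet was not preserved in this copy; below is a minimal sketch of the setup it describes: a frozen MobileNet base, Adamax at a learning rate of .004, ReduceLROnPlateau with factor 0.8, and ModelCheckpoint monitoring validation loss. The `train_generator` and `validation_generator` are the ones from the question, so the fit call is left commented out; the filename `best_model.h5` and the `patience` value are illustrative choices, not from the original answer.)

```python
import tensorflow as tf

# In practice set WEIGHTS = "imagenet" so that freezing the base actually
# reuses pretrained features; None is used here only to avoid the download.
WEIGHTS = None

# MobileNet base without its classifier head; global average pooling gives
# a flat feature vector so no Flatten layer is needed.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3),
    include_top=False,
    weights=WEIGHTS,
    pooling="avg")
base.trainable = False  # freeze the base for transfer learning

model = tf.keras.models.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(120, activation="softmax"),  # 120 dog breeds
])

model.compile(
    optimizer=tf.keras.optimizers.Adamax(learning_rate=0.004),
    loss="categorical_crossentropy",
    metrics=["acc"])

callbacks = [
    # Reduce the learning rate by a factor of 0.8 when val_loss plateaus.
    tf.keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.8, patience=1),
    # Keep only the model with the lowest validation loss seen so far.
    tf.keras.callbacks.ModelCheckpoint(
        "best_model.h5", monitor="val_loss", save_best_only=True),
]

# Reusing the generators defined in the question:
# model.fit(train_generator, validation_data=validation_generator,
#           epochs=15, callbacks=callbacks)
```

With the base frozen, only the final Dense layer (about 123k parameters) is trained, which is far cheaper than the 44 million parameters in the question's model.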