I'm currently training an image classification model on three types of vehicles (van/SUV, sedan, and truck). I have 1,800 training images and 210 validation images. I preprocess the data with keras.preprocessing.image.ImageDataGenerator() and Val_Data.flow(). Something is clearly going wrong, because my accuracy never changes. My code and results are below. I've been trying to fix this for a long time and can't figure it out.
Code:
# Creating Training Data Shuffled and Organized
Train_Data = keras.preprocessing.image.ImageDataGenerator()
Train_Gen = Train_Data.flow(
Train_Img,
Train_Labels,
batch_size=BATCH_SIZE,
shuffle=True)
# Creating Validation Data Shuffled and Organized
Val_Data = keras.preprocessing.image.ImageDataGenerator()
Val_Gen = Val_Data.flow(
Train_Img,
Train_Labels,
batch_size=BATCH_SIZE,
shuffle=True)
print(Train_Gen)
###################################################################################
###################################################################################
#Outline the Model
hidden_layer_size = 300
output_size = 3
#Model Core
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(IMG_HEIGHT,IMG_WIDTH,CHANNELS)),
tf.keras.layers.Dense(hidden_layer_size, activation = 'relu'),
tf.keras.layers.Dense(hidden_layer_size, activation = 'relu'),
tf.keras.layers.Dense(hidden_layer_size, activation = 'relu'),
tf.keras.layers.Dense(hidden_layer_size, activation = 'relu'),
tf.keras.layers.Dense(hidden_layer_size, activation = 'relu'),
tf.keras.layers.Dense(output_size, activation = 'softmax')
])
custom_optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
#Compile Model
model.compile(optimizer='adam', loss ='sparse_categorical_crossentropy', metrics = ['accuracy'])
#Train Model
NUM_EPOCHS = 15
model.fit(Train_Gen, validation_steps = 10, epochs = NUM_EPOCHS, validation_data = Val_Gen, verbose = 2)
Results:
Epoch 1/15
180/180 - 27s - loss: 10.7153 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 2/15
180/180 - 23s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 3/15
180/180 - 23s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 4/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 5/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 6/15
180/180 - 21s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 7/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 8/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 9/15
180/180 - 23s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 10/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 11/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 12/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 13/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 14/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 15/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Answer 0 (score: 0)
The first thing I'd suggest you look into is convolutional neural networks, since I can see you're trying to solve an image-based problem with a dense network. That can work, but not nearly as well as a CNN.
There are many reasons a model can get stuck in TensorFlow; here is the one I run into most often:
First, set your custom optimizer's learning rate lower than the default, i.e.:
custom_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer=custom_optimizer, loss="sparse_categorical_crossentropy", metrics=['acc'])
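Note that in the question's code, `custom_optimizer` is created but never passed to `model.compile` (the string `'adam'` is used instead), so the custom learning rate has no effect at all. A minimal sketch of how to confirm which learning rate the compiled model is actually using (the tiny model here is just a stand-in):

```python
import tensorflow as tf

# Stand-in model -- the check works the same for the real one.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation='softmax', input_shape=(4,))
])

custom_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
# Pass the optimizer object itself, not the string 'adam'.
model.compile(optimizer=custom_optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Read back the learning rate the compiled model will actually use.
lr = float(model.optimizer.learning_rate.numpy())
print(lr)
```

If this prints the Keras default (0.001) instead of the value you set, the custom optimizer never made it into `compile`.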
Check this link; it's a CNN implementation close to what you need: https://gist.github.com/RyanAkilos/3808c17f79e77c4117de35aa68447045
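A minimal CNN along the lines the answer suggests might look like the sketch below. The layer sizes are illustrative, and the `IMG_HEIGHT`/`IMG_WIDTH`/`CHANNELS` values are assumptions you'd replace with your own; the `Rescaling` layer normalizes raw pixel values into [0, 1], which also helps when training stalls with a large, flat loss like the one in the question:

```python
import numpy as np
import tensorflow as tf

# Assumed input dimensions -- replace with your actual values.
IMG_HEIGHT, IMG_WIDTH, CHANNELS = 64, 64, 3

model = tf.keras.Sequential([
    # Normalize raw pixels into [0, 1] so gradients stay well-behaved.
    tf.keras.layers.Rescaling(1.0 / 255,
                              input_shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),  # 3 vehicle classes
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Sanity check on an untrained model: one probability row per sample.
probs = model.predict(np.random.rand(4, IMG_HEIGHT, IMG_WIDTH, CHANNELS),
                      verbose=0)
print(probs.shape)
```

Training it is then the same `model.fit(Train_Gen, ...)` call as in the question's code.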