CNN model is not training on the full dataset

Asked: 2021-07-03 09:22:39

Tags: python tensorflow keras deep-learning conv-neural-network

My dataset is the mnist sign_language_train image dataset, which has about 27000 entries. I split it into a training set of about 21000 entries and a validation set of about 6000 entries. However, when I fit the data to the model, it only trains 687 entries per epoch.

from sklearn.model_selection import train_test_split

X_train, X_validate, y_train, y_validate = train_test_split(X_train, y_train, test_size=0.2, random_state=123)

# Add the trailing channel dimension that Conv2D expects: (N, 28, 28, 1)
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_validate = X_validate.reshape(X_validate.shape[0], 28, 28, 1)

print(X_train.shape)     # (21964, 28, 28, 1)
print(y_train.shape)     # (21964,)
print(X_validate.shape)  # (5491, 28, 28, 1)
print(y_validate.shape)  # (5491,)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout

model = Sequential()

# input_shape is only needed on the first layer; Keras infers it for the rest
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPool2D(pool_size=(2, 2), strides=2))

model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), strides=2))

model.add(Conv2D(filters=128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), strides=2))

model.add(Flatten())

model.add(Dense(64, activation="relu"))
model.add(Dense(128, activation="relu"))
#model.add(Dropout(0.2))
model.add(Dense(128, activation="relu"))
#model.add(Dropout(0.3))
model.add(Dense(25, activation="softmax"))

I then fit the model on the training and validation sets described above, as shown below, and for each epoch it only trains 687 entries.

from tensorflow.keras.optimizers import SGD

model.compile(optimizer=SGD(learning_rate=0.001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history2 = model.fit(X_train, y_train, epochs=50, validation_data=(X_validate, y_validate))

1 Answer:

Answer 0 (score: 1)

I think you are confused by the Keras logs. The number you see in the epoch log during training is not the total number of samples but the total number of batches. You have 21964 training samples and the default batch size is 32, so you will see ceil(21964 / 32) = ceil(686.375) = 687 iterations (the last batch is not full). If you want to confirm this, set batch_size to 1; you should then see 21964 iterations, one sample per batch.
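
As a quick sanity check (a minimal sketch reusing the model, X_train, y_train, X_validate, and y_validate from the question above), you can compute the expected batch count yourself and pass batch_size to model.fit explicitly:

import math

batch_size = 32                       # Keras' default when batch_size is not passed to fit()
print(math.ceil(21964 / batch_size))  # 687 -- the step count shown in the epoch progress bar

# Passing batch_size explicitly makes the step count predictable; with
# batch_size=1 the progress bar should read 21964/21964 instead.
model.fit(X_train, y_train, epochs=1, batch_size=batch_size,
          validation_data=(X_validate, y_validate))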

Example

import tensorflow as tf

def train():
  mnist = tf.keras.datasets.mnist
  (x_train, y_train), (x_test, y_test) = mnist.load_data()
  print(x_train.shape)

  # normalize data
  x_train = tf.keras.utils.normalize(x_train, axis=1)
  x_test = tf.keras.utils.normalize(x_test, axis=1)

  # build and train the model
  model = tf.keras.models.Sequential()
  model.add(tf.keras.layers.Flatten())
  model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
  model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
  model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))

  model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])
  model.fit(x_train, y_train, epochs=1)
  return model

model = train()

Output:

(60000, 28, 28)
1875/1875 [=====================] - 5s 2ms/step - loss: 0.2594 - accuracy: 0.9240

In the example above we have 60000 samples and a batch size of 32 (the default), so we should see 60000 / 32 = 1875 iterations, as shown in the epoch log.
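
As a short follow-up (a minimal sketch that reuses the train() function defined above), changing batch_size changes the number of iterations the progress bar reports:

import math
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = tf.keras.utils.normalize(x_train, axis=1)

# The progress-bar length is the number of batches, not samples:
print(math.ceil(60000 / 32))  # 1875 steps with the default batch size
print(math.ceil(60000 / 64))  # 938 steps with batch_size=64

model = train()                                       # trains one epoch, as above
model.fit(x_train, y_train, epochs=1, batch_size=64)  # log should show 938/938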