When using a generator with Keras, what does the X in the n/X printed each epoch represent?

Date: 2017-10-08 00:53:14

Tags: tensorflow neural-network keras theano

When using fit_generator with verbose set to 1, Keras prints output like the example below. I am using 14,753 input images, split 80% for training and 20% for testing. The batch size is 32.

Sometimes I understand how X works. When you take 14,753 observations * 80% for training / 32 (batch size), you get 368, which is what X sometimes shows. I suspect this number represents the number of times the generator yields data. But at other times, when training deeper, more complex models, X is much higher and shows seemingly arbitrary numbers > 1000. How is X calculated?
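The arithmetic described above can be sketched directly (a minimal illustration using the numbers from the question; the variable names are illustrative, not from the original code):

```python
# Reproduce the calculation described above: 80% of the 14,753
# images are used for training, in batches of 32.
num_images = 14753
train_fraction = 0.8
batch_size = 32

# int() truncates, matching int(len(imgIds) * 0.8 / batch_size)
steps_per_epoch = int(num_images * train_fraction / batch_size)
print(steps_per_epoch)  # 368
```

So when steps_per_epoch is computed this way, X = 368 is expected; the question is why a different value appears for the second network.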

Epoch 1

1/10 (n = 1, X = 10)

2/10

3/10

Epoch 2

1/10

2/10

3/10, etc.

Neural network 1 (X = 368, which makes sense)

activation_function = 'relu'
epoch_count = 20
loss_function = 'mean_squared_error'
opt = 'adam'

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(3, 3), input_shape=inp_shape))
model.add(Conv2D(filters=32, kernel_size=(3, 3), input_shape=inp_shape))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Flatten())
model.add(Dense(32, activation=activation_function))
model.add(Dropout(rate=0.5))
model.add(Dense(num_targets))
model.summary()
model.compile(loss=loss_function, optimizer=opt)

hist = model.fit_generator(
    generator(imgIds, batch_size=batch_size, is_train=True),
    validation_data=generator(imgIds, batch_size=batch_size, is_val=True),
    validation_steps=steps_per_val,
    steps_per_epoch=steps_per_epoch,
    epochs=epoch_count,
    verbose=verbose_level)

Neural network 2 (X = 1602, which I don't understand)

activation_function = 'relu'
batch_size = 32
steps_per_epoch = int(len(imgIds) * 0.8 / batch_size)
steps_per_val = int(len(imgIds) * 0.2 / batch_size)
epoch_count = 20
loss_function = 'mean_squared_error'
opt = 'adam'

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), input_shape=inp_shape, activation=activation_function))
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation=activation_function))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation=activation_function))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation=activation_function))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation=activation_function))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation=activation_function))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=128, kernel_size=(3, 3), activation=activation_function))
model.add(Conv2D(filters=128, kernel_size=(3, 3), activation=activation_function))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(300, activation=activation_function))
model.add(Dropout(rate=0.5))
model.add(Dense(200, activation=activation_function))
model.add(Dense(num_targets))
model.summary()
model.compile(loss=loss_function, optimizer=opt)

hist = model.fit_generator(
    generator(imgIds, batch_size=batch_size, is_train=True),
    validation_data=generator(imgIds, batch_size=batch_size, is_val=True),
    validation_steps=steps_per_val,
    steps_per_epoch=steps_per_epoch,
    epochs=epoch_count,
    verbose=verbose_level)

0 Answers