Here is the actual configuration of my model:
import cntk as C

def create_model(features):
    with C.layers.default_options(init=C.glorot_uniform(), activation=C.ops.relu, pad=True):
        h = features
        h = C.layers.Convolution2D(filter_shape=(5,5), num_filters=8,
                                   strides=(2,2), pad=True, name='first_conv')(h)
        h = C.layers.AveragePooling(filter_shape=(5,5), strides=(2,2))(h)
        h = C.layers.Convolution2D(filter_shape=(5,5), num_filters=16, pad=True)(h)
        h = C.layers.AveragePooling(filter_shape=(5,5), strides=(2,2))(h)
        h = C.layers.Convolution2D(filter_shape=(5,5), num_filters=32, pad=True)(h)
        h = C.layers.AveragePooling(filter_shape=(5,5), strides=(2,2))(h)
        h = C.layers.Dense(96)(h)
        h = C.layers.Dropout(dropout_rate=0.5)(h)
        r = C.layers.Dense(num_output_classes, activation=None, name='classify')(h)
        return r
z = create_model(x)
# Print the output shapes / parameters of different components
print("Output Shape of the first convolution layer:", z.first_conv.shape)
print("Bias value of the last dense layer:", z.classify.b.value)
I have been experimenting with the configuration, changing parameter values and adding and removing layers, but my CNN doesn't seem to learn from my data. At best it converges to a certain point and then hits a wall: the error stops decreasing.
I found that the learning_rate and num_minibatches_to_train parameters matter. I have set learning_rate = 0.2 and num_minibatches_to_train = 128, and I am using sgd as the learner (my trainer wiring is sketched after the log below). Here is a sample of my latest output:
Minibatch: 0, Loss: 2.4097, Error: 95.31%
Minibatch: 100, Loss: 2.3449, Error: 95.31%
Minibatch: 200, Loss: 2.3751, Error: 90.62%
Minibatch: 300, Loss: 2.2813, Error: 78.12%
Minibatch: 400, Loss: 2.3478, Error: 84.38%
Minibatch: 500, Loss: 2.3086, Error: 87.50%
Minibatch: 600, Loss: 2.2518, Error: 84.38%
Minibatch: 700, Loss: 2.2797, Error: 82.81%
Minibatch: 800, Loss: 2.3234, Error: 84.38%
Minibatch: 900, Loss: 2.2542, Error: 81.25%
Minibatch: 1000, Loss: 2.2579, Error: 85.94%
Minibatch: 1100, Loss: 2.3469, Error: 85.94%
Minibatch: 1200, Loss: 2.3334, Error: 84.38%
Minibatch: 1300, Loss: 2.3143, Error: 85.94%
Minibatch: 1400, Loss: 2.2934, Error: 92.19%
Minibatch: 1500, Loss: 2.3875, Error: 85.94%
Minibatch: 1600, Loss: 2.2926, Error: 90.62%
Minibatch: 1700, Loss: 2.3220, Error: 87.50%
Minibatch: 1800, Loss: 2.2693, Error: 87.50%
Minibatch: 1900, Loss: 2.2864, Error: 84.38%
Minibatch: 2000, Loss: 2.2678, Error: 79.69%
Minibatch: 2100, Loss: 2.3221, Error: 92.19%
Minibatch: 2200, Loss: 2.2033, Error: 87.50%
Minibatch: 2300, Loss: 2.2493, Error: 87.50%
Minibatch: 2400, Loss: 2.4446, Error: 87.50%
Minibatch: 2500, Loss: 2.2676, Error: 85.94%
Minibatch: 2600, Loss: 2.3562, Error: 85.94%
Minibatch: 2700, Loss: 2.3290, Error: 82.81%
Minibatch: 2800, Loss: 2.3767, Error: 87.50%
Minibatch: 2900, Loss: 2.2684, Error: 76.56%
Minibatch: 3000, Loss: 2.3365, Error: 90.62%
Minibatch: 3100, Loss: 2.3369, Error: 90.62%
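For reference, here is roughly how my learner/trainer is wired up. It follows the standard CNTK tutorial pattern; the loss/error names and the input/label variables (x, y) here are illustrative rather than my exact code:

loss = C.cross_entropy_with_softmax(z, y)   # y is the one-hot label variable
error = C.classification_error(z, y)

# plain sgd with learning_rate = 0.2, applied per minibatch
lr_schedule = C.learning_rate_schedule(0.2, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, error), [learner])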
Any suggestions on how to improve my results? I'm open to any tips or directions to explore.
Thanks in advance
Answer 0 (score: 1)
In any case, to answer this for someone just starting out: for the conv layers, I suggest keeping filter_shape at (3,3) with a stride of 1.
For the pooling layers, stick with max pooling until you have a better grasp of deep learning. For the max-pooling layers, use filter_shape=(2,2) and strides=(2,2).
Typically you have 2-3 conv layers followed by one max-pooling layer, and you repeat this sequence until you have reduced the dimensions to something easy to work with, as sketched below.
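A minimal sketch of that layout in CNTK layers, reusing the filter counts, the 96-unit dense layer, and num_output_classes from your model (this is an illustration of the pattern, not a tuned architecture; dropout is deliberately left out, see the note on dropout below):

def create_model(features):
    with C.layers.default_options(init=C.glorot_uniform(), activation=C.ops.relu, pad=True):
        h = features
        # 2 conv layers with (3,3) filters and stride 1, then one max-pooling layer
        h = C.layers.Convolution2D(filter_shape=(3,3), num_filters=8, name='first_conv')(h)
        h = C.layers.Convolution2D(filter_shape=(3,3), num_filters=8)(h)
        h = C.layers.MaxPooling(filter_shape=(2,2), strides=(2,2))(h)
        # repeat the pattern, growing the filter count, until the dimensions are small
        h = C.layers.Convolution2D(filter_shape=(3,3), num_filters=16)(h)
        h = C.layers.Convolution2D(filter_shape=(3,3), num_filters=16)(h)
        h = C.layers.MaxPooling(filter_shape=(2,2), strides=(2,2))(h)
        h = C.layers.Dense(96)(h)
        return C.layers.Dense(num_output_classes, activation=None, name='classify')(h)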
For the learner, you should use Adam. It needs minimal tuning: you can use a learning rate of 1e-3 or 1e-4, and set the momentum to 0.9 (sketched below).
For the minibatch size, keeping it at 16 or 32 is fine.
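A sketch of that learner using the CNTK 2.x schedule helpers, assuming z, loss, and error are defined as in the trainer setup from the question:

# Adam with learning rate 1e-3 per minibatch and momentum 0.9
lr_schedule = C.learning_rate_schedule(1e-3, C.UnitType.minibatch)
momentum_schedule = C.momentum_schedule(0.9)
learner = C.adam(z.parameters, lr=lr_schedule, momentum=momentum_schedule)
trainer = C.Trainer(z, (loss, error), [learner])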
Also, when you are first trying to get your model to converge, leave out dropout; dropout hinders convergence. Once you are confident the model works, add dropout back in for regularization.
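One simple way to do that is a hypothetical use_dropout flag on the model function, so you can flip dropout back on later without rewriting the model (conv stack shortened here for brevity):

# train with use_dropout=False until the model converges,
# then re-enable dropout for regularization
def create_model(features, use_dropout=False):
    with C.layers.default_options(init=C.glorot_uniform(), activation=C.ops.relu, pad=True):
        h = C.layers.Convolution2D(filter_shape=(3,3), num_filters=8)(features)
        h = C.layers.MaxPooling(filter_shape=(2,2), strides=(2,2))(h)
        h = C.layers.Dense(96)(h)
        if use_dropout:
            h = C.layers.Dropout(dropout_rate=0.5)(h)
        return C.layers.Dense(num_output_classes, activation=None, name='classify')(h)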