I modified https://www.tensorflow.org/get_started/mnist/pros/ to set it up as image segmentation rather than a classification problem. The input is a 60x60 downsampled MRI image (reshaped to [1, 3600]), and the output is a segmentation in the 0-1 range (thresholded at 0.5 to get a binary mask). When I run it, I get very reasonable segmentations on the training set and a high Dice score (0.99). However, the test set only reaches a Dice of 0.8.

This sounds like overfitting, but the model is very simple: conv layer - max pool - conv layer - dropout - prediction. So there are only four sets of weights and biases (see the code below), and I am not sure where the excess complexity would come from. For regularization I use a 10% dropout rate; I tried 50% dropout as well as L1-norm regularization, and neither made any difference.

I originally used 300 images as the training set and 184 as the test set; going up to 740/740 made no difference either. The test-set Dice sticks almost exactly at 0.8. When I run the code treating the training data as if it were the test data, I get an almost identical (but not exactly identical) Dice. I would greatly appreciate your advice.
import tensorflow as tf

# Helper functions as defined in the linked TensorFlow tutorial.
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

# Input: [batch, 3600] vectors reshaped to 60x60 single-channel images
# (shapes inferred from the question).
x = tf.placeholder(tf.float32, [None, 3600])
x_image = tf.reshape(x, [-1, 60, 60, 1])

# First convolutional layer -- maps one grayscale channel to 32 feature maps.
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
# Pooling layer - downsamples by 2X.
h_pool1 = max_pool_2x2(h_conv1)
# Second convolutional layer -- maps 32 feature maps to 64.
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
# Second pooling layer.
h_pool2 = max_pool_2x2(h_conv2)
# Fully connected layer 1 -- after 2 rounds of downsampling, our 60x60 image
# is down to 15x15x64 feature maps -- map this to 1024 features.
W_fc1 = weight_variable([15 * 15 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 15*15*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# Dropout - controls the complexity of the model, prevents co-adaptation of
# features.
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Map the 1024 features back to the 3600 output pixels (the 60x60 segmentation).
W_fc2 = weight_variable([1024, 3600])
b_fc2 = bias_variable([3600])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
P.S.: My loss function is the mean squared error, tf.reduce_mean(tf.reduce_mean(tf.multiply(y_ - y_conv, y_ - y_conv))), rather than an explicit Dice loss.
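For reference, here is a minimal sketch of how the Dice score quoted above could be computed, assuming y_conv and y_ have been evaluated to [batch, 3600] NumPy arrays and thresholded at 0.5 as described; the function name and the NumPy evaluation path are assumptions, not part of the original code:

import numpy as np

def dice_score(pred, truth, threshold=0.5):
    # Threshold the continuous outputs to binary masks.
    pred_mask = pred >= threshold
    truth_mask = truth >= threshold
    intersection = np.logical_and(pred_mask, truth_mask).sum()
    # Dice = 2|A ∩ B| / (|A| + |B|); the epsilon guards against empty masks.
    return 2.0 * intersection / (pred_mask.sum() + truth_mask.sum() + 1e-7)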
Answer 0 (score: 0)
It is indeed overfitting: if you compare the number of training samples to the number of parameters, the parameter count far exceeds the number of examples. That said, even with overfitting, I would not call a Dice of 0.8 low.
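To make that comparison concrete, the parameter counts implied by the posted code are:

W_conv1: 5 * 5 * 1 * 32      =        800
W_conv2: 5 * 5 * 32 * 64     =     51,200
W_fc1:   15 * 15 * 64 * 1024 = 14,745,600
W_fc2:   1024 * 3600         =  3,686,400

That is roughly 18.5 million weights fit to at most 740 training images, with nearly all of them in the two fully connected layers.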
Suggestion: you may want to read up on fully convolutional networks (FCNs).
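To illustrate the idea, here is a minimal sketch of a fully convolutional head that could replace the two dense layers in the posted code; the 1x1-convolution design and the layer sizes are assumptions, not a prescribed architecture:

# h_pool2 is the [batch, 15, 15, 64] tensor from the code above.
W_conv3 = weight_variable([1, 1, 64, 1])    # 1x1 conv: 64 feature maps -> 1 score map
b_conv3 = bias_variable([1])
score = conv2d(h_pool2, W_conv3) + b_conv3  # [batch, 15, 15, 1]

# Upsample the 15x15 score map back to the 60x60 input resolution and
# squash to per-pixel probabilities in [0, 1].
logits = tf.image.resize_bilinear(score, [60, 60])
y_seg = tf.reshape(tf.sigmoid(logits), [-1, 3600])

This head has only 65 trainable parameters instead of the roughly 18 million in the fully connected layers, and because every weight is shared across spatial positions, it is far less prone to memorizing individual training images.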