I am trying to create a binary classifier that can distinguish MRIs of Alzheimer's disease patients from MRIs of healthy individuals.
These are the stats so far:
Model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential([
    Conv2D(filters=16, kernel_size=(5, 5), activation='relu', padding='same', input_shape=(160, 160, 3)),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Flatten(),
    Dense(units=2, activation='softmax')
])

As you can see, it is very simple; I made it that way deliberately to try to remedy the overfitting.

Things I have tried so far:

I am pretty much out of ideas and not sure how to move forward, so I would appreciate any tips or suggestions.

All my code:

Output:

Epoch 1/20
11/11 [==============================] - 2s 194ms/step - loss: 0.7604 - accuracy: 0.5155 - val_loss: 0.7081 - val_accuracy: 0.5000
Epoch 2/20
11/11 [==============================] - 2s 185ms/step - loss: 0.6885 - accuracy: 0.5223 - val_loss: 0.6942 - val_accuracy: 0.4839
Epoch 3/20
11/11 [==============================] - 2s 185ms/step - loss: 0.6802 - accuracy: 0.5854 - val_loss: 0.6985 - val_accuracy: 0.4931
Epoch 4/20
11/11 [==============================] - 2s 185ms/step - loss: 0.6717 - accuracy: 0.5932 - val_loss: 0.6996 - val_accuracy: 0.4677
Epoch 5/20
11/11 [==============================] - 2s 195ms/step - loss: 0.6512 - accuracy: 0.6175 - val_loss: 0.7124 - val_accuracy: 0.5115
Epoch 6/20
11/11 [==============================] - 2s 185ms/step - loss: 0.6345 - accuracy: 0.6476 - val_loss: 0.7073 - val_accuracy: 0.5253
Epoch 7/20
11/11 [==============================] - 2s 185ms/step - loss: 0.6118 - accuracy: 0.6680 - val_loss: 0.6920 - val_accuracy: 0.5207
Epoch 8/20
11/11 [==============================] - 2s 185ms/step - loss: 0.5817 - accuracy: 0.7068 - val_loss: 0.6964 - val_accuracy: 0.5207
Epoch 9/20
11/11 [==============================] - 2s 184ms/step - loss: 0.5528 - accuracy: 0.7272 - val_loss: 0.7123 - val_accuracy: 0.5161
Epoch 10/20
11/11 [==============================] - 2s 193ms/step - loss: 0.5239 - accuracy: 0.7417 - val_loss: 0.7397 - val_accuracy: 0.5392
Epoch 11/20
11/11 [==============================] - 2s 186ms/step - loss: 0.5106 - accuracy: 0.7427 - val_loss: 0.7551 - val_accuracy: 0.5461
Epoch 12/20
11/11 [==============================] - 2s 197ms/step - loss: 0.4920 - accuracy: 0.7650 - val_loss: 0.7402 - val_accuracy: 0.5438
Epoch 13/20
11/11 [==============================] - 2s 190ms/step - loss: 0.4741 - accuracy: 0.7835 - val_loss: 0.7564 - val_accuracy: 0.5507
Epoch 14/20
11/11 [==============================] - 2s 188ms/step - loss: 0.4591 - accuracy: 0.7767 - val_loss: 0.7445 - val_accuracy: 0.5300
Epoch 15/20
11/11 [==============================] - 2s 185ms/step - loss: 0.4486 - accuracy: 0.7767 - val_loss: 0.7712 - val_accuracy: 0.5415
Epoch 16/20
11/11 [==============================] - 2s 185ms/step - loss: 0.4503 - accuracy: 0.7806 - val_loss: 0.7446 - val_accuracy: 0.5346
Epoch 17/20
11/11 [==============================] - 2s 188ms/step - loss: 0.4404 - accuracy: 0.7670 - val_loss: 0.7669 - val_accuracy: 0.5553
Epoch 18/20
11/11 [==============================] - 2s 184ms/step - loss: 0.4169 - accuracy: 0.8078 - val_loss: 0.7804 - val_accuracy: 0.5576
Epoch 19/20
11/11 [==============================] - 2s 184ms/step - loss: 0.3987 - accuracy: 0.7971 - val_loss: 0.7846 - val_accuracy: 0.5507
Epoch 20/20
11/11 [==============================] - 2s 192ms/step - loss: 0.3977 - accuracy: 0.7981 - val_loss: 0.8060 - val_accuracy: 0.5461
Edit:
This paper seems to be doing much better than I am on a very similar task, so it may be useful to look through its methodology:
Answer 0 (score: 0)

Some things you could try.
Answer 1 (score: 0)

Your model seems to be overfitting due to a lack of data. You can do some data augmentation to increase the number of images you have. If you don't care about aspect ratio, you can warp the images; if you don't always need the whole image, you can crop them; and if orientation doesn't matter, you can rotate them. These things can dramatically increase the size of your dataset and help mitigate overfitting.
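To make the snippet below self-contained: in the TensorFlow tutorial, resize_and_rescale and data_augmentation are small Keras preprocessing pipelines. A minimal sketch is shown here; the specific layers and parameter values are illustrative (the 160x160 size only matches the model's input shape above), and in older TF versions these layers live under tf.keras.layers.experimental.preprocessing:

import tensorflow as tf
from tensorflow.keras import layers

# Resize to the model's input size and scale pixel values to [0, 1]
resize_and_rescale = tf.keras.Sequential([
    layers.Resizing(160, 160),
    layers.Rescaling(1.0 / 255),
])

# Random transforms applied only at training time; these choices are
# illustrative - pick transforms that are anatomically sensible for MRI slices
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])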
Here is an example from the tensorflow documentation:
batch_size = 32
AUTOTUNE = tf.data.experimental.AUTOTUNE

def prepare(ds, shuffle=False, augment=False):
    # Resize and rescale all datasets
    # (resize_and_rescale and data_augmentation are the preprocessing
    # pipelines from earlier in the tutorial, e.g. as sketched above)
    ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
                num_parallel_calls=AUTOTUNE)

    if shuffle:
        ds = ds.shuffle(1000)

    # Batch all datasets
    ds = ds.batch(batch_size)

    # Use data augmentation only on the training set
    if augment:
        ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
                    num_parallel_calls=AUTOTUNE)

    # Use buffered prefetching on all datasets
    return ds.prefetch(buffer_size=AUTOTUNE)
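For reference, the tutorial then applies prepare to each split, augmenting and shuffling only the training data (assuming train_ds, val_ds and test_ds are tf.data.Dataset objects, e.g. created with image_dataset_from_directory):

# Augment only the training set; validation/test are just resized,
# rescaled, batched and prefetched
train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)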
Also, here is a great video from the TensorFlow developers' YouTube channel that explains the concept of image augmentation and shows an example of how to implement it.