CNN model predicts the same value in most cases

Date: 2020-03-26 11:36:48

Tags: python tensorflow machine-learning keras conv-neural-network

I am trying to train a CNN model for image classification.

There are 9 classes, with 1000 images per class.

Here is my code:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                                     Flatten, Dense, Dropout)

model = Sequential()

model.add(Conv2D(32, kernel_size=(5,5), activation='relu',
                 kernel_initializer='random_uniform', input_shape=(128,646,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
# Note: input_shape on a non-first layer is ignored by Keras
model.add(Conv2D(16, kernel_size=(5,5), activation='relu', input_shape=(64,321,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())

model.add(Flatten())
model.add(Dense(16, activation='relu', kernel_initializer='normal'))
model.add(Dropout(0.5))
model.add(Dense(9, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
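Since the model is compiled with categorical_crossentropy, the labels must be one-hot encoded, matching the true-label matrix shown below. A minimal sketch of that encoding in plain NumPy (the class indices here are illustrative, not from the actual dataset):

```python
import numpy as np

labels = np.array([3, 3, 3, 3, 3])           # illustrative integer class indices
num_classes = 9

# Build a (num_samples, num_classes) matrix of zeros, then set the
# column for each sample's class to 1
one_hot = np.zeros((labels.size, num_classes))
one_hot[np.arange(labels.size), labels] = 1

print(one_hot)
```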

I have already tried it on the MNIST dataset, and there it works.

But on my dataset it always predicts the same values.

(True labels above, predicted values below.)

[[0. 0. 0. 1. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0.]]
[[0.11161657 0.11246169 0.11564494 0.11465651 0.11153363 0.10664304
  0.11097018 0.11052497 0.10594855]
 [0.11161657 0.11246169 0.11564494 0.11465651 0.11153363 0.10664304
  0.11097018 0.11052497 0.10594855]
 [0.11161657 0.11246169 0.11564494 0.11465651 0.11153363 0.10664304
  0.11097018 0.11052497 0.10594855]
 [0.11161657 0.11246169 0.11564494 0.11465651 0.11153363 0.10664304
  0.11097018 0.11052497 0.10594855]
 [0.11161657 0.11246169 0.11564494 0.11465651 0.11153363 0.10664304
  0.11097018 0.11052497 0.10594855]]

[Accuracy curves and loss curves attached in the original post]

I have tried changing the initializer, the optimizer, the loss function, and training for more epochs ... still nothing changes.

But when I

1. set the number of kernels in both Conv2D layers to 1, and
2. set the activation function of the first Conv2D layer to tanh,

the model starts to predict different values for different inputs, but the performance does not improve.

I am mainly confused about two things:
1. Setting the number of kernels to 1 does not seem common, but in my experience it avoids predicting the same value.
2. The input image values are all greater than zero, so why does setting the activation function to tanh also change the results to different predicted classes?
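On point 2, a quick numerical check (an illustration I added, with made-up sample values) may explain part of the difference: for inputs in the 0-80 range, tanh saturates to roughly 1 regardless of magnitude, while relu passes the raw magnitude straight through, so unnormalized inputs affect the two activations very differently:

```python
import numpy as np

x = np.array([1.0, 10.0, 40.0, 80.0])   # sample values in the 0-80 input range

print(np.tanh(x))        # saturates: everything beyond ~3 maps to ~1.0
print(np.maximum(x, 0))  # relu: passes the raw magnitude through unchanged
```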


In addition, the image size is 128 * 646, and the values in each image range from 0 to 80.

array([[[33.74863434],
        [27.84932709],
        [22.6257019 ],
        ...,
        [21.47132492],
        [19.61938477],
        [14.22393227]],

       [[16.31633759],
        [29.69265747],
        [25.40621376],
        ...,
        [28.50727081],
        [11.46302605],
        [ 4.04836655]],

       [[ 9.1305275 ],
        [10.00378227],
        [28.46733665],
        ...,
        [23.54629517],
        [20.91897202],
        [ 1.38314819]],

       ...,

       [[63.33175659],
        [66.34197998],
        [68.40023804],
        ...,
        [73.8707428 ],
        [68.64536285],
        [67.72910309]],

       [[67.61167908],
        [67.59188843],
        [66.96526337],
        ...,
        [70.63095856],
        [74.70448303],
        [72.90202332]],

       [[71.49047852],
        [74.54782104],
        [69.39613342],
        ...,
        [80.        ],
        [80.        ],
        [80.        ]]])

Update

My dataset comes from the Free Music Archive.

It includes track audio data (.mp3) and track metadata (genre, artist, etc.).

I chose the small version (8000 tracks) and converted the audio to spectrograms with the libROSA package,

much like in this link: Using CNNs and RNNs for Music Genre Recognition.

But I want to try a CNN model first.

The spectrograms produced by the libROSA package are 128 * 646.

The raw data looks like this:

array([[-65.06227 , -47.759537, -44.17627 , ..., -39.40817 , -41.736862,
        -25.19515 ],
       [-65.40295 , -52.76098 , -49.17935 , ..., -16.40555 , -16.314035,
        -17.56438 ],
       [-69.481834, -56.676388, -50.506615, ..., -16.358843, -16.072405,
        -18.807785],
       ...,
       [-79.42308 , -59.743004, -36.382896, ..., -46.371193, -42.364635,
        -50.037727],
       [-80.      , -63.419754, -41.73323 , ..., -50.383797, -46.90663 ,
        -55.136078],
       [-80.      , -73.820724, -52.94601 , ..., -63.188026, -56.469948,
        -60.473305]], dtype=float32)

I divide the values by 80 and then take the absolute value of these spectrograms as the CNN model input.
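That preprocessing step can be sketched as follows (the spectrogram here is a random stand-in array; the real input is the 128 x 646 libROSA output shown above):

```python
import numpy as np

# Stand-in for a 128 x 646 dB-scaled spectrogram with values in [-80, 0]
rng = np.random.default_rng(0)
spectrogram = rng.uniform(-80.0, 0.0, size=(128, 646)).astype(np.float32)

# Divide by 80, then take the absolute value -> inputs in [0, 1]
model_input = np.abs(spectrogram / 80.0)

# Add the trailing channel axis expected by Conv2D's input_shape=(128, 646, 1)
model_input = model_input[..., np.newaxis]

print(model_input.shape)   # (128, 646, 1)
```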

1 Answer:

Answer 0 (score: 0)

Mentioning the solution here (in the answer section), even though it is present in the comments section, for the benefit of the community. More suggestions have also been added.

Increasing the number of Kernels/Filters above 50 (e.g., to 64) improves accuracy. This is because more Kernels/Filters increase the representational power of the Model, especially when the data is not easy to learn.

To improve Accuracy, the model can be changed from:

model = Sequential()

model.add(Conv2D(32, kernel_size=(5,5), activation='relu',kernel_initializer='random_uniform', input_shape=(128,646,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(16, kernel_size=(5,5), activation='relu', input_shape=(64,321,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())

model.add(Flatten())
model.add(Dense(16, activation = 'relu',kernel_initializer='normal'))
model.add(Dropout(0.5))
model.add(Dense(9, activation = 'softmax'))

to:

model = Sequential()

model.add(Conv2D(64, kernel_size=(5,5), activation='relu',
                 kernel_initializer='random_uniform',
                 input_shape=(128,646,1)))  # Increased the number of kernels in this Conv2D layer
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(64, kernel_size=(5,5), activation='relu',
                 input_shape=(64,321,1)))  # Increased the number of kernels in this Conv2D layer
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())

model.add(Flatten())
model.add(Dense(256, activation='relu',
                kernel_initializer='normal'))  # Increased the number of neurons in this Dense layer
model.add(Dropout(0.5))
model.add(Dense(9, activation='softmax'))

此外,Input Data0 to 80范围内。将其规范化为0 and 1之间的值将得到更好的accuracy

A general preprocessing step is shown below:

import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image

Test_Dir = 'Dogs_Vs_Cats_Small/test/cats'
Image_File = os.path.join(Test_Dir, 'cat.1545.jpg')

Image = image.load_img(Image_File, target_size = (128,646))

Image_Tensor = image.img_to_array(Image)

Image_Tensor = tf.expand_dims(Image_Tensor, axis = 0)

Image_Tensor = Image_Tensor/255.0

Hope this helps. Happy learning!