CNN with Keras - incredibly low, constant loss and accuracy - apparently my mistake

Time: 2019-09-19 09:05:44

Tags: python tensorflow keras

I am trying to build a CNN that classifies skin-cancer images into seven categories. I am fairly new to CNNs and have been adapting a dog/cat classification example to the well-known HAM10000 skin-cancer dataset challenge. The problem is that both the loss and the accuracy are extremely low and stay constant across all epochs. I am not sure where the problem lies - my first hypothesis is that I am using too few images: 436 samples for training and 109 for validation. Since I am working on a MacBook Pro, I cut the number of images down from the original 10,000+.

Script:

    import tensorflow as tf
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D

    import numpy as np
    import pandas as pd

    import matplotlib.pyplot as plt

    import sys
    import os
    import cv2

    DATA_DIR = "/Users/namefolder/PycharmProjects/skin-cancer/HAM10000_images_part_1"

    metadata = pd.read_csv(os.path.join(DATA_DIR, 'HAM10000_metadata.csv'))

    lesion_type_dict = {'nv': 'Melanocytic nevi',
        'mel': 'Melanoma',
        'bkl': 'Benign keratosis-like lesions ',
        'bcc': 'Basal cell carcinoma',
        'akiec': 'Actinic keratoses',
        'vasc': 'Vascular lesions',
        'df': 'Dermatofibroma'}

    # map the short dx codes to full lesion names, and encode dx as integer labels 0-6
    metadata['cell_type'] = metadata['dx'].map(lesion_type_dict.get)
    metadata['dx_code'] = pd.Categorical(metadata['dx']).codes

    # keep only the image-id and diagnosis columns
    metadata = metadata[['image_id', 'dx', 'dx_type', 'dx_code']]

    training_data = []

    IMG_SIZE = 40

    # preparing training data
    # NOTE: this rescans the whole metadata frame for every image, which is
    # slow for large datasets but fine for a few hundred samples
    def creating_training_data(path):
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                for index, row in metadata.iterrows():
                    if img == row['image_id'] + '.jpg':
                        training_data.append([new_array, row['dx_code']])
            except Exception as e:
                # skip files that cv2 cannot read (e.g. the metadata CSV)
                pass

        return training_data

    training_data = creating_training_data(DATA_DIR)

    import random

    random.shuffle(training_data)

    # Splitting data into X features and Y label
    X_train = []
    y_train = []
    for features, label in training_data:
        X_train.append(features)
        y_train.append(label)

    # Reshape to (num_samples, IMG_SIZE, IMG_SIZE, 1) - Conv2D expects a trailing channel dimension
    X_train = np.array(X_train).reshape(-1, IMG_SIZE, IMG_SIZE, 1)

    # Scale pixel values to the [0, 1] range
    X_train = X_train/255.0

    # model configuration
    model = Sequential()
    model.add(Conv2D(64, (3,3), input_shape = X_train.shape[1:]))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, (3,3)))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(64))

    model.add(Dense(1))
    model.add(Activation("softmax"))

    model.compile(loss="mean_squared_error",
                 optimizer="adam",
                 metrics=["accuracy"])

Training the model:
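(The exact fit call is not shown above; based on the log it is presumably equivalent to the following - 20 epochs, with validation_split=0.2 implied by the 436/109 split, and the batch size left at the default.)

    model.fit(X_train, np.array(y_train),
              epochs=20,
              validation_split=0.2)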

Model fitting output:

            Train on 436 samples, validate on 109 samples
            Epoch 1/20
            436/436 [==============================] - 1s 2ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 2/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 3/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 4/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 5/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 6/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 7/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 8/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 9/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 10/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 11/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 12/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 13/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 14/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 15/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 16/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 17/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 18/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 19/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642
            Epoch 20/20
            436/436 [==============================] - 1s 1ms/sample - loss: 11.7890 - acc: 0.0688 - val_loss: 13.6697 - val_acc: 0.0642

Model summary:

Model: "sequential_16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_30 (Conv2D)           (None, 38, 38, 64)        640       
_________________________________________________________________
activation_44 (Activation)   (None, 38, 38, 64)        0         
_________________________________________________________________
max_pooling2d_30 (MaxPooling (None, 19, 19, 64)        0         
_________________________________________________________________
conv2d_31 (Conv2D)           (None, 17, 17, 64)        36928     
_________________________________________________________________
activation_45 (Activation)   (None, 17, 17, 64)        0         
_________________________________________________________________
max_pooling2d_31 (MaxPooling (None, 8, 8, 64)          0         
_________________________________________________________________
flatten_14 (Flatten)         (None, 4096)              0         
_________________________________________________________________
dense_28 (Dense)             (None, 64)                262208    
_________________________________________________________________
dense_29 (Dense)             (None, 1)                 65        
_________________________________________________________________
activation_46 (Activation)   (None, 1)                 0         
=================================================================
Total params: 299,841
Trainable params: 299,841
Non-trainable params: 0

Could someone advise me, if possible? Do you see any other areas that I need to change or fix?

Thanks!

2 answers:

Answer 0 (score: 2):

You are using

    model.add(Dense(1))
    model.add(Activation("softmax"))

that is, a Dense layer with a single neuron followed by a softmax? That cannot work: a softmax over a single output always produces exactly 1, so the network has nothing to learn. You need the output dimension to be at least 2 for softmax to make sense - in your case 7, one unit per class.
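Concretely, keeping the rest of your script the same, something along these lines should behave sensibly for your seven classes - a sketch, assuming y_train holds the integer dx_code labels (0-6) produced by your preprocessing:

    model = Sequential()
    model.add(Conv2D(64, (3, 3), activation="relu", input_shape=X_train.shape[1:]))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, (3, 3), activation="relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(64, activation="relu"))

    # one output unit per class, softmax across all seven
    model.add(Dense(7, activation="softmax"))

    # integer labels -> sparse categorical cross-entropy rather than MSE
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer="adam",
                  metrics=["accuracy"])

Note that this also gives the hidden Dense(64) layer a relu activation (your version has none, so it is purely linear) and swaps mean_squared_error for a cross-entropy loss, which is the usual choice for classification.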

What do your labels look like?

Answer 1 (score: 1):

Hi - I am writing my suggestions here because I do not yet have the reputation to comment.

First, your assumption that you need more data may well be correct. You should also consider that the data may be skewed, i.e. one class may occur much more frequently than the others. I do not know how you selected your samples, but you may want to check the actual class distribution within your small subset (see the snippet below).
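For example, a quick way to check the label distribution of the samples you actually loaded (reusing the training_data list from your script):

    import pandas as pd

    # fraction of each diagnosis class among the loaded samples
    labels = pd.Series([label for _, label in training_data])
    print(labels.value_counts(normalize=True))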

As for a suggestion: I am not sure exactly what you are trying to predict, but I assume you want to determine whether an image shows cancer. In that case you have a binary classification problem, just like cats vs. dogs, so you should use a "sigmoid" activation in the output layer instead of "softmax". Softmax is normally used for multi-class classification.
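For that binary setup, the output layer and loss would look something like this (a sketch, assuming 0/1 labels):

    model.add(Dense(1))
    model.add(Activation("sigmoid"))

    model.compile(loss="binary_crossentropy",
                  optimizer="adam",
                  metrics=["accuracy"])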

Beyond that, I do not see any deeper problem in your code. So try changing the activation function and, if possible, use more samples with a representative class distribution.

Hope this helps :)