Error while training a deep learning model

Date: 2020-07-21 09:01:12

Tags: python machine-learning keras deep-learning tensorflow2.0

So I designed a CNN and compiled it with the following parameters:

import csv
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop

training_file_loc = "8-SignLanguageMNIST/sign_mnist_train.csv"
testing_file_loc = "8-SignLanguageMNIST/sign_mnist_test.csv"

def getData(filename):
    images = []
    labels = []
    with open(filename) as csv_file:
        file = csv.reader(csv_file, delimiter = ",")
        next(file, None)
        
        for row in file:
            label = row[0]
            data = row[1:]
            img = np.array(data).reshape(28,28)
            
            images.append(img)
            labels.append(label)
        
        images = np.array(images).astype("float64")
        labels = np.array(labels).astype("float64")
        
    return images, labels

training_images, training_labels = getData(training_file_loc)
testing_images, testing_labels = getData(testing_file_loc)

print(training_images.shape, training_labels.shape)
print(testing_images.shape, testing_labels.shape)

training_images = np.expand_dims(training_images, axis = 3)
testing_images = np.expand_dims(testing_images, axis = 3)

training_datagen = ImageDataGenerator(
    rescale = 1/255,
    rotation_range = 45,
    width_shift_range = 0.2,
    height_shift_range = 0.2,
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True,
    fill_mode = "nearest"
)

training_generator = training_datagen.flow(
    training_images,
    training_labels,
    batch_size = 64,
)


validation_datagen = ImageDataGenerator(
    rescale = 1/255,
    rotation_range = 45,
    width_shift_range = 0.2,
    height_shift_range = 0.2,
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True,
    fill_mode = "nearest"
)

validation_generator = training_datagen.flow(
    testing_images,
    testing_labels,
    batch_size = 64,
)

model = tf.keras.Sequential([
    keras.layers.Conv2D(16, (3, 3), input_shape = (28, 28, 1), activation = "relu"),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(32, (3, 3), activation = "relu"),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation = "relu"),
    keras.layers.Dropout(0.25),
    keras.layers.Dense(512, activation = "relu"),
    keras.layers.Dropout(0.25),
    keras.layers.Dense(26, activation = "softmax")
])

model.compile(
    loss = "categorical_crossentropy",
    optimizer = RMSprop(lr = 0.001),
    metrics = ["accuracy"]
)

However, when I run model.fit(), I get the following error:

ValueError: Shapes (None, 1) and (None, 24) are incompatible

After changing the loss function to sparse_categorical_crossentropy, the program runs fine.

I don't understand why this happens.

Can someone explain the difference between these loss functions?

2 Answers:

Answer 0 (score: 2)

The problem is that categorical_crossentropy expects one-hot encoded labels, meaning that for each sample it expects a tensor of length num_classes in which the element at the label's index is set to 1 and all the others are 0.

sparse_categorical_crossentropy, on the other hand, uses integer labels directly (the use case being that, with many classes, one-hot encoded labels would waste a lot of memory on zeros). I believe, but cannot confirm, that categorical_crossentropy runs faster than its sparse counterpart.
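
To make that concrete, here is a minimal sketch (toy values, not taken from the question) of what each loss expects for a 3-class problem:

import numpy as np
import tensorflow as tf

probs = np.array([[0.1, 0.7, 0.2]])        # softmax output for one sample, shape (1, 3)
int_label = np.array([1])                  # integer label, shape (1,)
onehot_label = np.array([[0., 1., 0.]])    # one-hot label, shape (1, 3)

# sparse_categorical_crossentropy takes the integer label directly
print(tf.keras.losses.sparse_categorical_crossentropy(int_label, probs).numpy())

# categorical_crossentropy needs the one-hot label; both print the same value, -log(0.7) ≈ 0.357
print(tf.keras.losses.categorical_crossentropy(onehot_label, probs).numpy())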

In your case, with 26 classes, I would suggest using the non-sparse version and converting your labels to one-hot encoding, like this:

def getData(filename):
    images = []
    labels = []
    with open(filename) as csv_file:
        file = csv.reader(csv_file, delimiter = ",")
        next(file, None)
        
        for row in file:
            label = row[0]
            data = row[1:]
            img = np.array(data).reshape(28,28)
            
            images.append(img)
            labels.append(label)
        
        images = np.array(images).astype("float64")
        labels = np.array(labels).astype("float64")
        
    return images, tf.keras.utils.to_categorical(labels, num_classes=26) # you can omit num_classes to have it computed from the data
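
As a hypothetical sanity check (reusing the question's variable names; the exact sample count is left unspecified), the labels returned by this version of getData now have a second dimension of 26, which matches the Dense(26, softmax) output that categorical_crossentropy compares against:

training_images, training_labels = getData(training_file_loc)
print(training_labels.shape)  # (num_samples, 26) instead of (num_samples,)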

Side note: unless you have a reason to use float64 for the images, I would switch to float32 (it halves the memory required by the dataset, and the model will most likely convert the inputs to float32 as its first operation anyway).
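
A small sketch (synthetic array of an assumed size, not the actual CSV data) of how much that dtype switch saves:

import numpy as np

imgs64 = np.zeros((10000, 28, 28), dtype="float64")   # as in the current getData
imgs32 = imgs64.astype("float32")                      # suggested change
print(imgs64.nbytes / 1e6, "MB vs", imgs32.nbytes / 1e6, "MB")  # ~62.7 MB vs ~31.4 MB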

Answer 1 (score: 0)

Simply put: for classification problems where the output classes are integers, we use sparse_categorical_crossentropy; for those where the labels have been converted to one-hot encodings, we use categorical_crossentropy.
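
In other words, assuming a model ending in a 26-way softmax, the two equivalent setups would look roughly like this (a sketch; the optimizer is chosen arbitrarily):

# Option A: labels are integers of shape (batch,), e.g. [3, 17, 0, ...]
model.compile(loss = "sparse_categorical_crossentropy", optimizer = "rmsprop", metrics = ["accuracy"])

# Option B: labels are one-hot vectors of shape (batch, 26), e.g. via tf.keras.utils.to_categorical
model.compile(loss = "categorical_crossentropy", optimizer = "rmsprop", metrics = ["accuracy"])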