ValueError: Error when checking target: expected dense_10 to have shape (1,) but got array with shape (19316,)

Time: 2019-07-19 15:49:49

Tags: python tensorflow keras deep-learning

I am running a CNN that inspects images rather than classifying them. In fact, the output layer is a Dense layer whose size parameter is the size of the image stored as a 1D label.

As shown in the code below, I am using model.fit_generator() instead of model.fit, and when I start training the model I get the following error:

ValueError: Error when checking target: expected dense_10 to have shape (1,) but got array with shape (19316,)

Why is this an error? My dense output is an array of 19316 elements, so why does Keras expect it to have shape (1,)?

Here is the model summary:


Layer (type)                 Output Shape              Param #
=================================================================
conv2d_28 (Conv2D)           (None, 26, 877, 32)       544
_________________________________________________________________
activation_37 (Activation)   (None, 26, 877, 32)       0
_________________________________________________________________
max_pooling2d_28 (MaxPooling (None, 13, 438, 32)       0
_________________________________________________________________
conv2d_29 (Conv2D)           (None, 12, 437, 16)       2064
_________________________________________________________________
activation_38 (Activation)   (None, 12, 437, 16)       0
_________________________________________________________________
max_pooling2d_29 (MaxPooling (None, 6, 218, 16)        0
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 5, 217, 8)         520
_________________________________________________________________
activation_39 (Activation)   (None, 5, 217, 8)         0
_________________________________________________________________
max_pooling2d_30 (MaxPooling (None, 2, 108, 8)         0
_________________________________________________________________
activation_40 (Activation)   (None, 2, 108, 8)         0
_________________________________________________________________
flatten_10 (Flatten)         (None, 1728)              0
_________________________________________________________________
dropout_10 (Dropout)         (None, 1728)              0
_________________________________________________________________
dense_10 (Dense)             (None, 19316)             33397364
=================================================================
Total params: 33,400,492
Trainable params: 33,400,492
Non-trainable params: 0


Any suggestions?

Thanks a lot!

# Imports implied by the snippet (not shown in the original question)
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dropout, Dense

def generator(data_arr, batch_size = 10):

    num = len(data_arr)

    if num % batch_size != 0:
        num = int(num / batch_size)

    # Loop forever so the generator never terminates
    while True:

        for offset in range(0, num, batch_size):

            batch_samples = data_arr[offset:offset + batch_size]

            samples = []
            labels = []

            for batch_sample in batch_samples:
                samples.append(batch_sample[0])
                labels.append(np.array(batch_sample[1].flatten()).transpose())

            X_ = np.array(samples)
            Y_ = np.array(labels)

            # add a channel axis: (batch, 55, 1756) -> (batch, 55, 1756, 1)
            X_ = X_[:, :, :, np.newaxis]

            print(X_.shape)
            print(Y_.shape)

            yield (X_, Y_)

# compile and train the model using the generator function
train_generator = generator(training_data, batch_size = 10)
validation_generator = generator(val_data, batch_size = 10)

run_opts = tf.RunOptions(report_tensor_allocations_upon_oom = True)

model = Sequential()

model.add(Conv2D(32, (4, 4), strides=(2, 2), input_shape = (55, 1756, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))

model.add(Conv2D(16, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))

model.add(Conv2D(8, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))

model.add(Activation('softmax'))
model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
model.add(Dropout(0.3))
model.add(Dense(19316))

model.compile(loss = 'sparse_categorical_crossentropy',
              optimizer = 'adam',
              metrics = ['accuracy'],
              options = run_opts)

model.summary()

batch_size = 20
nb_epoch = 6

model.fit_generator(train_generator, 
                    steps_per_epoch = len(training_data) ,
                    epochs = nb_epoch,
                    validation_data = validation_generator,
                    validation_steps = len(val_data))

1 Answer:

Answer 0 (score: 0)

You are currently using the sparse_categorical_crossentropy loss, which expects integer labels and performs the one-hot encoding internally, but your labels are already one-hot encoded.
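As a rough illustration of why Keras reported that it expected target shape (1,), here is a minimal sketch (not part of the original answer) that assumes a batch of 3 samples and 19316 output units:

import numpy as np

# sparse_categorical_crossentropy: one integer class index per sample,
# so each target has shape (1,) -- the shape the error message asked for.
y_sparse = np.array([3, 0, 7])                # overall shape (3,)

# categorical_crossentropy: one full vector per sample,
# so each target has shape (num_classes,) = (19316,).
y_categorical = np.zeros((3, 19316))
y_categorical[np.arange(3), [3, 0, 7]] = 1.0  # overall shape (3, 19316)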

So in this case you should switch back to the categorical_crossentropy loss.
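A minimal sketch of that change, keeping the rest of the compile call from the question unchanged (optimizer, metrics, and run options are taken as-is from the snippet above):

# only the loss changes; everything else stays as in the question
model.compile(loss = 'categorical_crossentropy',
              optimizer = 'adam',
              metrics = ['accuracy'],
              options = run_opts)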