Metrics and loss functions in Keras

Date: 2019-11-01 16:18:04

Tags: python tensorflow keras neural-network conv-neural-network

Code:

from tensorflow.keras.layers import (Input, Conv1D, MaxPooling1D, Flatten,
                                     Dense, concatenate)
from tensorflow.keras.models import Model
from tensorflow.keras.metrics import categorical_accuracy
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

def define_model():
    # channel 1
    inputs1 = Input(shape=(32, 1))
    conv1 = Conv1D(filters=256, kernel_size=2, activation='relu')(inputs1)
    #bat1 = BatchNormalization(momentum=0.9)(conv1)
    pool1 = MaxPooling1D(pool_size=2)(conv1)
    flat1 = Flatten()(pool1)

    # channel 2
    inputs2 = Input(shape=(32, 1))
    conv2 = Conv1D(filters=256, kernel_size=4, activation='relu')(inputs2)
    pool2 = MaxPooling1D(pool_size=2)(conv2)
    flat2 = Flatten()(pool2)

    # channel 3
    inputs3 = Input(shape=(32, 1))
    conv3 = Conv1D(filters=256, kernel_size=4, activation='relu')(inputs3)
    pool3 = MaxPooling1D(pool_size=2)(conv3)
    flat3 = Flatten()(pool3)

    # channel 4
    inputs4 = Input(shape=(32, 1))
    conv4 = Conv1D(filters=256, kernel_size=6, activation='relu')(inputs4)
    pool4 = MaxPooling1D(pool_size=2)(conv4)
    flat4 = Flatten()(pool4)

    # merge
    merged = concatenate([flat1, flat2, flat3, flat4])

    # interpretation
    dense1 = Dense(128, activation='relu')(merged)
    dense2 = Dense(96, activation='relu')(dense1)
    outputs = Dense(10, activation='softmax')(dense2)

    model = Model(inputs=[inputs1, inputs2, inputs3, inputs4], outputs=outputs)
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=[categorical_accuracy])
    plot_model(model, show_shapes=True, to_file='/content/q.png')
    return model

model_concat = define_model()

# fit model
# note: factor multiplies the learning rate by 0.001 on each plateau
red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=2,
                           factor=0.001, min_delta=0.01)
check = ModelCheckpoint(filepath=r'/content/drive/My Drive/Colab Notebooks/gen/concatcnn.hdf5',
                        verbose=1, save_best_only=True)

history = model_concat.fit([X_train, X_train, X_train, X_train], y_train,
                           epochs=20, verbose=1, batch_size=32,
                           validation_data=([X_test, X_test, X_test, X_test], y_test),
                           callbacks=[check, red_lr])

model_concat.summary()

Unfortunately, I used binary cross-entropy as the loss and 'accuracy' as the metric, and I got over 90% val_accuracy.

Then I found this link: Keras binary_crossentropy vs categorical_crossentropy performance?
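The gist of that linked answer can be illustrated numerically. When the loss is binary_crossentropy, Keras resolves the string 'accuracy' to binary accuracy, which thresholds each of the 10 outputs independently; on one-hot labels this is heavily inflated, because a model that always predicts the wrong class still gets 8 of 10 outputs "right". A sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical example: 4 samples, 10 classes, one-hot labels.
y_true = np.eye(10)[[3, 3, 3, 3]]   # every sample is class 3
y_pred = np.full((4, 10), 0.05)
y_pred[:, 7] = 0.55                 # the model always predicts class 7 (wrong)

# categorical_accuracy: fraction of samples whose argmax matches.
cat_acc = np.mean(y_pred.argmax(axis=1) == y_true.argmax(axis=1))

# binary accuracy (what 'accuracy' means under binary_crossentropy):
# each of the 10 outputs is thresholded at 0.5 and compared element-wise.
bin_acc = np.mean((y_pred > 0.5).astype(float) == y_true)

print(cat_acc)  # 0.0 -- the model never predicts the right class
print(bin_acc)  # 0.8 -- yet element-wise accuracy looks high
```

This is consistent with a "90%+" accuracy appearing under binary_crossentropy while the per-sample categorical accuracy is much lower.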

After reading the first answer there, I kept binary cross-entropy as the loss and used categorical cross-entropy as the metric...

Even after I made this change, val_acc did not improve; it stays around 62%. What should I do...

I reduced the model's complexity so it could learn the data, but accuracy still did not improve. Am I missing something?
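One likely issue suggested by the code above (an assumption on my part, not something the question confirms fixes it): the output layer is a 10-class softmax trained against one-hot labels, so the matching loss is categorical_crossentropy, not binary_crossentropy; changing only the metric leaves the gradient signal unchanged. A minimal sketch with a stand-in model:

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

# Tiny stand-in model with the same kind of output head as the question's.
inp = Input(shape=(32,))
hidden = Dense(16, activation='relu')(inp)
out = Dense(10, activation='softmax')(hidden)
model = Model(inputs=inp, outputs=out)

# Matching loss/metric pair for a 10-class softmax with one-hot labels.
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['categorical_accuracy'])
```

With this pairing, the reported accuracy is the per-sample categorical accuracy rather than an inflated element-wise figure.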

Dataset shapes: x_train is (800, 32), x_test is (200, 32), y_train is (800, 10), and y_test is (200, 10). Before feeding the network, I apply StandardScaler to x, and reshape x_train and x_test to (800, 32, 1) and (200, 32, 1).
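The preprocessing described above can be sketched as follows (the array contents are random stand-ins; only the shapes match the question):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-ins for the data: 800 train / 200 test rows, 32 features.
X_train = np.random.rand(800, 32)
X_test = np.random.rand(200, 32)

# Fit the scaler on the training split only, then apply it to both splits,
# and add a trailing channel axis for the Conv1D inputs.
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)[..., np.newaxis]   # -> (800, 32, 1)
X_test = scaler.transform(X_test)[..., np.newaxis]     # -> (200, 32, 1)
```

Fitting the scaler on the training split alone avoids leaking test-set statistics into training.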

Thanks

0 Answers:

There are no answers yet.