Regressing on an image to predict a scalar

Time: 2019-05-23 20:10:50

Tags: tensorflow keras regression conv-neural-network non-linear-regression

Given a 256x256 RGB input image, I am trying to regress to predict a point on the image's X axis (0-48000).

Initially I tried [mobile_net -> GlobalAveragePooling2D -> several Dense layers]. I didn't realize the pooling was throwing away the spatial information.
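(A minimal shape check, assuming TF 2.x, to illustrate the point: GlobalAveragePooling2D collapses the feature map to one value per channel, while Flatten keeps every spatial position.)

    import tensorflow as tf

    x = tf.zeros((1, 8, 8, 1280))  # e.g. a backbone's final feature map
    print(tf.keras.layers.GlobalAveragePooling2D()(x).shape)  # (1, 1280): spatial axes averaged away
    print(tf.keras.layers.Flatten()(x).shape)                 # (1, 81920): every position kept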

Last night I trained a simple network; the loss decreased all night, but it predicts negative values.

How can I modify this architecture to predict a 0-48000 scalar?

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu', input_shape=(256, 256, 3)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, kernel_initializer='normal'),
    ])
    model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape'])

EDIT

Running inference with my network, I get wildly different outputs for the SAME file on every run. How is that possible?

Inference outputs from multiple runs on the same file:

    -312864.9444580078
    762.7029418945312
    193352.7603149414

Here is the inference fn:

    def infer(checkpoint_path):
        png_file  = ['3023_28338_26_m.png', '3023_28338_26_m.png'][1]
        test_file = data_root + png_file
        onset     = png_file.strip('_m.png.').split('_')[1]   # ground-truth value parsed from the filename
        img       = load_and_preprocess_from_path_label(test_file, 0)
        tst       = np.expand_dims(img[0], axis=0)            # add a batch dimension
        model     = load_model_and_checkpoint(checkpoint_path)
        val       = model.predict(tst)[0][0] * 48000          # undo the 0-1 target scaling
        return val
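(For reference, a minimal sketch of what `load_model_and_checkpoint` would need to do, where `build_model` is a hypothetical helper that reconstructs the training architecture. If the weights are never actually restored, each run predicts from freshly initialized weights, which would produce exactly this kind of scatter.)

    def load_model_and_checkpoint(checkpoint_path):
        model = build_model()                # hypothetical: rebuild the same architecture used in training
        model.load_weights(checkpoint_path)  # restore trained weights; skipping this gives random outputs
        return model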

Here is the final epoch of training.

    2019-05-26 11:11:56.698907: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:150] Shuffle buffer filled.
    94/95 [============================>.] - ETA: 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0627 - mape: 93.2817
    Epoch 00100: saving model to /media/caseybasichis/sp_data/sp_data/datasets/one_sec_onset_01/model7.ckpt
    95/95 [==============================] - 47s 500ms/step - loss: 0.0063 - mse: 0.0063 - mae: 0.0626 - mape: 93.2076

Here is the latest network.

    mobile_net = tf.keras.applications.ResNet50(input_shape=(256, 256, 3), include_top=False, weights='imagenet')
    mobile_net.trainable = False

    chanDim = -1  # channels-last inputs, so BatchNormalization normalizes over the channel axis

    model = tf.keras.Sequential([
        mobile_net,
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, kernel_initializer='normal', activation='relu'),
        tf.keras.layers.BatchNormalization(axis=chanDim),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, kernel_initializer='normal', activation='linear'),  # activation='sigmoid'
    ])
    model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape'])  # mean_squared_logarithmic_error

1 Answer:

Answer 0 (score: 2)

You can simply use a sigmoid activation on the last layer and multiply the output by the scale factor (in a Lambda layer, or, better, keep the output in [0, 1] and scale the targets outside the network):

    # Option 1: scale inside the network with a Lambda layer
    model.add(Activation('sigmoid'))
    model.add(Lambda(lambda x: 48000 * x))

    # Option 2: keep the sigmoid output in [0, 1] and scale the targets instead
    model.add(Activation('sigmoid'))
    ...
    model.fit(x_train, y_train / 48000.0)
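(A self-contained sketch of the Lambda-scaling option applied to the simple network from the question, assuming TF 2.x; everything except the sigmoid + Lambda head is taken from the question's code.)

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu', input_shape=(256, 256, 3)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation='sigmoid'),   # output bounded to (0, 1)
        tf.keras.layers.Lambda(lambda x: 48000.0 * x),    # rescale to the 0-48000 target range
    ])
    model.compile(loss='mse', optimizer='adam', metrics=['mae'])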