Neural network training and test loss are below 0.001, but accuracy when making predictions is 0%

Date: 2018-12-20 11:20:18

Tags: python tensorflow machine-learning keras

I have been training an MLP to predict the time remaining on an assembly sequence. The training loss, validation loss and MSE are all below 0.001, yet when I try to make predictions with one of the datasets I trained the network on, it fails to correctly identify any of the outputs from the input set. What am I doing wrong that causes this?

I am also struggling to understand how, once the model is deployed, the result of a prediction should be scaled back. scaler.inverse_transform cannot be used, because the scaler that was fitted during training is gone: prediction is done in a separate script from training, using the model that training produced. Is this information saved somewhere in the model builder?
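The scaler's parameters are not stored in the SavedModel, which only holds the graph and variables. A common workaround is to persist the fitted scaler to disk during training and reload it in the prediction script. A minimal sketch, assuming scikit-learn and joblib are available (the file name and the sample numbers are illustrative):

```python
import numpy as np
import joblib  # shipped alongside scikit-learn; also available as a standalone package
from sklearn.preprocessing import MinMaxScaler

# --- in the training script: fit the label scaler as usual ---
y_scaler = MinMaxScaler(feature_range=(0, 1))
train_labels = np.array([[0.0], [50.0], [100.0]])  # stand-in training labels
y_scaler.fit(train_labels)

# Persist the fitted scaler next to the model artefacts
joblib.dump(y_scaler, "y_scaler.pkl")

# --- in the separate prediction script: reload and inverse-transform ---
restored = joblib.load("y_scaler.pkl")
scaled_prediction = np.array([[0.5]])
print(restored.inverse_transform(scaled_prediction))  # [[50.]]
```

The same applies to the feature scaler: dump both after fitting, and load both before inference.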

I have tried changing the batch size during training, rounding the time column of the dataset to the nearest second (it was previously in 0.1 s steps), and training for 50, 100 and 200 epochs, and I always end up with no correct predictions. I am also training an LSTM to see which approach is more accurate, but it has the same problem. The dataset is split 70-30 into training and test, and the training portion is then split 75-25 into training and validation.
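The splits described above can be sketched with scikit-learn's train_test_split (the arrays below are stand-ins; the validation split then happens inside Keras via validation_split=0.25, as in the training code further down):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(200).reshape(100, 2)   # stand-in features: 100 rows, 2 columns
y = np.arange(100).reshape(100, 1)   # stand-in labels

# 70-30 train-test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Keras then carves 25% of the training set off as validation:
# model.fit(..., validation_split=0.25)
print(len(X_train), len(X_test))  # 70 30
```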

Data scaling and model training code:

def scale_data(training_data, training_data_labels, testing_data, testing_data_labels):
    # Create X and Y scalers between 0 and 1
    x_scaler = MinMaxScaler(feature_range=(0, 1))
    y_scaler = MinMaxScaler(feature_range=(0, 1))

    # Scale training data
    x_scaled_training = x_scaler.fit_transform(training_data)
    y_scaled_training = y_scaler.fit_transform(training_data_labels)

    # Scale testing data
    x_scaled_testing = x_scaler.transform(testing_data)
    y_scaled_testing = y_scaler.transform(testing_data_labels)

    return x_scaled_training, y_scaled_training, x_scaled_testing, y_scaled_testing


def train_model(training_data, training_labels, testing_data, testing_labels, number_of_epochs, number_of_columns):
    model_hidden_neuron_number_list = []
    model_repeat_list = []
    model_error_rate_list = []
    for hidden_layer_1_units in range(int(np.floor(number_of_columns / 2)), int(np.ceil(number_of_columns * 2))):
        print("Training starting, number of hidden units = %d" % hidden_layer_1_units)
        for repeat in range(1, 6):
            print("Repeat %d" % repeat)
            model = k.Sequential()
            model.add(Dense(hidden_layer_1_units, input_dim=number_of_columns,
                        activation='relu', name='hidden_layer_1'))
            model.add(Dense(1, activation='linear', name='output_layer'))
            model.compile(loss='mean_squared_error', optimizer='adam')

            # Train Model
            model.fit(
                training_data,
                training_labels,
                epochs=number_of_epochs,
                shuffle=True,
                verbose=2,
                callbacks=[logger],
                batch_size=1024,
                validation_split=0.25
            )

            # Test Model
            test_error_rate = model.evaluate(testing_data, testing_labels, verbose=0)

            print("Error on testing data is %.3f" % test_error_rate)

            model_hidden_neuron_number_list.append(hidden_layer_1_units)
            model_repeat_list.append(repeat)
            model_error_rate_list.append(test_error_rate)

            # Save Model
            model_builder = tf.saved_model.builder.SavedModelBuilder("MLP/models/{hidden_layer_1_units}/{repeat}".format(hidden_layer_1_units=hidden_layer_1_units, repeat=repeat))

            inputs = {
                'input': tf.saved_model.utils.build_tensor_info(model.input)
            }
            outputs = {
                'time_remaining': tf.saved_model.utils.build_tensor_info(model.output)
            }

            signature_def = tf.saved_model.signature_def_utils.build_signature_def(
                inputs=inputs,
                outputs=outputs,
                method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
            )

            model_builder.add_meta_graph_and_variables(
                K.get_session(),
                tags=[tf.saved_model.tag_constants.SERVING],
                signature_def_map={
                    tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
                }
            )

            # Save inside the repeat loop, so every model gets written out
            model_builder.save()

Then to make predictions:

file_name = top_level_file_path + "./MLP/models/19/1/"
testing_dataset = pd.read_csv(file_path + os.listdir(file_path)[0])
number_of_rows = len(testing_dataset.index)
number_of_columns = len(testing_dataset.columns)
newcol = [number_of_rows]
max_time = testing_dataset['Time'].max()

for j in range(0, number_of_rows - 1):
    newcol.append(max_time - testing_dataset.iloc[j].iloc[number_of_columns - 1])

x_scaler = MinMaxScaler(feature_range=(0, 1))
y_scaler = MinMaxScaler(feature_range=(0, 1))

# Scale the data (new scalers are fitted here, separate from training)
data_scaled = x_scaler.fit_transform(testing_dataset)
labels = pd.read_csv("Labels.csv")
labels_scaled = y_scaler.fit_transform(labels)

signature_key = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
input_key = 'input'
output_key = 'time_remaining'

with tf.Session(graph=tf.Graph()) as sess:
    saved_model = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], file_name)
    signature = saved_model.signature_def

    x_tensor_name = signature[signature_key].inputs[input_key].name
    y_tensor_name = signature[signature_key].outputs[output_key].name

    x = sess.graph.get_tensor_by_name(x_tensor_name)
    y = sess.graph.get_tensor_by_name(y_tensor_name)

    # np.expand_dims(data_scaled[600], axis=0)
    predictions = sess.run(y, {x: data_scaled})
    predictions = y_scaler.inverse_transform(predictions)
    # print(np.round(predictions, 2))

    correct_result = 0
    for i in range(0, number_of_rows):
        print(np.round(predictions[i]), " ", np.round(newcol[i]))
        if np.round(predictions[i]) == np.round(newcol[i]):
            correct_result += 1
    print((correct_result / number_of_rows) * 100)

The output for the first row should be 96.0 but comes out as 110.0, and the last row should be 0.1 but comes out as -40.0, even though no negative numbers appear in the dataset.

1 answer:

Answer 0 (score: 1)

First, you cannot compute accuracy when doing regression. Compute the mean squared error on the test set instead.
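For a regression model, error metrics such as MSE or MAE are the right thing to report; if a percentage is still wanted, a tolerance-based hit rate can stand in for accuracy. A minimal sketch with made-up predictions and an arbitrary 1-second tolerance:

```python
import numpy as np

y_true = np.array([96.0, 50.0, 10.0, 0.1])  # hypothetical remaining times
y_pred = np.array([95.5, 49.0, 12.0, 0.4])  # hypothetical model outputs

# Standard regression error metrics
mse = np.mean((y_true - y_pred) ** 2)
mae = np.mean(np.abs(y_true - y_pred))

# A percentage only makes sense with an explicit tolerance,
# e.g. "prediction within 1 second of the true value"
tolerance = 1.0
hit_rate = np.mean(np.abs(y_true - y_pred) <= tolerance) * 100

print(mse, mae, hit_rate)
```

Exact-equality comparison of rounded floats, as in the question's scoring loop, is far stricter than any of these and will often report 0%.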

Second, regarding the scalers: you should only ever call scaler.fit_transform on the training data, so that the scaler computes its parameters (in this case the min and max, since it is a min-max scaler) from the training set alone. Then, when running inference on the test set, you should only call scaler.transform before feeding the data to the model.
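The difference can be seen in a tiny sketch with synthetic numbers: the test data is mapped using the min and max learned from the training data, so test values may legitimately fall outside [0, 1]:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[0.0], [50.0], [100.0]])
test = np.array([[25.0], [150.0]])

scaler = MinMaxScaler(feature_range=(0, 1))

# fit_transform on the TRAINING data only: min=0 and max=100 are learned here
train_scaled = scaler.fit_transform(train)

# transform (NOT fit_transform) on the test data, reusing the training min/max
test_scaled = scaler.transform(test)

print(test_scaled.ravel())  # 0.25 and 1.5 -- values beyond the training range exceed 1
```

Calling fit_transform on the test set instead would silently remap it to its own range, which is exactly why the predictions in the question come back shifted and even negative after inverse_transform with a mismatched scaler.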