How to build an MLP using the weights from TensorFlow

Date: 2018-10-26 07:46:28

Tags: python tensorflow

I built a neural network in TensorFlow and trained it. Then I extracted the weights and biases from the estimator:
weights1 = self.model.get_variable_value('dense/kernel')
bias1 = self.model.get_variable_value('dense/bias')
weights2 = self.model.get_variable_value('dense_1/kernel')
bias2 = self.model.get_variable_value('dense_1/bias')
...
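
For reference, the variable names stored in the estimator can be listed like this (a minimal sketch; it assumes self.model is the trained tf.estimator.Estimator used above):

# Print every variable stored in the Estimator's checkpoint along with its shape.
for name in self.model.get_variable_names():
    print(name, self.model.get_variable_value(name).shape)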

Then I rebuilt the MLP in Python with NumPy:

layer1 = np.dot(inputs, weights1)
layer1 = np.add(layer1, bias1)
layer1 = np.maximum(layer1, layer1 * 0.2, layer1)  # leaky ReLU, alpha = 0.2
...

I used the leaky_relu activation function, so I implemented it as well, but the output is completely different from what TensorFlow produces. I can't figure out what is going wrong.
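
Putting it together, the full NumPy forward pass I am aiming for looks roughly like this (a minimal sketch with one hidden layer; it assumes a leaky-ReLU slope of 0.2, which is the default alpha of tf.nn.leaky_relu, that weights2/bias2 come from dense_1, and that inputs is the same matrix tf.feature_column.input_layer would produce):

import numpy as np

def numpy_forward(inputs, weights1, bias1, weights2, bias2):
    # Hidden layer: dense/kernel and dense/bias extracted above.
    layer1 = np.dot(inputs, weights1) + bias1
    # Leaky ReLU with slope 0.2 (the default alpha of tf.nn.leaky_relu).
    layer1 = np.where(layer1 > 0, layer1, 0.2 * layer1)
    # Linear output layer: dense_1/kernel and dense_1/bias, no activation.
    output = np.dot(layer1, weights2) + bias2
    # Mirrors tf.squeeze(output_layer, 1) in the model_fn.
    return np.squeeze(output, axis=1)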

Edit:

def my_dnn_regression_fn(features, labels, mode, params):
    """Model function for a custom DNN regression estimator."""
    top = tf.feature_column.input_layer(features, params["feature_columns"])

    for units in params.get("hidden_units", [20]):
        top = tf.layers.dense(inputs=top, units=units, activation=tf.nn.leaky_relu)

    output_layer = tf.layers.dense(inputs=top, units=1)

    predictions = tf.squeeze(output_layer, 1)

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode=mode, predictions={"label": predictions})

    average_loss = tf.losses.mean_squared_error(labels, predictions)

    batch_size = tf.shape(labels)[0]
    total_loss = tf.to_float(batch_size) * average_loss

    if mode == tf.estimator.ModeKeys.TRAIN:
        mse = tf.metrics.mean_squared_error(labels, predictions)
        rmse = tf.metrics.root_mean_squared_error(labels, predictions)
        absolute_error = tf.metrics.mean_absolute_error(labels, predictions)
        mre = tf.metrics.mean_relative_error(labels, predictions, labels)

        tf.summary.scalar('mse', mse[1])
        tf.summary.scalar('mre', mre[1])
        tf.summary.scalar('rmse', rmse[1])
        tf.summary.scalar('absolute', absolute_error[1])

        # vars = tf.trainable_variables()
        # lossL2 = tf.add_n([tf.nn.l2_loss(v) for v in vars]) * 0.001

        # L1-regularize all trainable variables; the penalty is added to the loss below.
        l1_regularizer = tf.contrib.layers.l1_regularizer(
            scale=0.001, scope=None
        )
        weights = tf.trainable_variables()  # all vars of your graph
        lossL1 = tf.contrib.layers.apply_regularization(l1_regularizer, weights)

        # average_loss = tf.add(average_loss, lossL2)
        average_loss = tf.add(average_loss, lossL1)

        total_loss = tf.to_float(batch_size) * average_loss

        optimizer = params.get("optimizer", tf.train.AdamOptimizer)
        optimizer = optimizer(params.get("learning_rate", None))
        train_op = optimizer.minimize(
            loss=average_loss, global_step=tf.train.get_global_step())
        # eval_metrics = {"rmse": rmse, "absolute": absolute_error, "mre": mre}
        eval_metrics = {"mse": mse, "rmse": rmse, "absolute": absolute_error, "mre": mre}
        return tf.estimator.EstimatorSpec(
            mode=mode, loss=total_loss, train_op=train_op, eval_metric_ops=eval_metrics)

    assert mode == tf.estimator.ModeKeys.EVAL

    mse = tf.metrics.mean_squared_error(labels, predictions)
    rmse = tf.metrics.root_mean_squared_error(labels, predictions)
    absolute_error = tf.metrics.mean_absolute_error(labels, predictions)
    mre = tf.metrics.mean_relative_error(labels, predictions, labels)

    eval_metrics = {"mse": mse, "rmse": rmse, "absolute": absolute_error, "mre": mre}

    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=total_loss,
        eval_metric_ops=eval_metrics)

My DNN regression code!!
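
For completeness, the model_fn above is wired into an estimator roughly as follows (a sketch only; the feature column names, hidden_units, and learning_rate here are placeholders, not my exact settings):

import tensorflow as tf

# Placeholder feature columns; the real ones depend on the dataset.
feature_columns = [tf.feature_column.numeric_column(name) for name in ["x1", "x2"]]

model = tf.estimator.Estimator(
    model_fn=my_dnn_regression_fn,
    params={
        "feature_columns": feature_columns,
        "hidden_units": [20],
        "optimizer": tf.train.AdamOptimizer,
        "learning_rate": 0.001,
    })

# After model.train(input_fn=...), the layer weights can be read back by name,
# e.g. model.get_variable_value('dense/kernel') for the first hidden layer.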

0 Answers:

No answers yet.