TensorFlow - what to do if mainly the biases are being updated?

Date: 2018-08-22 12:52:19

Tags: tensorflow

I use a pretrained network from TensorFlow Hub and pass the resulting feature vector through two fully connected layers. The weight matrices are initialized with He initialization, the biases with zeros.

The loss behaves strangely. The optimizer does update the weight matrices to some extent, but it mainly updates the biases.

Does anyone know how to improve the learning?

Thanks!

Correct classification and loss

Histogram of weights

with tf.name_scope('tf_hub'):
    module = hub.Module("https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/2")
    tf_hub_features = module(X)  # Features with shape [batch_size, num_features].

he_initializer = tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_IN', uniform=False)

with tf.name_scope('Hidden1'):
    W1 = tf.get_variable(initializer=he_initializer, shape=[Constants.PNAS_NET2_NB_FEATURES, config["h1_nb_units"]],
                         name="W1")
    # W1 = tf.Variable(tf.random_normal([Constants.PNAS_NET2_NB_FEATURES, config["h1_nb_units"]]), name="W1")
    tf.summary.histogram("W1", W1)
    b1 = tf.Variable(tf.zeros([config["h1_nb_units"]]), name="b1")
    tf.summary.histogram("b1", b1)
    o1 = tf.nn.relu(tf.matmul(tf_hub_features, W1) + b1, name="o1")
    # NOTE: tf.layers.dropout's `rate` is the fraction *dropped*, not kept.
    # dropout1 = tf.layers.dropout(inputs=o1, rate=1 - config["keep_probability"], name="dropout1")

with tf.name_scope('Hidden2'):
    W2 = tf.get_variable(initializer=he_initializer, shape=[config["h1_nb_units"], config["h2_nb_units"]],
                         name="W2")
    tf.summary.histogram("W2", W2)
    b2 = tf.Variable(tf.zeros([config["h2_nb_units"]]), name="b2")
    tf.summary.histogram("b2", b2)
    o2 = tf.nn.relu(tf.matmul(o1, W2) + b2, name="o2")

with tf.name_scope('Y'):
    WY = tf.get_variable(initializer=he_initializer, shape=[config["h2_nb_units"], config["output_dim"]],
                         name="WY")
    tf.summary.histogram("WY", WY)
    bY = tf.Variable(tf.zeros([config["output_dim"]]), name="bY")
    tf.summary.histogram("bY", bY)
    Y_star = tf.add(tf.matmul(o2, WY), bY, name="Y_star")
    Y = tf.nn.sigmoid(Y_star, name="Y")

with tf.name_scope('loss'):
    Y_ = tf.placeholder(tf.float32, shape=(None, 1), name="Y_")
    loss = tf.losses.log_loss(Y_, Y)  # Y is the sigmoid output above (Y_hat was undefined)

optimizer = tf.train.AdamOptimizer(config["learning_rate"])
train_step = optimizer.minimize(loss)
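Before guessing why mainly the biases move, it helps to measure which parameters actually receive gradient. Below is a minimal NumPy sketch of that check for a single sigmoid layer trained with log loss (manual backprop; the batch size, feature count, and He-style init are hypothetical stand-ins for the layers above — in the TF1 graph itself one would instead log `tf.gradients(loss, [W1, b1, ...])` to TensorBoard):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 8                                          # hypothetical batch size / feature count
X = rng.normal(size=(n, d))                           # well-scaled inputs
y = rng.integers(0, 2, size=(n, 1)).astype(float)     # binary labels

W = rng.normal(scale=np.sqrt(2.0 / d), size=(d, 1))   # He-style init, as in the question
b = np.zeros((1, 1))                                  # zero bias, as in the question

p = 1.0 / (1.0 + np.exp(-(X @ W + b)))                # sigmoid forward pass
err = (p - y) / n                                     # d(mean log loss)/d(logits)
grad_W = X.T @ err                                    # gradient w.r.t. the weights
grad_b = err.sum(axis=0)                              # gradient w.r.t. the bias

print("||grad_W|| =", np.linalg.norm(grad_W))
print("||grad_b|| =", np.linalg.norm(grad_b))
```

With well-scaled inputs, both norms are of comparable magnitude; a weight-gradient norm orders of magnitude below the bias-gradient norm points at an input-scaling problem rather than at the optimizer.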

1 answer:

Answer 0: (score: 0)

The answer was simple: there was a bug in feeding the inputs. They were almost entirely zeros, with only a few nonzero values, so the weights received only tiny updates. I suspect the bias still adapted because it learns something like the intercept (the "mean") in a linear regression.
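This effect can be reproduced directly. In the NumPy sketch below (hypothetical shapes; labels made imbalanced so the bias gradient is visibly nonzero), near-all-zero inputs make the weight gradient of a sigmoid/log-loss layer vanish while the bias gradient stays at roughly `mean(p - y)`, so only the bias drifts toward the label mean; standardizing the inputs restores the weight gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 256, 16
# Badly scaled inputs, as in the answer: almost all zeros, a few tiny values.
X_bad = np.zeros((n, d))
X_bad[rng.random((n, d)) < 0.02] = 0.01
y = (rng.random((n, 1)) < 0.2).astype(float)          # imbalanced binary labels
W = rng.normal(scale=np.sqrt(2.0 / d), size=(d, 1))   # He-style init
b = np.zeros((1, 1))

def grad_norms(X):
    """Return (||dL/dW||, ||dL/db||) for sigmoid + mean log loss."""
    p = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    err = (p - y) / n
    return np.linalg.norm(X.T @ err), np.linalg.norm(err.sum(axis=0))

gw_bad, gb_bad = grad_norms(X_bad)

# Fix: standardize each feature (eps guards all-zero columns).
X_std = (X_bad - X_bad.mean(axis=0)) / (X_bad.std(axis=0) + 1e-8)
gw_std, gb_std = grad_norms(X_std)

print("raw inputs:         ||grad_W|| =", gw_bad, " ||grad_b|| =", gb_bad)
print("standardized inputs: ||grad_W|| =", gw_std, " ||grad_b|| =", gb_std)
```

Since `dL/dW = X.T @ (p - y) / n`, near-zero `X` scales the weight gradient down by the same factor, while `dL/db = mean(p - y)` is independent of the input scale — exactly the "only the bias moves" symptom from the question.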