TensorFlow multilayer perceptron graph does not converge

Time: 2016-11-28 06:53:28

Tags: python tensorflow

I'm new to Python and TensorFlow, though I have a somewhat better (maybe) understanding of DNNs and the math behind them. I started learning to use TensorFlow through exercises.

One of my exercises is to predict x^2. That means, after training well, the network should predict 25.0 when I feed it 5.0.

Parameters and settings:

cost function = E((y - y')^2)

Two hidden layers, fully connected.

learning_rate = 0.001
n_hidden_1 = 3
n_hidden_2 = 2
n_input = 1
n_output = 1

import numpy as np
import tensorflow as tf

def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

def generate_input():
    import random

    val = random.uniform(-10000, 10000)
    return np.array([val]).reshape(1, -1), np.array([val*val]).reshape(1, -1)


# tf Graph input
# given one value and output one value
x = tf.placeholder("float", [None, 1])
y = tf.placeholder("float", [None, 1])
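
# `weights` and `biases` are used below but never shown in the post; a
# plausible reconstruction from the layer sizes listed above (the
# random-normal initializer is an assumption):
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_output])),
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_output])),
}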
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
distance = tf.sub(pred, y)
cost = tf.reduce_mean(tf.pow(distance, 2))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    avg_cost = 0.0

    for iter in range(10000):
        inp, ans = generate_input()
        _, c = sess.run([optimizer, cost], feed_dict={x: inp, y: ans})
        print('iter: '+str(iter)+' cost='+str(c))
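
    # Hypothetical sanity check, not in the original post: after training,
    # feeding 5.0 should yield roughly 25.0 if the network has converged.
    print(sess.run(pred, feed_dict={x: np.array([[5.0]], dtype=np.float32)}))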

However, it turns out that c sometimes gets larger and sometimes smaller, but it always stays large.

1 Answer:

Answer 0 (score: 2):

Your training data seems to span a very large range because of the statement val = random.uniform(-10000, 10000), so try some data preprocessing before training. For example,

val = random.uniform(-10000, 10000)
val = np.asarray(val).reshape(1, -1)
val -= np.mean(val, axis=0)
val /= np.std(val, axis=0)
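
One caveat: with a single scalar sample, np.std(val, axis=0) is 0, so the division above blows up. Since the inputs are drawn from a known fixed range, a simpler alternative is to scale by that range; below is a minimal sketch of generate_input rewritten this way (the ±10000 bound comes from the question):

import random

import numpy as np

def generate_input():
    raw = random.uniform(-10000, 10000)
    scaled = raw / 10000.0  # inputs now lie in [-1, 1]
    # targets then lie in [0, 1], a far easier range for the net to fit
    return (np.array([[scaled]], dtype=np.float32),
            np.array([[scaled * scaled]], dtype=np.float32))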

As for the loss value sometimes getting larger and sometimes smaller, just make sure the loss is decreasing overall as the training epochs increase. PS: the SGD optimizer is often used.
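
Also, since each step here trains on a single random sample, the per-step cost is inherently noisy. Here is a minimal sketch of a smoother way to watch the trend, reusing the session and ops from the question (the window size of 100 is an arbitrary choice):

window = 100   # arbitrary smoothing window
running = 0.0
for step in range(10000):
    inp, ans = generate_input()
    _, c = sess.run([optimizer, cost], feed_dict={x: inp, y: ans})
    running += c
    if (step + 1) % window == 0:
        # report the mean cost over the last `window` steps
        print('step %d: mean cost = %g' % (step + 1, running / window))
        running = 0.0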