Multilayer perceptron with sigmoid activations produces a straight line on sin(2x) regression

Asked: 2018-02-19 11:20:17

Tags: python tensorflow machine-learning deep-learning

I am trying to fit noisy samples of the sin(2x) function with a multilayer perceptron:

import tensorflow as tf
import matplotlib.pyplot as plt

# Get data
datasets = gen_datasets()
# Add noise
datasets["ysin_train"] = add_noise(datasets["ysin_train"])
datasets["ysin_test"] = add_noise(datasets["ysin_test"])
# Extract wanted data
patterns_train = datasets["x_train"]
targets_train = datasets["ysin_train"]
patterns_test = datasets["x_test"]
targets_test = datasets["ysin_test"]
# Reshape to fit model
patterns_train = patterns_train.reshape(62, 1)
targets_train = targets_train.reshape(62, 1)
patterns_test = patterns_test.reshape(62, 1)
targets_test = targets_test.reshape(62, 1)

# Parameters
learning_rate = 0.001
training_epochs = 10000
batch_size = patterns_train.shape[0]
display_step = 1

# Network Parameters
n_hidden_1 = 2
n_hidden_2 = 2
n_input = 1
n_classes = 1

# tf Graph input
X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_classes])

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Create model
def multilayer_perceptron(x):
    # Hidden fully connected layer with 2 neurons
    layer_1 = tf.sigmoid(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
    # Hidden fully connected layer with 2 neurons
    layer_2 = tf.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    # Output fully connected layer
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Construct model
logits = multilayer_perceptron(X)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.losses.absolute_difference(labels = Y, predictions = logits, reduction=tf.losses.Reduction.NONE))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Initializing the variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    # Training Cycle
    for epoch in range(training_epochs):

        _ = sess.run(train_op, feed_dict={X: patterns_train,
                                          Y: targets_train})
        c = sess.run(loss_op, feed_dict={X: patterns_test,
                                         Y: targets_test})
        if epoch % display_step == 0:
            print("Epoch: {0: 4} cost={1:9}".format(epoch+1, c))
    print("Optimization finished!")
    outputs = sess.run(logits, feed_dict={X: patterns_test})
    print("outputs: {0}".format(outputs.T))
    plt.plot(patterns_test, outputs, "r.", label="outputs")
    plt.plot(patterns_test, targets_test, "b.", label="targets")
    plt.legend()
    plt.show()
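
(gen_datasets() and add_noise() are not shown above; a minimal hypothetical sketch of what they might look like, assuming 62 evenly spaced points per set and additive Gaussian noise, is:)

import numpy as np

def gen_datasets():
    # Hypothetical stand-in: 62 evenly spaced training inputs on [0, 2*pi],
    # test inputs shifted by half a step
    x_train = np.linspace(0.0, 2 * np.pi, 62)
    x_test = x_train + (x_train[1] - x_train[0]) / 2.0
    return {"x_train": x_train, "ysin_train": np.sin(2 * x_train),
            "x_test": x_test, "ysin_test": np.sin(2 * x_test)}

def add_noise(y, sigma=0.1):
    # Hypothetical stand-in: additive zero-mean Gaussian noise
    return y + np.random.normal(0.0, sigma, y.shape)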

When I plot the outputs at the end, I get a straight line, as if I had a linear network. Take a look at the plot:

[plot: the red output points fall on a straight line while the blue targets follow the noisy sine curve]

That would be the correct error minimization for a linear network. But I shouldn't get a linear fit, because I use sigmoid activations in the multilayer_perceptron() function! Why is my network behaving like this?

1 Answer:

Answer 0 (score: 1):

The default value stddev=1.0 in tf.random_normal, which you use for the weight & bias initialization, is huge. Try an explicit stddev=0.01 for the weights; as for the biases, the common practice is to initialize them to zero.
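
A sketch of how the initialization in the question could look with this suggestion applied (stddev=0.01 and zero biases are starting points, not tuned values):

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=0.01)),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=0.01)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes], stddev=0.01))
}
biases = {
    'b1': tf.Variable(tf.zeros([n_hidden_1])),
    'b2': tf.Variable(tf.zeros([n_hidden_2])),
    'out': tf.Variable(tf.zeros([n_classes]))
}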

As a first approach, I would also try a higher learning_rate of 0.01 (or maybe not - see the answer in a related question here).
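
If you do try the higher learning rate, it is a one-line change to the optimizer in the question (0.01 is only the suggested starting value):

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)  # was 0.001
train_op = optimizer.minimize(loss_op)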