NaN cost when training a perceptron in TensorFlow

Time: 2016-09-13 13:02:37

Tags: python tensorflow

I am trying to train a single-layer perceptron in TensorFlow (based on the code from this) on the following data file:

1,1,0.05,-1.05
1,1,0.1,-1.1
....

where the last column is the label (a function of the 3 parameters) and the first three columns are the arguments of that function. The code that reads the data and trains the model (simplified here for readability):

import tensorflow as tf

... # some basics to read the data
example, label = read_file_format(filename_queue)
... # model construction and parameter setting
n_hidden_1 = 4 # 1st layer number of features
n_input = 3
n_output = 1
...

# calls a function which produces a prediction
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        _, c = sess.run([optimizer, cost], feed_dict={x: example.reshape(1,3), y: label.reshape(-1,1)})
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "Cost:",c)

But when I run it, something seems to be wrong:

('Epoch:', '0001', 'Cost:', nan)
('Epoch:', '0002', 'Cost:', nan)
....
('Epoch:', '0015', 'Cost:', nan)

Here is the full code with the multilayer_perceptron function and the rest:

# Parameters
learning_rate = 0.001
training_epochs = 15
display_step = 1

# Network Parameters
n_hidden_1 = 4 # 1st layer number of features
n_input = 3 
n_output = 1 

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_output])

# Create model
def multilayer_perceptron(x, weights, biases):
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
    return out_layer

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_output]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_output]))
}
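
The question elides read_file_format behind the "..." comment; a hypothetical sketch of such a helper for this four-column CSV, using the queue-based pre-1.0 input API (every detail here is an assumption, not the asker's actual code):

def read_file_format(filename_queue):
    # Read one line of the CSV at a time from the filename queue.
    reader = tf.TextLineReader()
    _, line = reader.read(filename_queue)
    # One default per column, which also fixes each column's dtype to float.
    record_defaults = [[0.0], [0.0], [0.0], [0.0]]
    c1, c2, c3, label = tf.decode_csv(line, record_defaults=record_defaults)
    example = tf.pack([c1, c2, c3])  # tf.pack was renamed tf.stack in TF 1.0
    return example, label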

1 Answer:

Answer 0 (score: 4)

Is this one example at a time? I would do batches, and increase the batch size to 128 or so, for as long as you are getting NaNs.
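
A minimal sketch of that batching idea, assuming the asker's queue-based pipeline and the pre-1.0 tf.train API; the surrounding wiring is illustrative:

example, label = read_file_format(filename_queue)
# Group single reader outputs into batches of 128; the label is reshaped so
# the batched labels have shape [128, 1] to match the model output.
example_batch, label_batch = tf.train.batch(
    [example, tf.reshape(label, [1])], batch_size=128)

# The batched tensors feed the model directly, so no feed_dict is needed;
# cost and optimizer are defined as before, against the batch tensors.
pred = multilayer_perceptron(example_batch, weights, biases)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Queue runners must be started so reader threads can fill the batches.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for epoch in range(training_epochs):
        _, c = sess.run([optimizer, cost])
    coord.request_stop()
    coord.join(threads)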

When I get NaNs, it is usually one of three things:

- the batch size is too small (in your case just 1)
- log(0) somewhere
- the learning rate is too high and the gradients are uncapped
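
For the second and third points, a hedged sketch of both guards in the same era's API; the hand-written cross entropy is purely illustrative, not the asker's loss:

# Guard against log(0) by clipping probabilities away from zero.
safe_probs = tf.clip_by_value(tf.nn.softmax(pred), 1e-10, 1.0)
cost = tf.reduce_mean(
    -tf.reduce_sum(y * tf.log(safe_probs), reduction_indices=[1]))

# Cap the gradients instead of calling minimize() directly.
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)
capped = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars]
train_op = optimizer.apply_gradients(capped)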