Getting "nan" for variables when using the gradient descent optimizer in TensorFlow

Asked: 2017-05-19 20:36:38

Tags: tensorflow

I'm trying to do something very similar to the "Getting Started" tutorial on the TensorFlow homepage. However, when I use the gradient descent trainer from the tutorial, my variables keep coming out as nan.

Can anyone help me figure out why?

import tensorflow as tf
import random

def generate_data(sample_count, slope, intercept, epsilon, min_x, max_x):
    xs = [random.uniform(min_x, max_x) for _ in range(sample_count)]
    ys = [slope * x + intercept + random.uniform(-epsilon, epsilon) for x in xs]
    return xs, ys

# Create Data
sample_count = 1000
slope = 3
intercept = 0 
epsilon = 20
min_x = 0
max_x = 100

xs, ys = generate_data(sample_count, slope, intercept, epsilon, min_x, max_x)

# Linear Model
initial_m = 1.
initial_b = 0.

x = tf.placeholder(tf.float32)
m = tf.Variable(initial_m, tf.float32)
b = tf.Variable(initial_b, tf.float32)
linear_model = m * x + b

# Loss Function
y = tf.placeholder(tf.float32)
loss = tf.reduce_sum(tf.square(y - linear_model))

# Train Model
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
training_iterations = 100

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(training_iterations):
        sess.run(train, {x: xs, y: ys})

    results = sess.run([m, b])
    print('true m: {} b: {}'.format(slope, intercept))
    print('optimized m: {} b: {}'.format(results[0], results[1]))

1 Answer:

Answer 0 (score: 5):

You should use reduce_mean instead of reduce_sum (1), and/or lower the learning rate.

(1) This is why they call it the "mean squared error".
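The arithmetic behind this answer can be reproduced without TensorFlow. With loss = sum of squared errors, the gradient scales with the number of samples, so a learning rate of 0.01 over 1000 points overshoots until the parameters overflow; averaging the loss (and, because x ranges up to 100 here, also lowering the rate) keeps the updates stable. The sketch below mirrors the question's data generation; the specific rate 1e-4 and the step counts are illustrative choices, not taken from the answer:

```python
import random

# Data mirroring the question's generate_data: y = 3x + noise, x in [0, 100].
random.seed(0)
n = 1000
xs = [random.uniform(0, 100) for _ in range(n)]
ys = [3 * x + random.uniform(-20, 20) for x in xs]

def step(m, b, lr, mean):
    """One gradient-descent step on the squared-error loss.

    With mean=False this is the gradient of sum((y - (m*x + b))**2);
    with mean=True the gradient is divided by n, as reduce_mean would do.
    """
    scale = 1.0 / n if mean else 1.0
    gm = scale * sum(-2 * x * (y - (m * x + b)) for x, y in zip(xs, ys))
    gb = scale * sum(-2 * (y - (m * x + b)) for x, y in zip(xs, ys))
    return m - lr * gm, b - lr * gb

# Sum loss at lr = 0.01 (the question's setup): each step multiplies the
# error by roughly lr * 2 * sum(x**2) ~ 6e4, so m overflows to inf/nan.
m_sum, b_sum = 1.0, 0.0
for _ in range(100):
    m_sum, b_sum = step(m_sum, b_sum, lr=0.01, mean=False)
print('sum loss,  lr=0.01 -> m =', m_sum)   # no longer finite

# Mean loss at a lowered rate: converges toward the true slope 3.
m_mean, b_mean = 1.0, 0.0
for _ in range(2000):
    m_mean, b_mean = step(m_mean, b_mean, lr=1e-4, mean=True)
print('mean loss, lr=1e-4 -> m =', m_mean)
```

The same reasoning gives a rule of thumb: gradient descent on a quadratic loss is stable only when the learning rate is below 2 divided by the loss curvature, and summing instead of averaging multiplies that curvature by the sample count.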