I'm trying to understand how TensorBoard visualizes a graph. For that purpose I'm using a simple linear regression. Here is my code:
# LINEAR REGRESSION IN TENSORFLOW
# generate points
import numpy as np
import os
import time
import tensorflow as tf

num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
    vectors_set.append([x1, y1])

with tf.name_scope('data') as scope:
    x_data = [v[0] for v in vectors_set]
    y_data = [v[1] for v in vectors_set]

# Cost function and gradient descent algorithm
with tf.name_scope('model') as scope:
    W = tf.Variable(tf.random_uniform([1], -1, 1), name="W")
    b = tf.Variable(tf.zeros([1]), name="b")
    z = tf.add(W * x_data, b, name="z")
with tf.name_scope('loss') as scope:
    loss = tf.reduce_mean(tf.square(z - y_data))
    optimizer = tf.train.GradientDescentOptimizer(0.5)
    train = optimizer.minimize(loss)

# Running the algorithm
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

timestamp = str(int(time.time()))
print(timestamp)
train_summary_writer = tf.train.SummaryWriter(
    os.path.join("./", "summaries", timestamp), sess.graph)
train_summary_writer.add_graph(sess.graph)
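To view the result I start TensorBoard with tensorboard --logdir=summaries and open http://localhost:6006 in a browser, then look at the GRAPHS tab.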
My question is:

Thank you very much!
Answer 0 (score: 4)
When you create the tf.train.GradientDescentOptimizer and call minimize(loss), it adds the gradient-computation and variable-update ops to your graph. Your code specifies that the GradientDescentOptimizer should minimize loss, which means the resulting train op depends on loss; and in order to minimize loss it has to update the weights in the model, so it also depends on W and b. That is why those nodes end up connected in the visualization.
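As a sketch, minimize(loss) is roughly equivalent to the two calls below; splitting it up makes those dependencies explicit (the 'train' name scope here is just mine, for illustration):

with tf.name_scope('train') as scope:
    optimizer = tf.train.GradientDescentOptimizer(0.5)
    # compute_gradients reads loss, so this op gets an incoming edge from the 'loss' scope
    grads_and_vars = optimizer.compute_gradients(loss)
    # apply_gradients writes to W and b, so it is also connected to the 'model' scope
    train = optimizer.apply_gradients(grads_and_vars)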
I'm not sure; could you upload the graph definition? (You can get the graph def from the session.)
When we added tensor shapes we disabled the arrowheads, but a lot of people have asked for them, so we will put them back.
with tf.name_scope('data') as scope:
    x_data = [v[0] for v in vectors_set]
    y_data = [v[1] for v in vectors_set]
The name_scope isn't doing anything here, because you aren't creating any TensorFlow ops inside it; you are only building Python lists. Instead, you should consider using placeholders.
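For example, a minimal sketch with placeholders (the names x and y below are mine), which actually puts data nodes into the graph and lets you feed the NumPy values at run time:

with tf.name_scope('data') as scope:
    x = tf.placeholder(tf.float32, shape=[None], name="x")
    y = tf.placeholder(tf.float32, shape=[None], name="y")

# The model would then use x instead of x_data, e.g. z = tf.add(W * x, b, name="z"),
# and the Python lists are supplied when the graph is run:
# sess.run(train, feed_dict={x: x_data, y: y_data})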