How can I stop TensorFlow from automatically printing tensor values?

Asked: 2018-09-15 08:59:52

Tags: tensorflow

I am new to TensorFlow.

I am trying to write a very simple gradient update routine using the parameter update equation, e.g. tf.assign(b, b - alpha * dL_db).

However, whenever this op is called during session.run(), the tensor contents keep getting printed out. Is there a way to suppress this intermediate output? Thanks.
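
For reference, my understanding (which may be where I am going wrong) is that sess.run() only returns the value to Python and does not print anything by itself; the echo seems to come from an interactive shell (IPython/Jupyter) displaying the result of the last expression in a cell. A minimal sketch of what I mean, with made-up numbers:

import tensorflow as tf

b = tf.Variable(0.0)
update = tf.assign(b, b - 0.001 * 2.0)  # toy update with a constant standing in for the gradient

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(update)  # capturing the return value: a plain script prints nothing here
    # a bare `sess.run(update)` as the last line of a notebook cell would be displayed, though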

My code is below:

import numpy as np
import tensorflow as tf

alpha = 0.001   # step size coefficient
eps = 0.10000   # controls convergence criterion (not used below)
n_epoch = 1000  # number of epochs (full passes through the dataset)

# begin simulation
# set X (training data) and y (target variable)
# NOTE: `data` is assumed to be a pandas DataFrame loaded earlier
# (feature columns first, target variable in the last column)
cols = data.shape[1]

# convert from data frames to numpy matrices
Xin = np.array(data.iloc[:,0:cols-1].values)  
yin = np.array(data.iloc[:,cols-1:cols].values)

#TODO convert np array to tensor objects
X = tf.placeholder(dtype=tf.float32, shape=[None, 1])
y = tf.placeholder(dtype=tf.float32, shape=[None, 1])
m = tf.placeholder(dtype=tf.float32)

#TODO create a placeholder variable for X(input) and Y(output)
# convert to numpy arrays and initialize the parameter array theta
b = tf.Variable(tf.zeros((1)))
w = tf.Variable(tf.zeros((1,X.shape[1])))
theta = (b,w)
regress = theta[0]+theta[1]*X  # linear model: y_hat = b + w*X
computeCost = tf.matmul(tf.transpose(regress-y),regress-y)[0][0]/m  # mean squared error
dL_dw = tf.matmul(tf.transpose(regress-y),X)[0][0]/m  # gradient of the cost w.r.t. w
dL_db = tf.reduce_sum(regress-y)/m  # gradient of the cost w.r.t. b
nabla = (dL_db, dL_dw) # nabla represents the full gradient

#print("-1 L = {0}".format(tf.Session().run(computeCost,feed_dict={X:Xin,y:yin,m:Xin.shape[0]})))
#L_best = L
i = 0
cost = [] # you can use this list variable to help you create the loss versus epoch plot at the end (if you want)

b_training = tf.assign(b, b-alpha*dL_db)
w_training = tf.assign(w, w-alpha*dL_dw)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    while(i < n_epoch):
        sess.run(b_training,feed_dict={X:Xin,y:yin,m:Xin.shape[0]})
        sess.run(w_training,feed_dict={X:Xin,y:yin,m:Xin.shape[0]})
        L = sess.run(computeCost,feed_dict={X:Xin,y:yin,m:Xin.shape[0]})
        cost.append(L)
        print(" {0} L = {1}".format(i,L))
        i += 1
    best_b = b.eval()
    best_w = w.eval()
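
For completeness, here is a sketch of the same loop with both updates and the cost evaluated in a single sess.run() call, and every return value captured, so nothing is echoed unless explicitly printed (untested, just to illustrate what I mean by suppressing the intermediate output):

with tf.Session() as sess:
    sess.run(init)
    feed = {X: Xin, y: yin, m: Xin.shape[0]}
    for epoch in range(n_epoch):
        # capture every return value; only the explicit print below produces output
        # (note: within a single run() call the cost may be evaluated before or after the assigns)
        _, _, L = sess.run([b_training, w_training, computeCost], feed_dict=feed)
        cost.append(L)
        print(" {0} L = {1}".format(epoch, L))
    best_b, best_w = sess.run([b, w])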

0 Answers:

No answers yet.