How to update a tensor with a new value computed from its old value in the graph?

Time: 2019-07-01 07:48:09

Tags: python python-3.x tensorflow tensor

I am very new to TensorFlow. I want to use the old value of a tensor to compute its new value. tf.assign only works on tf.Variables, and I am not sure how to implement this as a tensor operation.

The following code is not the actual snippet, but the idea is the same.

data.csv
inp1            inp2
288.15          288.15
289.87912       303.10137
291.60825       318.05275
292.90509       329.26628
294.20194       340.47981
295.75815       353.93605
297.31436       367.39229
298.87057       380.84852
300.42679       394.30476
301.983         407.761

import tensorflow as tf
import pandas as pd
import numpy as np   # needed for np.arange / np.random.shuffle below

inp1 = tf.placeholder("float", [None, 1],name="inp1")
inp2 = tf.placeholder("float", [None, 1],name="inp2")


# PREVop means previous value of op i.e. op [i-1]

dummy_op = tf.add(inp1, 10)

op = tf.Variable(dummy_op,validate_shape=False,dtype=tf.float32)


# op[i] = (op[i-1]*inp2[i]) + inp1[i]
op = tf.add(tf.multiply(PREVop, inp2), inp1) 

label = tf.placeholder("float", [None,1],name="label")


learning_rate = 1e-2
loss_op = tf.losses.absolute_difference(label, op)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=0.1)
train_op = optimizer.minimize(loss_op)

inp = pd.read_csv("data.csv")
batch_size = 20
training_steps = 100

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print ("Training starts.......")
    for step in range(training_steps):
        avg_cost = 0.
        total_batch = len(inp)//batch_size
        for i in range(total_batch):
            idx = np.arange(len(inp))
            np.random.shuffle(idx)
            idx = idx[i*batch_size:(i+1)*batch_size]
            _, c = sess.run([train_op, loss_op], feed_dict={
                inp1: input1.values[idx],    # pandas
                inp2: input2.values[idx],    # pandas
                label: target.values[idx],   # pandas
            })


During training, I want op = tf.add(tf.multiply(PREVop, inp2), inp1) to use the previous value of op for each sample, as sketched below.
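To make the intended recurrence concrete, here is a rough plain-NumPy sketch of the computation (the seeding of the first step with inp1 + 10 mirrors dummy_op above, and that seeding is exactly the part I am not sure how to express in the graph):

import numpy as np

# First few rows of data.csv
inp1 = np.array([288.15, 289.87912, 291.60825], dtype=np.float32)
inp2 = np.array([288.15, 303.10137, 318.05275], dtype=np.float32)

op = np.empty_like(inp1)
prev = inp1[0] + 10.0                 # seed, analogous to dummy_op = tf.add(inp1, 10)
for i in range(len(inp1)):
    op[i] = prev * inp2[i] + inp1[i]  # op[i] = op[i-1] * inp2[i] + inp1[i]
    prev = op[i]                      # this step's result becomes PREVop for the next sample
print(op)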

Any suggestions would be appreciated.

1 Answer:

Answer 0 (score: 1)

Since the value of op keeps changing, you can store it in a tf.Variable() and update that variable after every iteration. Here the tf.Variable() is initialized with a zero tensor at the start.

import tensorflow as tf
import numpy as np

inp1 = tf.placeholder(tf.float32, [None, 1], name="inp1")
inp2 = tf.placeholder(tf.float32, [None, 1], name="inp2")

PREVop = tf.Variable(tf.zeros([2, 1]), dtype=tf.float32)

out = tf.add(tf.multiply(PREVop, inp2), inp1) 
PREVop = tf.assign(PREVop, out)  

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(3):
        res, var = sess.run([out, PREVop],
                            feed_dict={inp1: np.random.rand(2, 1),
                                       inp2: np.random.rand(2, 1)})
        print('out operation result: \n{}'.format(res))
        print('PREVop value after assigning: \n{}'.format(var))
        print(20*'-')

Output:

out operation result: 
[[0.86163723]
 [0.7938016 ]]
PREVop value after assigning: 
[[0.86163723]
 [0.7938016 ]]
--------------------
out operation result: 
[[0.5666107]
 [0.9492748]]
PREVop value after assigning: 
[[0.5666107]
 [0.9492748]]
--------------------
out operation result: 
[[0.89638215]
 [0.93310213]]
PREVop value after assigning: 
[[0.89638215]
 [0.93310213]]
--------------------

Update: So you want to initialize PREVop with tf.add(inp1, 10) and afterwards update it with the value of op, i.e. tf.add(tf.multiply(PREVop, inp2), inp1). I am adding one way to do this, although to be honest I don't like it very much.

Code:

import pandas as pd   # needed for pd.read_csv below

batch_size = 2

inp1 = tf.placeholder(tf.float32, [None, 1],name="inp1")
inp2 = tf.placeholder(tf.float32, [None, 1],name="inp2")

inp3 = tf.placeholder(tf.float32, [None, 1], name="inp3")
PREVop = tf.Variable(tf.zeros([batch_size, 1]), dtype=tf.float32)
PREVop = tf.assign(PREVop, inp3)

out = tf.add(tf.multiply(PREVop, inp2), inp1)  

inp = pd.read_csv("data.csv", sep=r"\s+")
x_train = inp            # both inp1 and inp2 columns are used below
training_steps = 100

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_res = 0
    for step in range(training_steps):
        total_batch = len(inp)//batch_size
        for i in range(total_batch):
            batch_x = x_train[i*batch_size:min((i+1)*batch_size, len(inp))]

            if step==0 and i==0:
                _, res = sess.run([PREVop, out], feed_dict={inp1: batch_x['inp1'].values.reshape(2, 1), 
                                                            inp2: batch_x['inp2'].values.reshape(2, 1), 
                                                            inp3: batch_x['inp1'].values.reshape(2, 1)+10})   
            else:
                _, res = sess.run([PREVop, out], feed_dict={inp1: batch_x['inp1'].values.reshape(2, 1), 
                                                            inp2: batch_x['inp2'].values.reshape(2, 1), 
                                                            inp3: out_res})
            out_res = res

In the code above, batch_size=2 and I use an extra placeholder inp3. On the very first step it is fed the value inp1+10, which is then assigned to the variable PREVop. That happens only once at the start; from then on (the else branch above) inp3 is fed the previous result, so the value of out is what gets assigned to PREVop.
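If the extra placeholder inp3 and the if/else feel clunky, a possibly cleaner variant is to seed PREVop once with an explicit assign before the loop and let an in-graph assign copy out into it on every step. This is only a sketch under the same assumptions as above (batch_size=2, the data.csv shown in the question) and is not wired into the training graph:

import numpy as np
import pandas as pd
import tensorflow as tf

batch_size = 2

inp1 = tf.placeholder(tf.float32, [None, 1], name="inp1")
inp2 = tf.placeholder(tf.float32, [None, 1], name="inp2")

PREVop = tf.Variable(tf.zeros([batch_size, 1]), dtype=tf.float32)
out = tf.add(tf.multiply(PREVop, inp2), inp1)
update_prev = tf.assign(PREVop, out)   # copies this step's out into PREVop for the next step

inp = pd.read_csv("data.csv", sep=r"\s+")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Seed once: PREVop = inp1 + 10 for the first batch (mirrors dummy_op in the question).
    first = inp["inp1"].values[:batch_size].reshape(batch_size, 1).astype(np.float32)
    sess.run(tf.assign(PREVop, first + 10))
    for i in range(len(inp) // batch_size):
        batch = inp[i * batch_size:(i + 1) * batch_size]
        res, _ = sess.run([out, update_prev],
                          feed_dict={inp1: batch["inp1"].values.reshape(batch_size, 1),
                                     inp2: batch["inp2"].values.reshape(batch_size, 1)})

This keeps the same seed-then-update behaviour but removes the special case for the first step.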