Accumulating gradients in TensorFlow

Time: 2019-06-04 05:57:37

Tags: tensorflow optimization gradient adam batchsize

For those of us without much GPU memory and with large examples, the maximum possible batch size can be 1. For that reason I have been looking into accumulating gradients and then applying the training op (i.e. using a batch size of 1 and adding up the gradients separately).
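To be clear about why I expect this to be equivalent at all: for a loss that is a mean over examples, the average of the per-example gradients equals the gradient over the full batch, so accumulating N batchsize-1 gradients and dividing by N should reproduce one batch of N. A quick numpy sanity check of just that idea (this is only an illustration, not part of my TensorFlow code below; all names here are made up):

import numpy as np

np.random.seed(0)
X_toy = np.random.randn(8, 3)
y_toy = np.random.randn(8, 1)
w_toy = np.random.randn(3, 1)

def mse_grad(Xb, yb, w):
    # gradient of mean((Xb @ w - yb) ** 2) with respect to w
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(Xb)

full_batch = mse_grad(X_toy, y_toy, w_toy)
accumulated = sum(mse_grad(X_toy[i:i+1], y_toy[i:i+1], w_toy)
                  for i in range(len(X_toy))) / len(X_toy)
print(np.allclose(full_batch, accumulated))  # prints True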

I have found similar questions, but I don't really trust the answers. I have run some experiments with the code below, using the accumulation technique from https://github.com/ahmdtaha/FineGrainedVisualRecognition/wiki/Accumulated-Gradient-in-Tensorflow:
import tensorflow as tf 
import numpy as np 
from sklearn.datasets import load_boston

def read_infile():
    data = load_boston()
    features = np.array(data.data)
    target = np.array(data.target)
    return features, target

def feature_normalize(data):
    mu = np.mean(data, axis=0)
    std = np.std(data, axis=0)
    return (data - mu)/std

def append_bias(features, target):
    n_samples = features.shape[0]
    n_features = features.shape[1]
    intercept_feature = np.ones((n_samples, 1))
    X = np.concatenate((features, intercept_feature), axis=1)
    X = np.reshape(X, [n_samples, n_features+1])
    Y = np.reshape(target, [n_samples, 1])
    return X,Y

features, target = read_infile()
z_features = feature_normalize(features)
X_input, Y_input = append_bias(z_features, target)
num_features = X_input.shape[1]

X = tf.placeholder(tf.float32, shape=[None, num_features])
Y = tf.placeholder(tf.float32, shape=[None, 1])

w = tf.Variable(tf.random_normal((num_features, 1)), name="weights")

learning_rate = 0.01
num_epochs = 200
cost_trace = []
mini_batch = 20  # batch size
pred = tf.matmul(X, w)
error = pred - Y
cost = tf.reduce_mean(tf.square(error))

trainable_vars = [w]
accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in trainable_vars]
zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

with tf.control_dependencies(update_ops):
    optimizer = tf.train.AdamOptimizer(learning_rate)
    grads = optimizer.compute_gradients(cost, trainable_vars)
    # Add each variable's gradient to the corresponding accumulator
    # (works because accum_vars and grads are in the same order)
    accum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(grads)]
    # Define the training step (part with variable value update)
    train_op = optimizer.apply_gradients([(accum_vars[i] / float(mini_batch), gv[1]) for i, gv in enumerate(grads)])

init=tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
sess.as_default()
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
print(update_ops)
batches = int(X_input.shape[0]/mini_batch)
for i in range(num_epochs):
    sess.run(zero_ops)
    for b in range(batches):
        acum = sess.run(accum_ops, feed_dict={X: X_input[b*mini_batch:(b+1)*mini_batch, :], Y: Y_input[b*mini_batch:(b+1)*mini_batch, :]})

    # apply the averaged accumulated gradients once per epoch
    sess.run(train_op, feed_dict={X: X_input[b*mini_batch:(b+1)*mini_batch, :], Y: Y_input[b*mini_batch:(b+1)*mini_batch, :]})

    # average the cost over all mini-batches for this epoch
    csum = 0.0
    for b in range(batches):
        c = sess.run(cost, feed_dict={X: X_input[b*mini_batch:(b+1)*mini_batch, :], Y: Y_input[b*mini_batch:(b+1)*mini_batch, :]})
        csum = csum + c
    cost_trace.append(csum/float(batches))

error = sess.run(error, feed_dict={X: X_input, Y: Y_input})
print("MSE TRAINING: " + str(cost_trace[-1]))

import matplotlib.pyplot as plt
plt.plot(cost_trace)
plt.show()

I expected changing the batch size to make a difference, but it is not noticeable. In fact, when I use a batch size of 1, convergence is better than with larger batches. This leaves me with a few questions:

- When I run sess.run(train_op, feed_dict), does that also run accum_ops again and repeat the last example?
- Is this behavior of the cost function correct?
- How can I debug whether a certain node of the graph gets executed when I don't want it to be?
- Is this way of accumulating gradients correct?
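One way I could think of to check the accum_ops question concretely (just a rough sketch against the session and variables defined above): since accum_ops and zero_ops are the only ops that write to accum_vars, reading the accumulators before and after a train_op run shows whether train_op triggers another accumulation:

before = sess.run(accum_vars)
sess.run(train_op, feed_dict={X: X_input[:mini_batch, :], Y: Y_input[:mini_batch, :]})
after = sess.run(accum_vars)
# if every entry prints True, accum_ops did not run as a side effect of train_op
print([np.array_equal(b, a) for b, a in zip(before, after)])

As far as I can tell, the gradients passed to apply_gradients are read from accum_vars rather than recomputed from X and Y, so the feed_dict on train_op should not even be needed, but I would like confirmation.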

Feel free to experiment with changing the batch size and the learning rate to see what I mean.

0 Answers:

There are no answers yet.