TensorFlow: 7x slowdown in eager mode

Asked: 2019-01-29 19:43:29

Tags: python tensorflow optimization

After implementing a simple minimization task in TensorFlow (fitting optimal parameters for a hard-sigmoid approximation), I decided to convert it from graph mode to eager execution. To my surprise, it takes much longer to run in eager mode.

Here is the code.

Graph-mode code:

import tensorflow as tf
from time import time

beg = time()
a = tf.Variable(-10, name='a', dtype=tf.float32)
b = tf.Variable(10, name='b', dtype=tf.float32)

def g(x):
    return tf.clip_by_value( (x-a)/(b-a), 0, 1)

X = tf.lin_space(-20., 20., 2000)
loss = tf.reduce_sum( tf.square( tf.math.sigmoid(X) - g(X)))
opt = tf.train.AdamOptimizer(learning_rate=1e-3)
train_op = opt.minimize( loss)
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)

    for _ in range( int(1e4)):
        sess.run( train_op)

print( 'Non-eager run in %.1f seconds' %(time()-beg))

This prints Non-eager run in 3.5 seconds.

Eager-mode code:

import tensorflow as tf
from time import time

tf.enable_eager_execution()

beg = time()

a = tf.Variable(-10, name='a', dtype=tf.float32)
b = tf.Variable(10, name='b', dtype=tf.float32)

def g(x):
    return tf.clip_by_value( (x-a)/(b-a), 0, 1)

X = tf.lin_space(-20., 20., 2000)

opt = tf.train.AdamOptimizer(learning_rate=1e-3)

for _ in range( int(1e4)):
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum( tf.square( tf.math.sigmoid(X) - g(X)))
        grads = tape.gradient(loss, [a,b])
    opt.apply_gradients(zip(grads, [a,b]), global_step=tf.train.get_or_create_global_step())
print( 'Eager run in %.1f seconds' %(time()-beg))

This prints Eager run in 20.9 seconds.

I'd bet my eager code is suboptimal, and since TensorFlow seems to be moving toward eager execution in its next major release, I'd like to know how to optimize this code so that its performance is at least on par with the first version.

1 answer:

Answer 0 (score: 2):

In TensorFlow 2.0 your code would look like this (note that you can already try the TensorFlow 2.0 nightly build: https://pypi.org/project/tf-nightly-2.0-preview/):

import tensorflow as tf
from time import time

tf.enable_eager_execution()

beg = time()


@tf.function
def train():
    a = tf.Variable(-10, name='a', dtype=tf.float32)
    b = tf.Variable(10, name='b', dtype=tf.float32)

    def g(x):
        return tf.clip_by_value((x - a) / (b - a), 0, 1)

    X = tf.lin_space(-20., 20., 2000)
    opt = tf.train.AdamOptimizer(learning_rate=1e-3)

    for _ in range(int(1e4)):
        with tf.GradientTape() as tape:
            loss = tf.reduce_sum(tf.square(tf.math.sigmoid(X) - g(X)))
            grads = tape.gradient(loss, [a, b])
        opt.apply_gradients(
            zip(grads, [a, b]),
            global_step=tf.train.get_or_create_global_step())


train()
print('Eager run in %.1f seconds' % (time() - beg))

Note that @tf.function and its underlying tf.contrib.eager.defun and autograph (available in 1.12 and later) are still under active development and experimental, so the current implementation is a bit rough around the edges. If it fails to run or runs slower, it may be worth filing an issue on GitHub.

In the 2.0 release, @tf.function will combine the advantages of defun and autograph.
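As an aside, a pattern that became idiomatic once TF 2.x stabilized (my own sketch, not part of the original answer) is to decorate only the per-step computation with @tf.function, so that the step body is traced into a graph once and then reused, while the outer loop stays in plain Python. The sketch below also assumes the TF 2.x names tf.linspace and tf.keras.optimizers.Adam in place of the 1.x tf.lin_space and tf.train.AdamOptimizer:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

a = tf.Variable(-10.0, name='a')
b = tf.Variable(10.0, name='b')
X = tf.linspace(-20.0, 20.0, 2000)
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function  # the step is traced into a graph on the first call, then reused
def train_step():
    with tf.GradientTape() as tape:
        g = tf.clip_by_value((X - a) / (b - a), 0, 1)
        loss = tf.reduce_sum(tf.square(tf.math.sigmoid(X) - g))
    # Take the gradient outside the tape context so the tape is released promptly.
    grads = tape.gradient(loss, [a, b])
    opt.apply_gradients(zip(grads, [a, b]))
    return loss

for _ in range(1000):
    loss = train_step()
```

Because variables, data, and the optimizer are created outside the decorated function, retracing is avoided on every call, which is where most of the eager-mode overhead in the question's loop comes from.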