TensorFlow: updating weights with multiprocessing

Posted: 2018-08-01 15:09:02

Tags: python tensorflow

I defined a network in which each name scope holds the weights for one process, and each process is assigned its corresponding weights. Here is my demo code:

from multiprocessing import Process

import tensorflow as tf


def init_network(name):
    # Create one variable per name scope; the scope name doubles as its initial value.
    with tf.name_scope(name):
        x = tf.Variable(int(name))
        return x


def f(name, sess):
    # Look up the variable belonging to this scope, read it, then update it.
    print('step into f()')
    vars = tf.trainable_variables(name)
    print(sess.run(vars[0]))
    sess.run(vars[0].assign(int(name) + 10))


if __name__ == '__main__':
    sess = tf.Session()
    x1 = init_network('1')
    x2 = init_network('2')
    sess.run(tf.global_variables_initializer())
    # Try to share the same Session object across two worker processes.
    p1 = Process(target=f, args=('1', sess))
    p2 = Process(target=f, args=('2', sess))

    p1.start()
    p2.start()

    p1.join()
    p2.join()
    print(sess.run([x1, x2]))

The demo code gets stuck. It seems that sess cannot be shared across different processes. How can I update the weights in a multiprocessing setting?

1 Answer:

Answer 0 (score: 1)

After googling for a while, I found that multiprocessing does not play well with TensorFlow, so I switched to threading instead:

from threading import Thread

import tensorflow as tf

def init_network(name):
    # Create one variable per name scope; the scope name doubles as its initial value.
    with tf.name_scope(name):
        x = tf.Variable(int(name))
        return x

def f(name, sess):
    # The default session and default graph are thread-local, so re-enter them here.
    with sess.as_default(), sess.graph.as_default():
        print('step into f()')
        vars = tf.trainable_variables(name)
        print(vars)
        sess.run(vars[0].assign(int(name) + 10))
        print(sess.run(vars[0]))


if __name__ == '__main__':
    sess = tf.Session()
    coord = tf.train.Coordinator()

    x1 = init_network('1')
    x2 = init_network('2')
    sess.run(tf.global_variables_initializer())
    print(sess.run([x1, x2]))

    # Threads share the process, so the same Session object can be used in both workers.
    p1 = Thread(target=f, args=('1', sess))
    p2 = Thread(target=f, args=('2', sess))
    p1.start()
    p2.start()
    coord.join([p1, p2])
    print(sess.run([x1, x2]))

It works now. The default session is a property of the current thread. If you create a new thread and want to use the default session in that thread, you must explicitly add a with sess.as_default(): inside that thread's function. You must also explicitly enter a with sess.graph.as_default(): block to make sess.graph the default graph.
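To make the thread-local behaviour concrete, here is a minimal sketch (assuming TensorFlow 1.x): inside the main thread's with sess.as_default(): block, tf.get_default_session() returns sess, but a freshly spawned thread sees None until it re-enters the session and graph contexts itself.

from threading import Thread

import tensorflow as tf

sess = tf.Session()

def worker():
    # A new thread does not inherit the main thread's default session.
    print(tf.get_default_session())               # None
    with sess.as_default(), sess.graph.as_default():
        # After re-entering the contexts, the session is visible again.
        print(tf.get_default_session() is sess)   # True

with sess.as_default():
    print(tf.get_default_session() is sess)       # True in the main thread
    t = Thread(target=worker)
    t.start()
    t.join()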

tf.train.Coordinator makes it convenient to join the threads, but you can also join them with the plain thread.join() method.
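For example, a minimal variant without the Coordinator (reusing f, sess, x1 and x2 from the answer above) simply joins each thread directly:

p1 = Thread(target=f, args=('1', sess))
p2 = Thread(target=f, args=('2', sess))
p1.start()
p2.start()
p1.join()    # wait for each worker thread directly instead of coord.join([...])
p2.join()
print(sess.run([x1, x2]))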