Running multiple clones of a model in parallel

Time: 2018-02-16 03:13:17

Tags: python multithreading tensorflow keras evolutionary-algorithm

So I am trying to implement a reinforcement learning algorithm using an Evolution Strategy.

The principle is to clone the original model N times (say 100), apply some noise to each of the 100 clones, run them, check which ones perform best, and use that to update the original model.
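
Roughly speaking, the update step of a vanilla Evolution Strategy for a single weight tensor looks like this (only a sketch for reference; alpha, rewards and noises are illustrative names, not from my code below):

import numpy as np

def es_update(theta, noises, rewards, alpha, sigma):
    # Rank the clones: better-performing ones pull the update harder
    rewards = np.asarray(rewards, dtype=np.float64)
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Weighted sum of the noise each clone used, scaled by its advantage
    step = np.sum([a * n for a, n in zip(advantages, noises)], axis=0)
    return theta + alpha / (len(noises) * sigma) * step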

Right now I am trying to put each clone in a different thread and run them all in parallel.

Here is my worker class:

from threading import Thread

import numpy as np


class WorkerThread(Thread):

    def __init__(self, action_dim, img_dim, sigma, sess):
        Thread.__init__(self)
        #sess = tf.Session()
        # Each worker builds its own ActorNetwork and Environment,
        # sharing the TF session that is passed in.
        self.actor = ActorNetwork(sess, action_dim, img_dim)
        self.env = Environment()
        self.reward = 0
        self.N = {}
        self.original_model = None
        self.sigma = sigma

    def setActorModel(self, model):
        self.original_model = model

    def run(self):
        k = 0
        for l in self.actor.model.layers:
            if len(np.array(l.get_weights())) > 0:
                # First generate some noise
                shape = (np.array(l.get_weights()[0])).shape
                if len(shape) == 2:
                    # Dense layer kernel
                    self.N[k] = np.random.randn(shape[0], shape[1])
                else:
                    # Conv2D layer kernel (4-D)
                    self.N[k] = np.random.randn(shape[0], shape[1], shape[2], shape[3])
                # 2nd set weights using original model's weights and noise
                la = self.original_model.layers[k]
                self.actor.model.layers[k].set_weights((la.get_weights()[0] + self.sigma * self.N[k], la.get_weights()[1]))

            k += 1

        # Run one episode with the perturbed weights and record the reward
        ob = self.env.reset()

        while True:
            action = self.actor.predict(np.reshape(ob['image'], (1, 480, 480, 3)))
            ob = self.env.step(action[0])

            if ob['done']:
                self.reward = ob['reward']
                break

So each worker thread has its own model, and when it runs it sets that model's weights from the original model's weights plus noise.
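
To give some context, the clones are started and collected roughly like this (only a sketch of the driver loop, which I have omitted above; original_actor is the network holding the original model):

# Hypothetical driver loop: spawn the clones, let each run one episode,
# then collect their rewards to update the original model.
workers = [WorkerThread(action_dim, img_dim, sigma, sess) for _ in range(100)]
for w in workers:
    w.setActorModel(original_actor.model)
    w.start()
for w in workers:
    w.join()
rewards = [w.reward for w in workers]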

At this point I get the following error:

  File "/usr/local/lib/python3.6/site-packages/keras/engine/topology.py", line 1219, in set_weights
    K.batch_set_value(weight_value_tuples)
  File "/usr/local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2365, in batch_set_value
    assign_op = x.assign(assign_placeholder)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 594, in assign
    return state_ops.assign(self._variable, value, use_locking=use_locking)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 276, in assign
    validate_shape=validate_shape)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 59, in assign
    use_locking=use_locking, name=name)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 350, in _apply_op_helper
    g = ops._get_graph_from_inputs(_Flatten(keywords.values()))
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 5055, in _get_graph_from_inputs
    _assert_same_graph(original_graph_element, graph_element)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 4991, in _assert_same_graph
    original_item))
ValueError: Tensor("Placeholder:0", shape=(5, 5, 3, 24), dtype=float32) must be from the same graph as Tensor("conv2d_11/kernel:0", shape=(5, 5, 3, 24), dtype=float32_ref).

In the code sample above I use the same TensorFlow session in all the threads. I also tried creating a separate session for each thread, but I got the same error.

I know very little about TensorFlow. Does anyone know how to solve this?

1 answer:

Answer 0 (score: 0)

You need to use the same graph in all threads. Create a tf.Graph() in the main thread and wrap each per-thread function in "with my_graph.as_default():".
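
A minimal sketch of that suggestion (illustrative only; it assumes the original model and every ActorNetwork are built under my_graph as well):

import tensorflow as tf
from threading import Thread

# Main thread: one graph and one session shared by all workers.
my_graph = tf.Graph()
with my_graph.as_default():
    sess = tf.Session(graph=my_graph)
    # Build the original model and each worker's ActorNetwork here,
    # so all of their variables live in the same graph.

class WorkerThread(Thread):
    def run(self):
        # Anything that creates ops (set_weights, predict, ...) must execute
        # with the shared graph as this thread's default graph.
        with my_graph.as_default():
            pass  # body of the original run() goes here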