Why doesn't Executor.asCoroutineDispatcher work like newFixedThreadPoolContext?

Asked: 2018-12-19 16:28:31

Tags: multithreading kotlin kotlinx.coroutines

I thought the two lines below were equivalent in execution.

But when I use "context2" in the code below, it works as expected, whereas "context1" behaves like a single thread:

val context1 = Executors.newFixedThreadPool(2).asCoroutineDispatcher()
val context2 = newFixedThreadPoolContext(2, "Fixed")

Expected: the two threads run in parallel, so the code should finish in about 4 seconds (four 2-second sleeps, executed two at a time).

Actual: only one thread does the work, the repeated blocks run one after another, and the code takes 8 seconds to complete.

Is this a bug?

Or what does this passage from the documentation, found here, mean?

"If you need a completely separate thread pool with a scheduling policy that is based on the standard JDK executors, use the following expression: Executors.newFixedThreadPool().asCoroutineDispatcher()."
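
For reference, a minimal sketch of the pattern that documentation describes might look like the following (assuming kotlinx-coroutines is on the classpath; asCoroutineDispatcher() returns an ExecutorCoroutineDispatcher, which implements Closeable, so use { } shuts the underlying JDK pool down when the block completes):

import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun main() = runBlocking {
    // Wrap a standard JDK fixed thread pool as a coroutine dispatcher.
    // `use { }` closes the dispatcher (and shuts the pool down) when the block exits.
    Executors.newFixedThreadPool(2).asCoroutineDispatcher().use { pool ->
        withContext(pool) {
            println("running on ${Thread.currentThread().name}")
        }
    }
}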

1 answer:

Answer 0 (score: 0):

After working through the full code example:

import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun log(msg: String) = println("[${Thread.currentThread().name}] $msg")


fun main() = runBlocking {
    val context1 = Executors.newFixedThreadPool(2).asCoroutineDispatcher()
    val context2 = newFixedThreadPoolContext(2, "Fixed") // defined for comparison; the test below only exercises context1

    repeat(4) {
        launch {
            withContext(context1) {
                log("Start sleep $it")
                Thread.sleep(2000)
                log("Finished sleep $it")
            }
        }
    }

//    context1.close()  // uncommenting this reproduces the problem: close() runs before the coroutines are dispatched onto context1
}

I found that the problem was with "context1.close()". If I comment out "context1.close()", it works correctly. My guess is that the "launch" calls do not block, so "context1.close()" executes before the "withContext" blocks run on the other threads. I would have expected that to cause an error, but it seems to just make everything run on a single thread.
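
Based on that explanation, one way to shut the dispatcher down safely is to wait for the launched coroutines before calling close(). The sketch below is a variation of the example above (not the original code): it launches directly on context1, keeps the returned Jobs, and only closes the dispatcher after joinAll() has completed. With two pool threads, the four 2-second sleeps should then finish in roughly 4 seconds.

import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun log(msg: String) = println("[${Thread.currentThread().name}] $msg")

fun main() = runBlocking {
    val context1 = Executors.newFixedThreadPool(2).asCoroutineDispatcher()

    // Keep the Jobs so we can wait for them before shutting the dispatcher down.
    val jobs = List(4) { i ->
        launch(context1) {
            log("Start sleep $i")
            Thread.sleep(2000)
            log("Finished sleep $i")
        }
    }

    jobs.joinAll()   // suspend until every coroutine running on context1 has finished
    context1.close() // now it is safe to shut down the underlying executor
}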