I have built TensorFlow from source and am using its C API. So far everything works fine, and I am also using AVX/AVX2. My TensorFlow build from source was also built with XLA support. I would now like to activate XLA (Accelerated Linear Algebra) as well, because I hope it will improve performance/speed during inference once more.
If I start a run now, I get the following message:
2019-06-17 16:09:06.753737: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1541] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
On the official XLA page (https://www.tensorflow.org/xla/jit) I found information on how to turn on JIT at the session level:
# Config to turn on JIT compilation
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)
This GitHub issue (https://github.com/tensorflow/tensorflow/issues/13853) explains how to set the config via TF_SetConfig in the C API. Using the output of this Python code, I was previously able to limit execution to one core:
config1 = tf.ConfigProto(device_count={'CPU':1})
serialized1 = config1.SerializeToString()
print(list(map(hex, serialized1)))
I implemented it as follows:
uint8_t intra_op_parallelism_threads = maxCores; // for ops that can be parallelized internally, such as matrix multiplication
uint8_t inter_op_parallelism_threads = maxCores; // for ops that are independent in the TensorFlow graph (no directed path between them in the dataflow graph)
uint8_t config[] = {0x10, intra_op_parallelism_threads, 0x28, inter_op_parallelism_threads};
TF_SetConfig(sess_opts, config, sizeof(config), status);
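As far as I understand, these bytes are just a hand-serialized ConfigProto: 0x10 is the protobuf tag for field 2 (intra_op_parallelism_threads) and 0x28 the tag for field 5 (inter_op_parallelism_threads). One caveat (my own note, untested beyond small values): a single byte only encodes varint values up to 127, so a guarded sketch would look like this:

#include <assert.h>
#include <stdint.h>
#include "tensorflow/c/c_api.h"

// Hypothetical helper: writes both thread limits into a hand-serialized
// ConfigProto. Values >= 128 would need multi-byte varints, hence the assert.
static void set_thread_limits(TF_SessionOptions* opts, uint8_t n, TF_Status* status) {
    assert(n < 128 && "single-byte varint only covers 0..127");
    uint8_t config[] = {0x10, n, 0x28, n};  // field 2 = n, field 5 = n
    TF_SetConfig(opts, config, sizeof(config), status);
}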
So I assumed this would help to activate XLA:
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
output = config.SerializeToString()
print(list(map(hex, output)))
And implemented it like this:
uint8_t config[] = {0x52, 0x4, 0x1a, 0x2, 0x28, 0x1};
TF_SetConfig(sess_opts, config, sizeof(config), status);
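For reference, my reading of these bytes against the ConfigProto definition (my own decoding, so treat it as an assumption): 0x52 opens field 10 (graph_options, 4 bytes), 0x1a opens field 3 inside it (optimizer_options, 2 bytes), and 0x28 0x1 sets field 5 (global_jit_level) to 1, i.e. ON_1. A sketch combining this with the thread limits above in one serialized config:

// Sketch (assuming sess_opts, maxCores and status from the snippets above):
// one serialized ConfigProto carrying both the thread limits and the JIT level.
uint8_t combined[] = {
    0x10, maxCores,   // intra_op_parallelism_threads (field 2)
    0x28, maxCores,   // inter_op_parallelism_threads (field 5)
    0x52, 0x4,        // graph_options (field 10), 4 bytes
    0x1a, 0x2,        //   optimizer_options (field 3), 2 bytes
    0x28, 0x1         //     global_jit_level = ON_1 (field 5)
};
TF_SetConfig(sess_opts, combined, sizeof(combined), status);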
But XLA still seems to be deactivated. Can someone help me with this problem? Or, if you take another look at the log:
2019-06-17 16:09:06.753737: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1541] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Does this mean I have to set XLA_FLAGS during the build?
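From the wording of the warning, TF_XLA_FLAGS looks like a runtime environment variable rather than a build-time flag, so one thing that might work (an untested sketch on my part) is setting it from the process itself before the first session is created:

#include <stdlib.h>

// Untested idea: export the variable the warning asks for, before any
// TensorFlow session exists, so the runtime picks it up.
setenv("TF_XLA_FLAGS", "--tf_xla_cpu_global_jit", /*overwrite=*/1);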
Thanks!
Answer 0 (score: 0)
OK, I figured out how to use the XLA JIT: it is only available in the c_api_experimental.h header. Just include this header and then use:
TF_EnableXLACompilation(sess_opts,true);
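For context, a minimal sketch of how this fits into session setup (assuming a build with XLA support; in the source tree the header lives under tensorflow/c/):

#include "tensorflow/c/c_api.h"
#include "tensorflow/c/c_api_experimental.h"

TF_Status* status = TF_NewStatus();
TF_SessionOptions* sess_opts = TF_NewSessionOptions();

// Experimental API: enables global XLA JIT for sessions created from
// these options (the second argument is a plain 0/1 flag).
TF_EnableXLACompilation(sess_opts, 1);

// ... create the graph and TF_NewSession(graph, sess_opts, status) as usual,
// then clean up:
TF_DeleteSessionOptions(sess_opts);
TF_DeleteStatus(status);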
Answer 1 (score: 0)
@tre95 I have tried
#include "c_api_experimental.h"
TF_SessionOptions* options = TF_NewSessionOptions();
TF_EnableXLACompilation(options,true);
but it fails to build with the error collect2: error: ld returned 1 exit status. However, if I leave that call out, it compiles and runs successfully.
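That message comes from the linker, not the compiler, so my guess (unverified) is that the libtensorflow being linked simply does not export the symbol. Checking the shared library, e.g. with

nm -D /path/to/libtensorflow.so | grep TF_EnableXLACompilation

(with /path/to/ replaced by the actual install location) should show whether TF_EnableXLACompilation is present in that build.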