I watched the Mobile and Embedded TensorFlow (TensorFlow Dev Summit 2017) video on YouTube, here: video link.
In the video I learned about some features for shrinking the TensorFlow library file size on Android, such as selective registration.
I found this:
"""Prints a header file to be used with SELECTIVE_REGISTRATION.
Example usage:
print_selective_registration_header \
--graphs=path/to/graph.pb > ops_to_register.h
Then when compiling tensorflow, include ops_to_register.h in the include
search path and pass -DSELECTIVE_REGISTRATION - see
core/framework/selective_registration.h for more details.
"""
The *.pb file is my own model, and this is the ops_to_register.h file I got:
#ifndef OPS_TO_REGISTER
#define OPS_TO_REGISTER
constexpr inline bool ShouldRegisterOp(const char op[]) {
return false
|| (strcmp(op, "Add") == 0)
|| (strcmp(op, "Const") == 0)
|| (strcmp(op, "Conv2D") == 0)
|| (strcmp(op, "Exp") == 0)
|| (strcmp(op, "Identity") == 0)
|| (strcmp(op, "Max") == 0)
|| (strcmp(op, "MaxPool") == 0)
|| (strcmp(op, "NoOp") == 0)
|| (strcmp(op, "Placeholder") == 0)
|| (strcmp(op, "RealDiv") == 0)
|| (strcmp(op, "Relu") == 0)
|| (strcmp(op, "Reshape") == 0)
|| (strcmp(op, "Sub") == 0)
|| (strcmp(op, "Sum") == 0)
|| (strcmp(op, "_Recv") == 0)
|| (strcmp(op, "_Send") == 0)
;
}
#define SHOULD_REGISTER_OP(op) ShouldRegisterOp(op)
const char kNecessaryOpKernelClasses[] = ","
"BinaryOp< CPUDevice, functor::add<float>>,"
"ConstantOp,"
"Conv2DOp<CPUDevice, float>,"
"UnaryOp< CPUDevice, functor::exp<float>>,"
"IdentityOp,"
"ReductionOp<CPUDevice, float, Eigen::internal::MaxReducer<float>>,"
"MaxPoolingOp<CPUDevice, float>,"
"NoOp,"
"PlaceholderOp,"
"BinaryOp< CPUDevice, functor::div<float>>,"
"ReluOp<CPUDevice, float>,"
"ReshapeOp,"
"BinaryOp< CPUDevice, functor::sub<float>>,"
"ReductionOp<CPUDevice, float, Eigen::internal::SumReducer<float>>,"
"RecvOp,"
"SendOp,"
;
#define SHOULD_REGISTER_OP_KERNEL(clz) (strstr(kNecessaryOpKernelClasses, "," clz ",") != nullptr)
#define SHOULD_REGISTER_OP_GRADIENT false
#endif
I put ops_to_register.h in the tensorflow/tensorflow/core/framework directory and defined SELECTIVE_REGISTRATION in selective_registration.h.
Then I ran:
bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cpu=armeabi-v7a --verbose_failures
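(Alternatively, following the docstring above, the define can be passed on the compiler command line instead of editing selective_registration.h; a sketch, assuming --copt is forwarded to the Android crosstool:)

bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so --copt=-DSELECTIVE_REGISTRATION --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cpu=armeabi-v7a --verbose_failures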
In my Android project I used libtensorflow_inference.so to run my .pb model, but it failed with:
native: tensorflow_inference_jni.cc:145 Could not create TensorFlow graph: Invalid argument: No OpKernel was registered to support Op 'Add' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels>
[[Node: add_1 = Add[T=DT_FLOAT](Conv2D, Reshape)]]
Answer 0 (score: 0):
This error is caused by a bug in this TensorFlow branch; the problem is easy to solve.
Change BinaryOp< CPUDevice, functor::div<float>> to BinaryOp<CPUDevice, functor::div<float>>; the only change is removing the space after 'BinaryOp<'.
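Why the stray space matters (a minimal sketch of my own, not the actual TensorFlow source; the exact class name string produced by the kernel registration macros is an assumption here): SHOULD_REGISTER_OP_KERNEL looks the class name up with a plain strstr against the comma-delimited kNecessaryOpKernelClasses string, so each entry must match the registered name character for character, and an extra space makes the lookup fail so the kernel is silently left out of the build.

#include <cstdio>
#include <cstring>

// Shortened copy of the list from ops_to_register.h, keeping only the
// problematic entry with the stray space after '<'.
const char kNecessaryOpKernelClasses[] = ","
    "BinaryOp< CPUDevice, functor::div<float>>,";

// Same lookup macro as in the generated header.
#define SHOULD_REGISTER_OP_KERNEL(clz) \
  (strstr(kNecessaryOpKernelClasses, "," clz ",") != nullptr)

int main() {
  // Assumed spelling of the name the registration code actually queries
  // (no space after '<'): the lookup misses, so the kernel is not built in.
  printf("%d\n", SHOULD_REGISTER_OP_KERNEL("BinaryOp<CPUDevice, functor::div<float>>"));   // 0
  // Only the spelling with the extra space would match the list entry.
  printf("%d\n", SHOULD_REGISTER_OP_KERNEL("BinaryOp< CPUDevice, functor::div<float>>"));  // 1
  return 0;
}

Presumably the same edit applies to the other entries that contain the extra space (the add, exp, and sub lines); the error in the question is about the 'Add' kernel.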