TensorFlow: Android demo accuracy

Date: 2017-01-08 02:27:47

Tags: tensorflow

What related GitHub issues or Stack Overflow threads did you find by searching the web for your problem?

I searched #1269 and #504.

Environment info

Mac OS for building, and Android version 5 for running the .apk demo.

If possible, provide a minimal reproducible example (we usually don't have time to read hundreds of lines of code)

I followed the steps mentioned in #1269 and was able to run the example successfully, but the accuracy of the results is very low and it is often wrong. I trained my model on 25 different everyday items, such as soap, soup, noodles, etc. When I run the same model with the following script, it gives me very high accuracy (around 90-95%):

import sys
import tensorflow as tf
# change this as you see fit
image_path = sys.argv[1]

# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()

# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line 
                   in tf.gfile.GFile("/tf_files/retrained_labels.txt")]

# Unpersists graph from file
with tf.gfile.FastGFile("/tf_files/retrained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Feed the image_data as input to the graph and get first prediction
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')

    predictions = sess.run(softmax_tensor, \
             {'DecodeJpeg/contents:0': image_data})

    # Sort to show labels of first prediction in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]

    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))

The only difference I can see is that the model file used in the Android demo is stripped, because Android does not support DecodeJpeg, whereas the script above uses the regular, unstripped model. Is there a specific reason for this, or where am I going wrong?
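One way to check whether the accuracy gap comes from preprocessing rather than from the stripping itself is to feed the graph a manually decoded, resized and normalized image, the way the Android demo does, instead of raw JPEG bytes through DecodeJpeg/contents. The sketch below is only an illustration of that idea and is not from the original post; the input node name 'Mul:0', the 299x299 input size and the mean/std value of 128 are assumptions taken from the Inception v3 retraining tutorial and may need adjusting for your graph.

# Hedged sketch: classify by feeding a decoded, resized, normalized image
# into the graph's image input ('Mul:0' is assumed here), mimicking what the
# Android demo does instead of using DecodeJpeg/contents.
import numpy as np
import tensorflow as tf

GRAPH_PATH = "/tf_files/retrained_graph.pb"      # assumed path
LABELS_PATH = "/tf_files/retrained_labels.txt"   # assumed path

def classify(image_path):
    # Load labels and the retrained graph.
    labels = [line.rstrip() for line in tf.gfile.GFile(LABELS_PATH)]
    with tf.gfile.FastGFile(GRAPH_PATH, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    with tf.Session() as sess:
        # Decode and preprocess the same way the retraining pipeline did;
        # a mismatch here is a common cause of a large accuracy drop on device.
        jpeg_bytes = tf.gfile.FastGFile(image_path, 'rb').read()
        decoded = tf.image.decode_jpeg(jpeg_bytes, channels=3)
        resized = tf.image.resize_images(tf.to_float(decoded), [299, 299])
        normalized = sess.run((resized - 128.0) / 128.0)

        # Feed the preprocessed image into the assumed input node 'Mul:0'.
        softmax = sess.graph.get_tensor_by_name('final_result:0')
        predictions = sess.run(softmax,
                               {'Mul:0': np.expand_dims(normalized, 0)})
        for node_id in predictions[0].argsort()[::-1]:
            print('%s (score = %.5f)' % (labels[node_id],
                                         predictions[0][node_id]))

If this path shows the same accuracy drop as the phone, the preprocessing (resize method, mean/std) is the likely culprit rather than the stripped graph itself.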

I also tried using optimize_for_inference,

but unfortunately it fails with the following error:

[milinddeore@P028: ~/tf/tensorflow ] bazel-bin/tensorflow/python/tools/optimize_for_inference --input=/Users/milinddeore/tf_files_nm/retrained_graph.pb --output=/Users/milinddeore/tf/tensorflow/tensorflow/examples/android/assets/tf_ul_stripped_graph.pb --input_names=DecodeJpeg/content —-output_names=final_result
Traceback (most recent call last):
  File "/Users/milinddeore/tf/tensorflow/bazel-bin/tensorflow/python/tools/optimize_for_inference.runfiles/org_tensorflow/tensorflow/python/tools/optimize_for_inference.py", line 141, in <module>
    app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/Users/milinddeore/tf/tensorflow/bazel-bin/tensorflow/python/tools/optimize_for_inference.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/Users/milinddeore/tf/tensorflow/bazel-bin/tensorflow/python/tools/optimize_for_inference.runfiles/org_tensorflow/tensorflow/python/tools/optimize_for_inference.py", line 90, in main
    FLAGS.output_names.split(","), FLAGS.placeholder_type_enum)
  File "/Users/milinddeore/tf/tensorflow/bazel-bin/tensorflow/python/tools/optimize_for_inference.runfiles/org_tensorflow/tensorflow/python/tools/optimize_for_inference_lib.py", line 91, in optimize_for_inference
    placeholder_type_enum)
  File "/Users/milinddeore/tf/tensorflow/bazel-bin/tensorflow/python/tools/optimize_for_inference.runfiles/org_tensorflow/tensorflow/python/tools/strip_unused_lib.py", line 71, in strip_unused
    output_node_names)
  File "/Users/milinddeore/tf/tensorflow/bazel-bin/tensorflow/python/tools/optimize_for_inference.runfiles/org_tensorflow/tensorflow/python/framework/graph_util_impl.py", line 141, in extract_sub_graph
    assert d in name_to_node_map, "%s is not in graph" % d
AssertionError:  is not in graph

I suspect this problem is because Android does not parse DecodeJpeg, but please correct me if I'm wrong.
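One detail stands out in the command above (my observation, not part of the original report): the last flag is written as "—-output_names" with an em-dash rather than "--output_names", so it is probably never parsed, output_names stays empty, and the blank name in "AssertionError:  is not in graph" is consistent with that; the input node is also spelled DecodeJpeg/content rather than DecodeJpeg/contents. As an alternative, the same step can be done directly from Python with optimize_for_inference_lib, bypassing the bazel wrapper. The sketch below is only an illustration; using 'Mul' as the input node (so that DecodeJpeg, which the Android runtime cannot run, is stripped out) and 'final_result' as the output are assumptions based on the Inception v3 retraining tutorial.

# Hedged sketch: run optimize_for_inference through its Python library
# instead of the bazel-built CLI. Input/output node names are assumptions.
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow.python.tools import optimize_for_inference_lib

INPUT_GRAPH = "/Users/milinddeore/tf_files_nm/retrained_graph.pb"
OUTPUT_GRAPH = ("/Users/milinddeore/tf/tensorflow/tensorflow/examples/"
                "android/assets/tf_ul_stripped_graph.pb")

# Load the retrained graph.
graph_def = tf.GraphDef()
with tf.gfile.FastGFile(INPUT_GRAPH, 'rb') as f:
    graph_def.ParseFromString(f.read())

# Keep only the subgraph between 'Mul' and 'final_result' and clean it up,
# so the JPEG-decoding ops never reach the Android runtime.
optimized_def = optimize_for_inference_lib.optimize_for_inference(
    graph_def,
    ['Mul'],            # assumed image input node of the retrained graph
    ['final_result'],   # retrained softmax output
    dtypes.float32.as_datatype_enum)

with tf.gfile.FastGFile(OUTPUT_GRAPH, 'wb') as f:
    f.write(optimized_def.SerializeToString())

The resulting tf_ul_stripped_graph.pb can then be placed in the Android assets folder as before.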

What other attempted solutions have you tried?

Yes, the script above, which gives me fairly high accuracy results.

1 answer:

Answer 0: (score: 0)

Well, the reasons for the poor accuracy are the following:

I ran this sample code on a Lenovo Vibe K5 phone (which has a Snapdragon 415), and the demo is not compiled for the Hexagon DSP. The DSP on the 415 is quite old compared with the 835 (Hexagon 682 DSP), and I am actually not sure whether the Hexagon SDK even works with the 415; I have not tried it. This means the example runs on the CPU, first detecting motion and then classifying the frames, which leads to poor performance:

  1. Low FPS: frames are captured very slowly, so moving objects are very hard to classify.
  2. As a result, if the captured frame is poor, the prediction will very likely be poor as well.
  3. Camera capture plus classification takes a long time, so with that latency it is far from real time.