I built an object detection model with Faster R-CNN and was able to generate a Frozen_Graph.pb file. Now I am trying to convert that .pb file to a TFLite file so I can use it on Android, but I am running into problems with the conversion, since it requires the input and output tensors.
I cannot figure out the correct input and output arrays to pass. Even when I pass the input tensor as image_tensor, it raises an error stating:
ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor:0' has invalid shape '[None, None, None, 3]'
Below is the code I used for the conversion:
import tensorflow as tf

graph_def_file = "/models/mobilenet_thin_model.pb"
input_arrays = ["image_tensor"]
output_arrays = ["Softmax"]

converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("/models/converted_model.tflite", "wb").write(tflite_model)
Answer 0 (score: 0)
You have to open the graph and parse it, then print every node to find the usable names.
import tensorflow as tf

# Load the frozen graph and print the name of every node.
graph = tf.GraphDef()
with open("Model.pb", "rb") as f:
    graph.ParseFromString(f.read())

for layer in graph.node:
    print(layer.name)
Or just grab the last one directly:

print(graph.node[-1].name)
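If the names are correct but the converter still rejects the dynamic shape (the ValueError above), from_frozen_graph also takes an input_shapes argument that pins the placeholder to a fixed size. A minimal sketch, assuming a 300x300 input; the actual size is whatever your model expects:

import tensorflow as tf

graph_def_file = "/models/mobilenet_thin_model.pb"
# Pin the dynamic [None, None, None, 3] placeholder to a concrete shape.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file,
    input_arrays=["image_tensor"],
    output_arrays=["Softmax"],
    input_shapes={"image_tensor": [1, 300, 300, 3]},  # 300x300 is an assumption
)
tflite_model = converter.convert()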
By the way, how did you integrate TensorFlow into your app? With import org.tensorflow.lite.Interpreter you don't need the names at all; you simply call run with the input and output buffers.
My example, based on the TensorFlow sample implementation:
private ByteBuffer _mInput;
private float[][] _mOutput;

// Configure the interpreter: one thread, NNAPI acceleration enabled.
Interpreter.Options _mTfliteOptions = new Interpreter.Options();
_mTfliteOptions.setNumThreads(1);
_mTfliteOptions.setUseNNAPI(true);

// 4 bytes per float, one image of Width x Height x Channels.
_mInput = ByteBuffer.allocateDirect(4 * 1 * Width * Height * Channels);
_mInput.order(ByteOrder.nativeOrder());

// One row of scores, one entry per class label.
_mOutput = new float[1][_mClassLabels.size()];

_mTfLite = new Interpreter(loadModelFile(getActivity().getAssets(), "Model.tflite"), _mTfliteOptions);
_mTfLite.run(_mInput, _mOutput);
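The input buffer has to be filled with pixel data before run is called. A hypothetical helper, not part of the original answer, assuming a float RGB input normalized to [0, 1]:

// Hypothetical helper: copy a Bitmap's pixels into the input buffer.
// Assumes Channels == 3 and a float input scaled to [0, 1]; adjust to your model.
private void fillInputBuffer(Bitmap bitmap)
{
    _mInput.rewind();
    int[] pixels = new int[Width * Height];
    bitmap.getPixels(pixels, 0, Width, 0, 0, Width, Height);
    for (int pixel : pixels)
    {
        _mInput.putFloat(((pixel >> 16) & 0xFF) / 255.0f); // red
        _mInput.putFloat(((pixel >> 8) & 0xFF) / 255.0f);  // green
        _mInput.putFloat((pixel & 0xFF) / 255.0f);         // blue
    }
}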
// Memory-map the model from assets so it is not copied onto the Java heap.
private MappedByteBuffer loadModelFile(AssetManager Manager, String Path) throws IOException
{
    AssetFileDescriptor fileDescriptor = Manager.openFd(Path);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
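After run returns, _mOutput[0] holds one score per entry in _mClassLabels. A short sketch of reading the top prediction; the get call assumes _mClassLabels is a List<String>:

// Find the index of the highest score produced by run().
int best = 0;
for (int i = 1; i < _mOutput[0].length; i++)
{
    if (_mOutput[0][i] > _mOutput[0][best])
    {
        best = i;
    }
}
String predictedLabel = _mClassLabels.get(best); // assumes List<String>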