How can I load a frozen TensorFlow model in Android without an out-of-memory error?

Asked: 2018-12-26 20:41:35

Tags: android opencv tensorflow

I want to run a frozen TensorFlow model in Android without hitting an out-of-memory error. Can someone help me deal with this?

import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Color;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import org.opencv.android.OpenCVLoader;
import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class MainActivity extends AppCompatActivity {

    static {
        System.loadLibrary("tensorflow_inference");
    }

    static {
        OpenCVLoader.initDebug();
    }

    private TensorFlowInferenceInterface inferenceInterface;
    private static final String MODEL_FILE = "file:///android_asset/final3.pb";
    private static AssetManager assetManager;
    private static final String INPUT_NODE = "input:0"; // input tensor name
    private static final String OUTPUT_NODE = "embeddings:0"; // output tensor name
    private static final String[] OUTPUT_NODES = {"embeddings:0"};
    private static final int OUTPUT_SIZE = 3; // length of the output vector fetched from "embeddings:0"

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        assetManager = getAssets();
        inferenceInterface = new TensorFlowInferenceInterface(assetManager, MODEL_FILE);

        Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.test_side);
        Mat mat = new Mat(bmp.getHeight(), bmp.getWidth(), org.opencv.core.CvType.CV_8UC4); // Mat takes (rows, cols), i.e. height first
        Bitmap bmp32 = bmp.copy(Bitmap.Config.ARGB_8888, true);
        Utils.bitmapToMat(bmp32, mat);
        float[] arr = getResult(getPixels(bmp32),bmp32);
        for (float result:arr)
            System.out.println(result);

    }

    public float[] getPixels(Bitmap bitmap){

        float[] result = new float[bitmap.getHeight() * bitmap.getWidth() * 3]; // one float per R, G, B channel
        int pixelCount = 0;

        for (int y = 0; y < bitmap.getHeight(); y++)
        {
            for (int x = 0; x < bitmap.getWidth(); x++)
            {
                int c = bitmap.getPixel(x, y); // getPixel takes (x, y), not (y, x)
                result[pixelCount] = Color.red(c);
                pixelCount++;
                result[pixelCount] = Color.green(c);
                pixelCount++;
                result[pixelCount] = Color.blue(c);
                pixelCount++;
            }
        }
        return result;
    }

    public float[] getResult(float[] input_signal,Bitmap bitmap) {

        float[] result = new float[OUTPUT_SIZE]; // buffer for the fetched output tensor
        inferenceInterface.feed(INPUT_NODE, input_signal, 1, bitmap.getHeight(), bitmap.getWidth(), 3); // NHWC: batch, height, width, channels
        inferenceInterface.feed("phase_train", new boolean[]{false});
        inferenceInterface.run(OUTPUT_NODES);
        inferenceInterface.fetch(OUTPUT_NODE, result);
        return result;
    }
}
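One subtle point in the `getPixels` loop: `Bitmap.getPixel` takes `(x, y)`, and the NHWC input array must be filled row by row. Below is a minimal plain-Java sketch of the same flattening logic, with packed ARGB `int`s standing in for `Bitmap.getPixel` so it has no Android dependency (the class name `PixelFlatten` and the helper methods are my own, not from the post):

```java
import java.util.Arrays;

public class PixelFlatten {
    // Extract R, G, B from a packed ARGB pixel (same bit layout android.graphics.Color uses).
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    // Flatten a height x width image of packed ARGB ints into an NHWC float array.
    static float[] flatten(int[] pixels, int width, int height) {
        float[] out = new float[height * width * 3];
        int i = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int c = pixels[y * width + x]; // row-major: y selects the row, x the column
                out[i++] = red(c);
                out[i++] = green(c);
                out[i++] = blue(c);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // 1x2 image: one red pixel, one blue pixel
        int[] px = {0xFFFF0000, 0xFF0000FF};
        System.out.println(Arrays.toString(flatten(px, 2, 1)));
        // prints [255.0, 0.0, 0.0, 0.0, 0.0, 255.0]
    }
}
```

Whether the model expects raw 0-255 values or normalized inputs depends on how it was trained; that detail is not in the post.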

I expected the result to give the confidence for each class, but I get this error instead:

    W/native: op_kernel.cc:1273 OP_REQUIRES failed at conv_ops.cc:446 : Resource exhausted: OOM when allocating tensor with shape[1,1357,1357,32] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
    E/TensorFlowInferenceInterface: Failed to run TensorFlow inference with inputs:[input:0, phase_train], outputs:[embeddings:0]
    D/AndroidRuntime: Shutting down VM
    E/AndroidRuntime: FATAL EXCEPTION: main
        Process: com.example.intern_b.myapplication, PID: 16592
        java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.intern_b.myapplication/com.example.intern_b.myapplication.MainActivity}: java.lang.IllegalStateException: OOM when allocating tensor with shape[1,1357,1357,32] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
             [[{{node InceptionResnetV1/Conv2d_2a_3x3/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](InceptionResnetV1/Conv2d_1a_3x3/Relu, InceptionResnetV1/Conv2d_2a_3x3/weights/read)]]
        Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
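The log shows the allocation that fails: a single activation tensor of shape [1,1357,1357,32] in float32, which means the image is being fed at its full 1357x1357 resolution. A quick arithmetic check (plain Java, numbers taken straight from the log) shows why that one tensor alone can exhaust an Android process heap, and how much smaller it gets if the bitmap is downscaled before inference. The 160x160 figure below is an assumption on my part: it is the usual FaceNet/InceptionResnetV1 input size, and this post does not say what this particular model expects.

```java
public class OomMath {
    // Bytes needed for a float32 tensor with the given dimensions.
    static long tensorBytes(long... shape) {
        long n = 4; // 4 bytes per float32 element
        for (long d : shape) n *= d;
        return n;
    }

    public static void main(String[] args) {
        // The activation that failed to allocate, per the log:
        long big = tensorBytes(1, 1357, 1357, 32);
        System.out.println(big / (1024 * 1024) + " MB"); // prints 224 MB -- for one tensor
        // The same layer if the input were downscaled to 160x160 first (assumed input size):
        long small = tensorBytes(1, 160, 160, 32);
        System.out.println(small / 1024 + " KB"); // prints 3200 KB
    }
}
```

So the usual fix is not a TensorFlow setting but resizing the bitmap (e.g. with `Bitmap.createScaledBitmap`) to the resolution the model was trained on before building the input array; every intermediate activation shrinks quadratically with the input side length.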

0 Answers:

There are no answers yet.