I want to track the motion of a point received from the camera, using OpenCV. In onPreviewFrame I get a byte array, convert it to a Mat, and run a template search. I create a second Mat cropped to the desired region, which serves as the search template, and then run the matching against it. However, the result is not what I expect.
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (scanSpace) {
        mainActivity = new MainActivity();
        Log.d(TAG, "OnPreviewFRAME_START");
        Camera.Parameters parameters = camera.getParameters();
        int width = parameters.getPreviewSize().width;
        int height = parameters.getPreviewSize().height;
        // YUV -> Bitmap
        YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
        Bitmap bmp = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
        // Bitmap -> Mat
        Mat fImage = new Mat();
        bmp = bmp.copy(Bitmap.Config.ARGB_8888, true);
        Utils.bitmapToMat(bmp, fImage);
        // Crop the frame to the selected region; this becomes the search template
        Mat crop_fImage = mainActivity.cropImage(fImage);
        Mat result = new Mat();
        Imgproc.matchTemplate(fImage, crop_fImage, result, Imgproc.TM_SQDIFF_NORMED);
        Core.MinMaxLocResult r = Core.minMaxLoc(result);
        Log.d(TAG, "r.minLoc: " + r.minLoc + " r.maxLoc: " + r.maxLoc);
        Mat result2 = new Mat();
        Imgproc.matchTemplate(fImage, crop_fImage, result2, Imgproc.TM_CCOEFF);
        Core.normalize(result2, result2, 0, 1, Core.NORM_MINMAX, -1, new Mat());
        Core.MinMaxLocResult r2 = Core.minMaxLoc(result2);
        Log.d(TAG, "r2.minLoc: " + r2.minLoc + " r2.maxLoc: " + r2.maxLoc);
        // Rectangle corners for drawing
        xPoint_1 = (int) r.minLoc.x;
        yPoint_1 = (int) r.minLoc.y;
        xPoint_2 = (int) r.maxLoc.x;
        yPoint_2 = (int) r.maxLoc.y;
        scanSpace = false;
        camera.setPreviewCallback(this);
    }
}
The rectangle is usually drawn somewhere other than where it should be.
Example: the user selects an object, and a blue rectangle is drawn over it. The result, a red rectangle, is supposed to mark where the matched object was found. https://i.imgur.com/XI2JRb9.png https://i.imgur.com/Kb0FjPa.png
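For reference, one detail that often causes a misplaced rectangle: minMaxLoc returns the top-left corner of the best match (minLoc for TM_SQDIFF/TM_SQDIFF_NORMED, maxLoc for TM_CCOEFF), so the two corners should not come from minLoc and maxLoc of the same result. A minimal plain-Java sketch of how the rectangle is usually derived from the match location plus the template size (class and field names are hypothetical, no OpenCV types):

```java
// Sketch: build the match rectangle from the best-match location.
// matchTemplate + minMaxLoc give the TOP-LEFT corner of the best match;
// the opposite corner is that point offset by the template's dimensions.
public class MatchRect {
    final int x1, y1, x2, y2;

    MatchRect(int matchX, int matchY, int templWidth, int templHeight) {
        this.x1 = matchX;                // top-left corner = best-match location
        this.y1 = matchY;
        this.x2 = matchX + templWidth;   // bottom-right = top-left + template size
        this.y2 = matchY + templHeight;
    }

    public static void main(String[] args) {
        // e.g. best match found at (120, 80) with a 64x48 template
        MatchRect r = new MatchRect(120, 80, 64, 48);
        System.out.println(r.x1 + "," + r.y1 + " -> " + r.x2 + "," + r.y2);
        // prints "120,80 -> 184,128"
    }
}
```

With OpenCV types this corresponds to drawing from `r.minLoc` (for TM_SQDIFF_NORMED) to `r.minLoc` plus `crop_fImage.cols()`/`crop_fImage.rows()`, rather than using `r.maxLoc` as the second corner.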