I am trying to create a camera app with OpenCV 2.4.3.2 that does some OpenCV processing. I want it to support multiple UI orientations, not just landscape.
The problem is that when I change the orientation to portrait, the image comes out sideways.
I understand that I could simply rotate the input image before doing the image processing (and keep the orientation locked to landscape); that works, but it does not solve the problem that the rest of my UI would then be in the wrong orientation.
I also tried using this code to rotate the camera by 90 degrees, but it does not seem to work:
mCamera.setDisplayOrientation(90);
It either has no effect, or sometimes just turns the preview black.
Has anyone managed to do this successfully with OpenCV? My class extends JavaCameraView.
Edit: I have made some progress by rotating the image inside OpenCV, where it is drawn in the CameraBridgeViewBase.java class.
In the deliverAndDrawFrame method:
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
//canvas.drawBitmap(mCacheBitmap, (canvas.getWidth() - mCacheBitmap.getWidth()) / 2, (canvas.getHeight() - mCacheBitmap.getHeight()) / 2, null);
//Change to support portrait view
Matrix matrix = new Matrix();
matrix.preTranslate((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,(canvas.getHeight() - mCacheBitmap.getHeight()) / 2);
if(getResources().getConfiguration().orientation == Configuration.ORIENTATION_PORTRAIT)
matrix.postRotate(90f,(canvas.getWidth()) / 2,(canvas.getHeight()) / 2);
canvas.drawBitmap(mCacheBitmap, matrix, new Paint());
... which is basically the same as just rotating the input image.
This is better, but I obviously want it to be full screen.
Answer 0 (score: 12)
I ran into the same problem when trying to implement OpenCV. I was able to fix it by making the following changes to the deliverAndDrawFrame method.
Rotate the canvas object:
Canvas canvas = getHolder().lockCanvas();
// Rotate canvas to 90 degrees
canvas.rotate(90f, canvas.getWidth()/2, canvas.getHeight()/2);
Resize the bitmap to fill the whole canvas before drawing it:
// Resize
Bitmap bitmap = Bitmap.createScaledBitmap(mCacheBitmap, canvas.getHeight(), canvas.getWidth(), true);
// Use bitmap instead of mCacheBitmap
canvas.drawBitmap(bitmap, new Rect(0,0,bitmap.getWidth(), bitmap.getHeight()), new Rect(
(int)((canvas.getWidth() - mScale*bitmap.getWidth()) / 2),
(int)((canvas.getHeight() - mScale*bitmap.getHeight()) / 2),
(int)((canvas.getWidth() - mScale*bitmap.getWidth()) / 2 + mScale*bitmap.getWidth()),
(int)((canvas.getHeight() - mScale*bitmap.getHeight()) / 2 + mScale*bitmap.getHeight()
)), null);
// Unlock canvas
getHolder().unlockCanvasAndPost(canvas);
Answer 1 (score: 10)
Actually, you can just make the width or height match the parent (full screen).
if (canvas != null) {
Bitmap bitmap = Bitmap.createScaledBitmap(mCacheBitmap, canvas.getHeight(), canvas.getWidth(), true);
canvas.rotate(90,0,0);
float scale = canvas.getWidth() / (float)bitmap.getHeight();
float scale2 = canvas.getHeight() / (float)bitmap.getWidth();
if(scale2 > scale){
scale = scale2;
}
if (scale != 0) {
canvas.scale(scale, scale,0,0);
}
canvas.drawBitmap(bitmap, 0, -bitmap.getHeight(), null);
...
You can also make the preview size larger than the screen; just modify the scale, as in the sketch below.
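This is not part of the original answer, but one way to read that suggestion is to multiply the computed scale by an extra factor before applying it (the overscan value here is made up for illustration):
// Hypothetical tweak: push the preview slightly past the screen edges.
float overscan = 1.2f; // assumed factor; values > 1 crop the preview borders
if (scale != 0) {
    canvas.scale(scale * overscan, scale * overscan, 0, 0);
}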
Answer 2 (score: 8)
I modified CameraBridgeViewBase.java as follows:
protected Size calculateCameraFrameSize(List<?> supportedSizes, ListItemAccessor accessor, int surfaceWidth, int surfaceHeight) {
int calcWidth = 0;
int calcHeight = 0;
if(surfaceHeight > surfaceWidth){
int temp = surfaceHeight;
surfaceHeight = surfaceWidth;
surfaceWidth = temp;
}
And in the deliverAndDrawFrame function:
if (mScale != 0) {
if(canvas.getWidth() > canvas.getHeight()) {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2),
(int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2 + mScale*mCacheBitmap.getWidth()),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2 + mScale*mCacheBitmap.getHeight())), null);
} else {
canvas.drawBitmap(mCacheBitmap, rotateMe(canvas, mCacheBitmap), null);
}
where rotateMe is defined as follows:
private Matrix rotateMe(Canvas canvas, Bitmap bm) {
// TODO Auto-generated method stub
Matrix mtx=new Matrix();
float scale = (float) canvas.getWidth() / (float) bm.getHeight();
mtx.preTranslate((canvas.getWidth() - bm.getWidth())/2, (canvas.getHeight() - bm.getHeight())/2);
mtx.postRotate(90,canvas.getWidth()/2, canvas.getHeight()/2);
mtx.postScale(scale, scale, canvas.getWidth()/2 , canvas.getHeight()/2 );
return mtx;
}
The preview FPS is lower because of the extra computation compared to landscape mode.
Answer 3 (score: 5)
Unfortunately, OpenCV4Android does not support a portrait camera, but there is a way to work around it:
1) Write a custom camera and set it to portrait.
2) Register its preview callback.
3) In onPreviewFrame(byte[] data, Camera camera), create a Mat from the preview bytes:
Mat mat = new Mat(previewSize.height, previewSize.width, CvType.CV_8UC1);
mat.put(0, 0, data);
Core.transpose(mat, mat);
Core.flip(mat, mat, -1); // rotates Mat to portrait
The CvType depends on the preview format your camera is using.
P.S. Do not forget to release all the Mat instances you create once you are done with them.
P.P.S. It is a good idea to manage your camera on a separate thread, so the UI thread is not overloaded while detection is running.
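Not from the original answer: a minimal sketch of steps 1 to 3 above, assuming the default NV21 preview format; everything besides the standard Camera and OpenCV calls is made up for illustration.
// Hypothetical sketch: a preview callback that turns an NV21 frame into a portrait RGBA Mat.
private final Camera.PreviewCallback previewCallback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size previewSize = camera.getParameters().getPreviewSize();
        // NV21 buffer: height + height/2 rows of single-channel data
        Mat yuv = new Mat(previewSize.height + previewSize.height / 2,
                previewSize.width, CvType.CV_8UC1);
        yuv.put(0, 0, data);

        Mat rgba = new Mat();
        Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV420sp2RGBA, 4);

        Core.transpose(rgba, rgba);   // landscape -> portrait
        Core.flip(rgba, rgba, 1);     // flip code depends on camera facing and desired direction

        // ... run detection / drawing on rgba here ...

        yuv.release();
        rgba.release();
    }
};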
Answer 4 (score: 3)
I had the same problem and I have figured it out! Here is my solution.
First, in CameraBridgeViewBase.java, add the initialization of a WindowManager in both constructors:
public CameraBridgeViewBase(Context context, int cameraId) {
super(context);
mCameraIndex = cameraId;
getHolder().addCallback(this);
mMaxWidth = MAX_UNSPECIFIED;
mMaxHeight = MAX_UNSPECIFIED;
windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
}
public CameraBridgeViewBase(Context context, AttributeSet attrs) {
super(context, attrs);
int count = attrs.getAttributeCount();
Log.d(TAG, "Attr count: " + Integer.valueOf(count));
TypedArray styledAttrs = getContext().obtainStyledAttributes(attrs, R.styleable.CameraBridgeViewBase);
if (styledAttrs.getBoolean(R.styleable.CameraBridgeViewBase_show_fps, false))
enableFpsMeter();
mCameraIndex = styledAttrs.getInt(R.styleable.CameraBridgeViewBase_camera_id, -1);
getHolder().addCallback(this);
mMaxWidth = MAX_UNSPECIFIED;
mMaxHeight = MAX_UNSPECIFIED;
windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
styledAttrs.recycle();
}
Then you need to replace the deliverAndDrawFrame(CvCameraViewFrame frame) function as shown below:
protected void deliverAndDrawFrame(CvCameraViewFrame frame) {
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch (Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
int rotation = windowManager.getDefaultDisplay().getRotation();
int degrees = 0;
// config degrees as you need
switch (rotation) {
case Surface.ROTATION_0:
degrees = 90;
break;
case Surface.ROTATION_90:
degrees = 0;
break;
case Surface.ROTATION_180:
degrees = 270;
break;
case Surface.ROTATION_270:
degrees = 180;
break;
}
Matrix matrix = new Matrix();
matrix.postRotate(degrees);
Bitmap outputBitmap = Bitmap.createBitmap(mCacheBitmap, 0, 0, mCacheBitmap.getWidth(), mCacheBitmap.getHeight(), matrix, true);
if (outputBitmap.getWidth() <= canvas.getWidth()) {
mScale = getRatio(outputBitmap.getWidth(), outputBitmap.getHeight(), canvas.getWidth(), canvas.getHeight());
} else {
mScale = getRatio(canvas.getWidth(), canvas.getHeight(), outputBitmap.getWidth(), outputBitmap.getHeight());
}
if (mScale != 0) {
canvas.scale(mScale, mScale, 0, 0);
}
Log.d(TAG, "mStretch value: " + mScale);
canvas.drawBitmap(outputBitmap, 0, 0, null);
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
And add this function:
private float getRatio(int widthSource, int heightSource, int widthTarget, int heightTarget) {
if (widthTarget <= heightTarget) {
return (float) heightTarget / (float) heightSource;
} else {
return (float) widthTarget / (float) widthSource;
}
}
It works nicely. If this answer is useful to you, please mark it as accepted; it helps my reputation.
Answer 5 (score: 2)
All the answers here are hacks. I prefer this solution instead:
Change the JavaCameraView code:
mBuffer = new byte[size];
mCamera.setDisplayOrientation(90); //add this
mCamera.addCallbackBuffer(mBuffer);
Second change:
// if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
// mSurfaceTexture = new SurfaceTexture(MAGIC_TEXTURE_ID);
// mCamera.setPreviewTexture(mSurfaceTexture);
// } else
// mCamera.setPreviewDisplay(null);
mCamera.setPreviewDisplay(getHolder());
Answer 6 (score: 1)
If you are using OpenCV 2.4.9, try this:
1) copy the content of the OpenCV mixed-processing tutorial sample into your code;
2) fix the mismatch errors (the activity name and possibly the layout references);
3) modify your manifest by adding android:screenOrientation="landscape";
4) fix the remaining minor errors and run!!!! bbaamm (it should now work properly)
Note: with this approach the status bar shows up on the right side when the phone is held in portrait. Since we are developing a camera project, I suggest removing the status bar from the preview; a minimal sketch of doing that from the Activity follows.
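This sketch is not part of the original answer. It forces landscape and hides the status bar from the Activity instead of (or in addition to) the manifest attribute; the layout name is hypothetical.
// Hypothetical sketch: lock the Activity to landscape and hide the status bar.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
            WindowManager.LayoutParams.FLAG_FULLSCREEN);
    setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
    setContentView(R.layout.activity_main); // assumed layout name
}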
Hope it helps!
Answer 7 (score: 1)
It seems the new OpenCV CameraBridgeViewBase.java class is too high-level and does not give enough control over the layout of the camera preview. Take a look at my sample code, which is based on some of the older OpenCV samples and uses plain Android code. To use the byte array passed into onPreviewFrame, put() it into a Mat and convert YUV to RGB:
mYuv = new Mat(previewHeight + previewHeight/2, previewWidth, CvType.CV_8UC1);
mYuv.put(0, 0, mBuffer);
Imgproc.cvtColor(mYuv, mRgba, Imgproc.COLOR_YUV420sp2RGBA, 4);
You can still find the old OpenCV4Android samples on the internet, even though they are several releases old. The sample code linked above and the snippet here should be enough to get you started.
Answer 8 (score: 0)
You have to take the following points into account:
So, for a fast and high-resolution solution, I changed JavaCameraView.java and my JNI part. In JavaCameraView.java:
...
if (sizes != null) {
/* Select the size that fits surface considering maximum size allowed */
Size frameSize;
if(width > height)
{
frameSize = calculateCameraFrameSize(sizes, new JavaCameraSizeAccessor(), width, height);
}else{
frameSize = calculateCameraFrameSize(sizes, new JavaCameraSizeAccessor(), height, width);
}
...
mCamera.setParameters(params);
params = mCamera.getParameters();
int bufFrameWidth, bufFrameHeight;
bufFrameWidth = params.getPreviewSize().width;
bufFrameHeight = params.getPreviewSize().height;
if(width > height) {
mFrameWidth = params.getPreviewSize().width;
mFrameHeight = params.getPreviewSize().height;
}else{
mFrameWidth = params.getPreviewSize().height;
mFrameHeight = params.getPreviewSize().width;
}
...
mFrameChain = new Mat[2];
mFrameChain[0] = new Mat(bufFrameHeight + (bufFrameHeight/2), bufFrameWidth, CvType.CV_8UC1);
mFrameChain[1] = new Mat(bufFrameHeight + (bufFrameHeight/2), bufFrameWidth, CvType.CV_8UC1);
AllocateCache();
mCameraFrame = new JavaCameraFrame[2];
mCameraFrame[0] = new JavaCameraFrame(mFrameChain[0], bufFrameWidth, bufFrameHeight);
mCameraFrame[1] = new JavaCameraFrame(mFrameChain[1], bufFrameWidth, bufFrameHeight);
With these changes we make sure the highest resolution available is used for portrait (height and width are swapped in calculateCameraFrameSize). We still receive landscape frames as input in onPreviewFrame(), but create the Bitmap to be drawn in portrait (AllocateCache).
Finally, we need to feed portrait frames to our algorithms, so they detect "portrait" objects and return them for saving and rendering the bitmap. So make one more modification to your Activity:
public Mat rot90(Mat matImage, int rotflag){
//1=CW, 2=CCW, 3=180
Mat rotated = new Mat();
if (rotflag == 1){
rotated = matImage.t();
flip(rotated, rotated, 1); //transpose+flip(1)=CW
} else if (rotflag == 2) {
rotated = matImage.t();
flip(rotated, rotated,0); //transpose+flip(0)=CCW
} else if (rotflag ==3){
flip(matImage, rotated,-1); //flip(-1)=180
} else if (rotflag != 0){ //if not 0,1,2,3:
Log.e(TAG, "Unknown rotation flag("+rotflag+")");
}
return rotated;
}
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
mRgba = rot90(inputFrame.rgba(), 1);
mGray = rot90(inputFrame.gray(), 1);
...
Answer 9 (score: 0)
The answer from the developer "jaiprakashgogi" works for me, but the problem is that the preview is still saved only in landscape. That means that if we set the preview into an ImageView, it shows up as landscape.
The solution above displays the preview in portrait, but it does not persist it as portrait.
I solved that as follows.
Please look at my code here...
public String writeToSDFile(byte[] data, int rotation){
byte[] portraitData=null;
if(rotation==90){
Log.i(TAG,"Rotation is : "+rotation);
Bitmap bitmap= BitmapFactory.decodeByteArray(data,0,data.length);
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedBitmap = Bitmap.createBitmap(bitmap , 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
portraitData=bitmapToByte(rotatedBitmap);
}
File dir=getDirectory();
String imageTime=""+System.currentTimeMillis();
String fileName=Constants.FILE_NAME+imageTime+"."+Constants.IMAGE_FORMAT;
File file = new File(dir, fileName);
try {
FileOutputStream f = new FileOutputStream(file);
if(rotation==90){
f.write(portraitData);
}else {
f.write(data);
}
f.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
Log.i(TAG, "******* File not found. Did you" +
" add a WRITE_EXTERNAL_STORAGE permission to the manifest?");
} catch (IOException e) {
e.printStackTrace();
}
Log.i(TAG,"\n\nFile written to "+file);
return fileName;
}
// convert bitmap to Byte Array
public byte[] bitmapToByte(Bitmap bitmap){
ByteArrayOutputStream outputStream=new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG,100,outputStream);
byte[] array=outputStream.toByteArray();
return array;
}
It solved my problem completely.
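Not part of the original answer: a hypothetical example of how writeToSDFile() might be invoked from a Camera.PictureCallback; in real code the rotation value would be derived from the display orientation rather than hard-coded.
// Hypothetical usage sketch: save a captured JPEG through writeToSDFile().
Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        int rotation = 90; // assumption: device held in portrait
        String savedName = writeToSDFile(data, rotation);
        Log.i(TAG, "Saved capture as " + savedName);
        camera.startPreview(); // resume the preview after takePicture()
    }
};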
Answer 10 (score: 0)
I had portrait orientation in CameraBridgeViewBase, but I had to change JavaCameraView.java in OpenCV. The idea is: after the camera is initialized, do the following:
setDisplayOrientation(mCamera, 90);
mCamera.setPreviewDisplay(getHolder());
with the setDisplayOrientation method defined as:
protected void setDisplayOrientation(Camera camera, int angle){
Method downPolymorphic;
try
{
downPolymorphic = camera.getClass().getMethod("setDisplayOrientation", new Class[] { int.class });
if (downPolymorphic != null)
downPolymorphic.invoke(camera, new Object[] { angle });
}
catch (Exception e1)
{
}
}
Answer 11 (score: 0)
Building on the other answers, I wrote my own version of deliverAndDrawFrame (I also marked with comments where my code starts and ends):
protected void deliverAndDrawFrame(CvCameraViewFrame frame) {
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
if (BuildConfig.DEBUG) {
Log.d(TAG, "mStretch value: " + mScale);
}
// Start of the fix
Matrix matrix = new Matrix();
matrix.preTranslate( ( canvas.getWidth() - mCacheBitmap.getWidth() ) / 2f, ( canvas.getHeight() - mCacheBitmap.getHeight() ) / 2f );
matrix.postRotate( 90f, ( canvas.getWidth()) / 2f, canvas.getHeight() / 2f );
float scale = (float) canvas.getWidth() / (float) mCacheBitmap.getHeight();
matrix.postScale(scale, scale, canvas.getWidth() / 2f , canvas.getHeight() / 2f );
canvas.drawBitmap( mCacheBitmap, matrix, null );
// Back to original OpenCV code
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
The preview now runs in portrait mode, as you can see:
Answer 12 (score: 0)
Thanks to @Kaye Wrobleski for his answer. I extended it to allow both landscape and portrait orientation. Basically it is just a little extra code to easily switch between the default code, which gives a landscape display, and his code for portrait.
Insert his code as a new method in CameraBridgeViewBase.java:
protected void deliverAndDrawFramePortrait(CvCameraViewFrame frame) {
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
// Rotate canvas to 90 degrees (moved inside the null check to avoid a NullPointerException)
canvas.rotate(90f, canvas.getWidth()/2, canvas.getHeight()/2);
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
Log.d(TAG, "mStretch value: " + mScale);
if (mScale != 0) {
// Resize
Bitmap bitmap = Bitmap.createScaledBitmap(mCacheBitmap, canvas.getHeight(), canvas.getWidth(), true);
// Use bitmap instead of mCacheBitmap
canvas.drawBitmap(bitmap, new Rect(0,0,bitmap.getWidth(), bitmap.getHeight()), new Rect(
(int)((canvas.getWidth() - mScale*bitmap.getWidth()) / 2),
(int)((canvas.getHeight() - mScale*bitmap.getHeight()) / 2),
(int)((canvas.getWidth() - mScale*bitmap.getWidth()) / 2 + mScale*bitmap.getWidth()),
(int)((canvas.getHeight() - mScale*bitmap.getHeight()) / 2 + mScale*bitmap.getHeight())), null);
} else {
Bitmap bitmap = Bitmap.createScaledBitmap(mCacheBitmap, canvas.getHeight(), canvas.getWidth(), true);
// Use bitmap instead of mCacheBitmap
canvas.drawBitmap(bitmap, new Rect(0,0,bitmap.getWidth(), bitmap.getHeight()), new Rect(
(int)((canvas.getWidth() - bitmap.getWidth()) / 2),
(int)((canvas.getHeight() - bitmap.getHeight()) / 2),
(int)((canvas.getWidth() - bitmap.getWidth()) / 2 + bitmap.getWidth()),
(int)((canvas.getHeight() - bitmap.getHeight()) / 2 + bitmap.getHeight())), null);
}
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
Then modify JavaCameraView.java.
Add a new variable to keep track of whether we are in portrait or landscape mode:
private boolean portraitMode;
Add two methods to set the orientation mode:
public void setLandscapeMode() {
portraitMode = false;
}
public void setPortraitMode() {
portraitMode = true;
}
Now replace these lines in the run() method of the JavaCameraView CameraWorker class:
if (!mFrameChain[1 - mChainIdx].empty())
deliverAndDrawFrame(mCameraFrame[1 - mChainIdx]);
with these lines:
if (!mFrameChain[1 - mChainIdx].empty()) {
if (!portraitMode) {
deliverAndDrawFrame(mCameraFrame[1 - mChainIdx]);
} else {
deliverAndDrawFramePortrait(mCameraFrame[1 - mChainIdx]);
}
}
To switch between orientations, just call setLandscapeMode() or setPortraitMode() on your JavaCameraView object.
Note that reverse portrait and reverse landscape will still be upside down. You would need to rotate them 180 degrees to get them right side up, which is easily done with OpenCV's warpAffine() method. Also note that when using the rear camera (LENS_FACING_BACK), portrait mode flips the image upside down.
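A minimal sketch (not from the answer) of that 180-degree correction, assuming mRgba is the current frame Mat:
// Hypothetical sketch: rotate a frame by 180 degrees with warpAffine, as mentioned above.
Point center = new Point(mRgba.cols() / 2.0, mRgba.rows() / 2.0);
Mat rot180 = Imgproc.getRotationMatrix2D(center, 180, 1.0);
Mat upright = new Mat();
Imgproc.warpAffine(mRgba, upright, rot180, mRgba.size());
// Core.flip(mRgba, upright, -1) gives the same 180-degree result with less work.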
Answer 13 (score: 0)
Maybe this can help someone. Tested with opencv343 on Android 9. The preview is now full screen and face detection works in both portrait and landscape mode. Some small changes to the CameraBridgeViewBase class:
private final Matrix matrix = new Matrix();
... and change the deliverAndDrawFrame() method:
protected void deliverAndDrawFrame(CvCameraViewFrame frame) {
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
int currentOrientation = getResources().getConfiguration().orientation;
if (currentOrientation == Configuration.ORIENTATION_LANDSCAPE) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
if (BuildConfig.DEBUG)
Log.d(TAG, "mStretch value: " + mScale);
if (mScale != 0) {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2),
(int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2 + mScale*mCacheBitmap.getWidth()),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2 + mScale*mCacheBitmap.getHeight())), null);
} else {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2,
(canvas.getWidth() - mCacheBitmap.getWidth()) / 2 + mCacheBitmap.getWidth(),
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2 + mCacheBitmap.getHeight()), null);
}
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
} else {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
int saveCount = canvas.save();
canvas.setMatrix(matrix);
mScale = Math.max((float) canvas.getHeight() / mCacheBitmap.getWidth(), (float) canvas.getWidth() / mCacheBitmap.getHeight());
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
if (mScale != 0) {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((int)((canvas.getWidth() - mCacheBitmap.getWidth()) - mCacheBitmap.getWidth())/2,
(int)(canvas.getHeight() - mScale*mCacheBitmap.getHeight() - mScale*mCacheBitmap.getHeight()/2),
(int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2 + mScale*mCacheBitmap.getWidth()),
(int)((canvas.getHeight() - mCacheBitmap.getHeight()) / 2 + mScale*mCacheBitmap.getHeight())), null);
} else {
canvas.drawBitmap(mCacheBitmap, new Rect(0, 0, mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2,
(canvas.getWidth() - mCacheBitmap.getWidth()) / 2 + mCacheBitmap.getWidth(),
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2 + mCacheBitmap.getHeight()), null);
}
canvas.restoreToCount(saveCount);
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
}
and in MainActivity:
public Mat rotateMat(Mat matImage) {
Mat rotated = matImage.t();
Core.flip(rotated, rotated, 1);
return rotated;
}
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
MatOfRect faces = new MatOfRect();
int currentOrientation = getResources().getConfiguration().orientation;
if (currentOrientation == Configuration.ORIENTATION_LANDSCAPE) {
mRgba = inputFrame.rgba();
mGray = inputFrame.gray();
int height = mGray.rows();
if (Math.round(height * 0.2) > 0) {
mFaceSize = (int) Math.round(height * 0.2);
}
cascadeClassifier.detectMultiScale(mGray, faces, 1.1, 3, 2,
new Size(mFaceSize, mFaceSize));
Rect[] facesArray = faces.toArray();
for (int i = 0; i < facesArray.length; i++) {
rectangle(mRgba, facesArray[i].tl(), facesArray[i].br(), FACE_RECT_COLOR, 3);
}
} else {
mRgba = inputFrame.rgba();
mGray = rotateMat(inputFrame.gray());
if (mFaceSize == 0) {
int height = mGray.cols();
if (Math.round(height * 0.2) > 0) {
mFaceSize = (int) Math.round(height * 0.2);
}
}
Mat newMat = rotateMat(mRgba);
if(!isBackCameraOn){
flip(newMat, newMat, -1);
flip(mGray, mGray, -1);
}
if (cascadeClassifier != null)
cascadeClassifier.detectMultiScale(mGray, faces, 1.1, 3, 2, new Size(mFaceSize, mFaceSize));
mGray.release();
Rect[] facesArray = faces.toArray();
for (int i = 0; i < facesArray.length; i++) {
rectangle(newMat, facesArray[i].tl(), facesArray[i].br(), FACE_RECT_COLOR, 3);
}
Imgproc.resize(newMat, mRgba, new Size(mRgba.width(), mRgba.height()));
newMat.release();
}
if(!isBackCameraOn){
flip(mRgba, mRgba, 1);
flip(mGray, mGray, 1);
}
return mRgba;
}
Answer 14 (score: 0)
Another solution. I think this one is better:
protected void deliverAndDrawFrame(CvCameraViewFrame frame) {
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
if (BuildConfig.DEBUG)
Log.d(TAG, "mStretch value: " + mScale);
int currentOrientation = getResources().getConfiguration().orientation;
if (currentOrientation == Configuration.ORIENTATION_LANDSCAPE) {
if (mScale != 0) {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2),
(int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2 + mScale*mCacheBitmap.getWidth()),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2 + mScale*mCacheBitmap.getHeight())), null);
} else {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2,
(canvas.getWidth() - mCacheBitmap.getWidth()) / 2 + mCacheBitmap.getWidth(),
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2 + mCacheBitmap.getHeight()), null);
}
} else {
if (mScale != 0) {
Bitmap bitmap = Bitmap.createScaledBitmap(mCacheBitmap, canvas.getHeight(), canvas.getWidth(), true);
canvas.drawBitmap(bitmap, new Rect(0,0,bitmap.getWidth(), bitmap.getHeight()), new Rect(
(int)((canvas.getWidth() - mScale*bitmap.getWidth()) / 2),
(int)(0),
(int)((canvas.getWidth() - mScale*bitmap.getWidth()) / 2 + mScale*bitmap.getWidth()),
(int)((canvas.getHeight()))), null);
} else {
Bitmap bitmap = Bitmap.createScaledBitmap(mCacheBitmap, canvas.getHeight(), canvas.getWidth(), true);
canvas.drawBitmap(bitmap, new Rect(0,0,bitmap.getWidth(), bitmap.getHeight()), new Rect(
(int)((canvas.getWidth() - bitmap.getWidth()) / 2),
(int)(0),
(int)((canvas.getWidth() - bitmap.getWidth()) / 2 + bitmap.getWidth()),
(int)(canvas.getHeight())), null);
}
}
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
and ...
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
MatOfRect faces = new MatOfRect();
int currentOrientation = getResources().getConfiguration().orientation;
if (currentOrientation == Configuration.ORIENTATION_LANDSCAPE) {
mRgba = inputFrame.rgba();
mGray = inputFrame.gray();
int height = mGray.rows();
if (Math.round(height * 0.2) > 0) {
mFaceSize = (int) Math.round(height * 0.2);
}
cascadeClassifier.detectMultiScale(mGray, faces, 1.1, 3, 2,
new Size(mFaceSize, mFaceSize));
Rect[] facesArray = faces.toArray();
for (int i = 0; i < facesArray.length; i++) {
Point center = new Point(facesArray[i].x + facesArray[i].width / 2,
facesArray[i].y + facesArray[i].height / 2);
rectangle(mRgba, facesArray[i].tl(), facesArray[i].br(), FACE_RECT_COLOR, 3);
}
} else {
mRgba = inputFrame.rgba();
mGray = inputFrame.gray();
Mat rotImage = Imgproc.getRotationMatrix2D(new Point(mRgba.cols() / 2,
mRgba.rows() / 2), 90, 1.0);
Imgproc.warpAffine(mRgba, mRgba, rotImage, mRgba.size());
Imgproc.warpAffine(mGray, mGray, rotImage, mRgba.size());
Core.flip(mRgba, mRgba, 1);
Core.flip(mGray, mGray, 1);
int height = mGray.rows();
if (Math.round(height * 0.2) > 0) {
mFaceSize = (int) Math.round(height * 0.2);
}
cascadeClassifier.detectMultiScale(mGray, faces, 1.1, 3, 2,
new Size(mFaceSize, mFaceSize));
Rect[] facesArray = faces.toArray();
for (int i = 0; i < facesArray.length; i++) {
Point center = new Point(facesArray[i].x + facesArray[i].width / 2,
facesArray[i].y + facesArray[i].height / 2);
rectangle(mRgba, facesArray[i].tl(), facesArray[i].br(), FACE_RECT_COLOR, 3);
}
}
return mRgba;
Answer 15 (score: 0)
I do not think there is a way to do this without some pixel manipulation. However, what if we simply modify the matrix that all of those pixels are drawn with? The answer lies partly in the CameraBridgeViewBase.java file.
1. Go to the CameraBridgeViewBase class.
2. Create a function that updates the matrix:
private final Matrix mMatrix = new Matrix();
private void updateMatrix() {
float mw = this.getWidth();
float mh = this.getHeight();
float hw = this.getWidth() / 2.0f;
float hh = this.getHeight() / 2.0f;
float cw = (float)Resources.getSystem().getDisplayMetrics().widthPixels; //Make sure to import Resources package
float ch = (float)Resources.getSystem().getDisplayMetrics().heightPixels;
float scale = cw / (float)mh;
float scale2 = ch / (float)mw;
if(scale2 > scale){
scale = scale2;
}
boolean isFrontCamera = mCameraIndex == CAMERA_ID_FRONT;
mMatrix.reset();
if (isFrontCamera) {
mMatrix.preScale(-1, 1, hw, hh); //MH - this will mirror the camera
}
mMatrix.preTranslate(hw, hh);
if (isFrontCamera){
mMatrix.preRotate(270);
} else {
mMatrix.preRotate(90);
}
mMatrix.preTranslate(-hw, -hh);
mMatrix.preScale(scale,scale,hw,hh);
}
3. Override the onMeasure and layout functions:
@Override
public void layout(int l, int t, int r, int b) {
super.layout(l, t, r, b);
updateMatrix();
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
updateMatrix();
}
4. Replace the existing deliverAndDrawFrame function:
protected void deliverAndDrawFrame(CvCameraViewFrame frame) { //replaces existing deliverAndDrawFrame
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
int saveCount = canvas.save();
canvas.setMatrix(mMatrix);
if (mScale != 0) {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2),
(int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2 + mScale*mCacheBitmap.getWidth()),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2 + mScale*mCacheBitmap.getHeight())), null);
} else {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2,
(canvas.getWidth() - mCacheBitmap.getWidth()) / 2 + mCacheBitmap.getWidth(),
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2 + mCacheBitmap.getHeight()), null);
}
//Restore canvas after draw bitmap
canvas.restoreToCount(saveCount);
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
Answer 16 (score: -1)
I am not entirely sure, but the camera size depends on the screen width. Because the screen width is smaller in portrait, the camera height is tied to that smaller width, so the camera ends up with a low resolution, and the preview image is drawn from that (in CameraBridgeViewBase.java, the rotation of the preview image depends on the width and height of the camera image).
As a solution, use landscape orientation (declare landscape mode for the Activity in the manifest.xml). Because the screen width is larger then, the height is larger as well, and your app gets a high resolution. In addition, you do not have to rotate the camera image, and it does not always have to be displayed full screen. The downside is that the starting point is different. I tried several approaches to get a high-resolution image in portrait orientation, but could not find one.
My app: portrait.
My camera image is 720 x 480 / 1280 x 1080 in landscape.
Answer 17 (score: -3)
Modify the code in JavaCameraView.java as required by this page.
It is really an easy fix:
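// Original lines in JavaCameraView.java: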
Log.d(TAG, "startPreview");
mCamera.startPreview();
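// Changed to the following, setting the display orientation and the preview display before starting the preview: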
Log.d(TAG, "startPreview");
setDisplayOrientation(mCamera, 90);
mCamera.setPreviewDisplay(getHolder());
mCamera.startPreview();