I'm trying to use BoofCV's line detection, based on the example from the BoofCV Android Demo. To do so I copied the relevant classes and set everything up using Android's Camera API. The demo runs in landscape orientation, but my activity needs to run in portrait; once set up, however, the camera image is rotated 90° to the left. Trying to configure the camera accordingly had no effect. I used:
Camera.setDisplayOrientation(90)
Camera.setParameters("orientation", "portrait")
After a while I found out that the problem is not device related (tested on several devices and API levels) and that it has nothing to do with the Camera API, since I managed to get the preview oriented correctly when commenting out the VideoProcessor.init() call.
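Note that setDisplayOrientation() only changes how the on-screen preview is rendered; per the Android documentation it does not affect the order of the NV21 byte array passed to onPreviewFrame(), which is what BoofCV processes, so the processed frames stay in the sensor's landscape layout. For reference, the angle to pass to setDisplayOrientation() can be computed with the formula from the Android Camera documentation; the class and method names below are my own, extracted as a pure function:

```java
final class PreviewRotation {
    /**
     * Clockwise rotation in degrees to pass to Camera.setDisplayOrientation()
     * so the preview appears upright, following the formula from the Android
     * Camera documentation.
     *
     * @param cameraOrientation Camera.CameraInfo.orientation (0, 90, 180 or 270)
     * @param displayRotation   current display rotation in degrees (0, 90, 180 or 270)
     * @param frontFacing       true for the front-facing camera
     */
    static int displayOrientation(int cameraOrientation, int displayRotation, boolean frontFacing) {
        if (frontFacing) {
            int result = (cameraOrientation + displayRotation) % 360;
            return (360 - result) % 360; // compensate for the mirror effect
        }
        return (cameraOrientation - displayRotation + 360) % 360;
    }
}
```

In a portrait activity the display rotation is typically 0 and a back camera's sensor orientation is typically 90, which yields exactly the 90° already being passed above; that is consistent with the rotation having to be handled in the processing/rendering path instead.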
After experimenting for a while I still can't figure out why the VideoProcessor keeps rotating the image to the left...
Here is the code of the VideoProcessor:
public class LineProcessor extends Thread implements VideoProcessing {

    /**
     * Lock for reading and writing images with processing and render
     */
    private final Object lockGui = new Object();
    /**
     * Lock used when converting the video stream.
     */
    private final Object lockConvert = new Object();

    private Paint mPaint;
    private ImageType<GrayU8> imageType;
    private GrayU8 image;
    private GrayU8 image2;

    private volatile boolean requestStop = false;
    private volatile boolean running = false;

    private int outputWidth;
    private int outputHeight;

    private View view;
    private Thread thread;

    private DetectLine detector;
    private FastQueue<LineSegment2D_F32> lines = new FastQueue<LineSegment2D_F32>(LineSegment2D_F32.class, true);

    private Bitmap bitmap;
    private byte[] storage;

    private double scale;
    private double tranX, tranY;

    /**
     * Creates a new Line Processor from a Line Detector
     * @param detector the Line Detector Segment
     */
    public LineProcessor(DetectLine detector) {
        this.imageType = ImageType.single(GrayU8.class);
        this.detector = detector;
        mPaint = new Paint();
        mPaint.setColor(Color.RED);
        mPaint.setStyle(Paint.Style.STROKE);
        mPaint.setStrokeWidth(2.0f);
    }

    @Override
    public void init(View view, Camera camera) {
        synchronized (lockGui) {
            this.view = view;
            Camera.Size size = camera.getParameters().getPreviewSize();
            outputWidth = size.width;
            outputHeight = size.height;
            declareImages(size.width, size.height);
        }
        // start the thread for processing
        running = true;
        start();
    }

    @Override
    public void onDraw(Canvas canvas) {
        synchronized (lockGui) {
            // the process class could have been swapped
            if( image == null )
                return;
            int w = view.getWidth();
            int h = view.getHeight();
            // fill the window and center it
            double scaleX = w/(double)outputWidth;
            double scaleY = h/(double)outputHeight;
            scale = Math.min(scaleX, scaleY);
            tranX = (w - scale*outputWidth)/2;
            tranY = (h - scale*outputHeight)/2;
            canvas.translate((float)tranX, (float)tranY);
            canvas.scale((float)scale, (float)scale);
            render(canvas, scale);
        }
    }

    @Override
    public void convertPreview(byte[] bytes, Camera camera) {
        if( thread == null )
            return;
        synchronized ( lockConvert ) {
            ConvertUtils.nv21ToGray(bytes, image.width, image.height, image);
        }
        // wake up the thread and tell it to do some processing
        thread.interrupt();
    }

    @Override
    public void stopProcessing() {
        if( thread == null )
            return;
        requestStop = true;
        while( running ) {
            // wake the thread up if needed
            thread.interrupt();
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {}
        }
    }

    @Override
    public void run() {
        thread = Thread.currentThread();
        while( !requestStop ) {
            synchronized ( thread ) {
                try {
                    wait();
                    if( requestStop )
                        break;
                } catch (InterruptedException e) {}
            }
            // swap gray buffers so that convertPreview is modifying the copy which is not in use
            synchronized ( lockConvert ) {
                GrayU8 tmp = image;
                image = image2;
                image2 = tmp;
            }
            process(image2);
            view.postInvalidate();
        }
        running = false;
    }

    /**
     * Scaling applied to the drawing canvas
     */
    public double getScale() {
        return scale;
    }

    /**
     * Translation x applied to the drawing canvas
     */
    public double getTranX() {
        return tranX;
    }

    /**
     * Translation y applied to the drawing canvas
     */
    public double getTranY() {
        return tranY;
    }

    public void process(GrayU8 gray) {
        if( detector != null ) {
            List<LineParametric2D_F32> found = detector.detect(gray);
            synchronized ( lockGui ) {
                ConvertUtils.grayToBitmap(gray, bitmap, storage);
                lines.reset();
                for( LineParametric2D_F32 p : found ) {
                    LineSegment2D_F32 ls = ConvertUtils.convert(p, gray.width, gray.height);
                    lines.grow().set(ls.a, ls.b);
                }
            }
        }
    }

    protected void render(Canvas canvas, double imageToOutput) {
        canvas.drawBitmap(bitmap, 0, 0, null);
        for( LineSegment2D_F32 s : lines.toList() ) {
            canvas.drawLine(s.a.x, s.a.y, s.b.x, s.b.y, mPaint);
        }
    }

    protected void declareImages( int width , int height ) {
        image = imageType.createImage(width, height);
        image2 = imageType.createImage(width, height);
        bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        storage = ConvertUtils.declareStorage(bitmap, storage);
    }
}
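(For readers unfamiliar with the pattern: run() and convertPreview() above implement a double-buffer handoff, where the camera callback writes into one gray buffer while the worker thread processes the other, so the two never touch the same buffer at once. A stripped-down, non-Android sketch of the same idea, with my own class and method names:)

```java
/**
 * Minimal double-buffer handoff: a producer writes into the front buffer,
 * a consumer swaps the buffers under the lock and then processes the back
 * buffer without holding the lock.
 */
class DoubleBuffer {
    private final Object lock = new Object();
    private int[] front = new int[4];
    private int[] back = new int[4];

    /** Called by the producer (camera callback) for each new frame. */
    void write(int[] data) {
        synchronized (lock) {
            System.arraycopy(data, 0, front, 0, data.length);
        }
    }

    /** Called by the consumer (worker thread) before processing. */
    int[] swapAndGet() {
        synchronized (lock) {
            int[] tmp = front;
            front = back;
            back = tmp;
        }
        return back; // safe to read without the lock until the next swap
    }
}
```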
The interface my class implements is VideoProcessing.java.
Does anyone have experience with this problem?
Answer (score: 2):
The solution was to change the render function to the following:
protected void render(Canvas canvas, double imageToOutput) {
    canvas.rotate(90, 640f/2, 480f/2);
    canvas.scale(480f/640f, 640f/480f, 640f/2, 480f/2);
    canvas.drawBitmap(bitmap, 0, 0, null);
    for( LineSegment2D_F32 s : lines.toList() ) {
        canvas.drawLine(s.a.x, s.a.y, s.b.x, s.b.y, mPaint);
    }
}
I don't think it's as clean as before (note the hardcoded 640×480 preview size), but it was effectively the only way that worked...
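An alternative to transforming the canvas would be to rotate the grayscale buffer itself before running the detector, so that the detected lines already come out in portrait coordinates and no hardcoded sizes are needed at render time. A minimal sketch of a 90° clockwise rotation of a row-major byte image (this helper is my own, not a BoofCV API, though BoofCV may ship similar utilities):

```java
final class RotateUtil {
    /**
     * Rotates a width x height row-major grayscale image 90 degrees clockwise.
     * Source pixel (x, y) maps to (height - 1 - y, x) in the destination,
     * whose dimensions are height x width.
     */
    static byte[] rotateCW(byte[] src, int width, int height) {
        byte[] dst = new byte[src.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                dst[x * height + (height - 1 - y)] = src[y * width + x];
            }
        }
        return dst;
    }
}
```

With this approach the detector input image and the output Bitmap would have to be declared with swapped dimensions (height × width).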