Here is my situation. It involves aligning a scanned image, which would otherwise lead to incorrect scans. I have to align the scanned image with my Java program.
Here are some more details:
As you know, the person scanning an image may not place it in a perfectly straight position, so I need my program to automatically align the scanned image when it is loaded. The program will be reused on many such scanned images, so it needs to be flexible in this respect.
My question is one of the following:
How do I detect, in Java, the y coordinate of the upper edge of the table and the x coordinate of its leftmost edge? The table is an ordinary table with many cells and thin black borders, printed on white paper (a landscape printout).
If there is a simpler way to automatically align scanned images so that every scanned image has its graphical table aligned to the same x,y coordinates, please share it :).
If you don't know the answer to the questions above, please tell me where I should start. I don't know much about graphics programming in Java, and I have about one month to finish this program. Given my tight schedule, I have to keep the graphics part as simple as possible.
Cheers and thanks.
Answer 0 (score: 4)
Try to start with simpler scenarios and then improve the approach.
The program provided at the end of this answer performs steps 1 to 3: it detects corner points, keeps the ones closest to the four corners of the image as the form's corners, and computes the rotation angle from the two top corners. It is implemented using the Marvin Framework. The image below shows the output image with the detected corners marked.
The program also outputs: Rotation angle: 1.6365770416167182
Source code:
import java.awt.Color;
import java.awt.Point;
import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinAttributes;
import marvin.util.MarvinPluginLoader;
public class FormCorners {

    public FormCorners() {
        // Load plug-in
        MarvinImagePlugin moravec = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.corner.moravec");
        MarvinAttributes attr = new MarvinAttributes();

        // Load image
        MarvinImage image = MarvinImageIO.loadImage("./res/printedForm.jpg");

        // Process and save output image
        moravec.setAttribute("threshold", 2000);
        moravec.process(image, null, attr);
        Point[] boundaries = boundaries(attr);
        image = showCorners(image, boundaries, 12);
        MarvinImageIO.saveImage(image, "./res/printedForm_output.jpg");

        // Print rotation angle (from the two top corners; y is negated because image y grows downward)
        double angle = (Math.atan2((boundaries[1].y*-1)-(boundaries[0].y*-1), boundaries[1].x-boundaries[0].x) * 180 / Math.PI);
        angle = angle >= 0 ? angle : angle + 360;
        System.out.println("Rotation angle:" + angle);
    }

    // Among all detected corner points, picks the one closest to each of the four image corners.
    private Point[] boundaries(MarvinAttributes attr) {
        Point upLeft = new Point(-1, -1);
        Point upRight = new Point(-1, -1);
        Point bottomLeft = new Point(-1, -1);
        Point bottomRight = new Point(-1, -1);
        double ulDistance = 9999, blDistance = 9999, urDistance = 9999, brDistance = 9999;
        double tempDistance = -1;
        int[][] cornernessMap = (int[][]) attr.get("cornernessMap");

        for (int x = 0; x < cornernessMap.length; x++) {
            for (int y = 0; y < cornernessMap[0].length; y++) {
                if (cornernessMap[x][y] > 0) {
                    if ((tempDistance = Point.distance(x, y, 0, 0)) < ulDistance) {
                        upLeft.x = x; upLeft.y = y;
                        ulDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, cornernessMap.length, 0)) < urDistance) {
                        upRight.x = x; upRight.y = y;
                        urDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, 0, cornernessMap[0].length)) < blDistance) {
                        bottomLeft.x = x; bottomLeft.y = y;
                        blDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, cornernessMap.length, cornernessMap[0].length)) < brDistance) {
                        bottomRight.x = x; bottomRight.y = y;
                        brDistance = tempDistance;
                    }
                }
            }
        }
        return new Point[]{upLeft, upRight, bottomRight, bottomLeft};
    }

    // Draws a small red square centered on each detected corner.
    private MarvinImage showCorners(MarvinImage image, Point[] points, int rectSize) {
        MarvinImage ret = image.clone();
        for (Point p : points) {
            ret.fillRect(p.x - (rectSize / 2), p.y - (rectSize / 2), rectSize, rectSize, Color.red);
        }
        return ret;
    }

    public static void main(String[] args) {
        new FormCorners();
    }
}
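The program above stops after printing the rotation angle; the image itself still has to be rotated back. Below is a minimal sketch of that final step using only java.awt (AffineTransform/AffineTransformOp). The file names and the hard-coded angle are placeholders taken from the example above, and the sign of the rotation may need to be flipped depending on which way your scans are tilted:

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class AlignScan {

    // Rotates the scanned image around its center by the detected angle so the form is level.
    // In AWT, a positive angle rotates clockwise on screen; flip the sign if your output
    // comes out tilted the other way.
    public static BufferedImage deskew(BufferedImage src, double angleDegrees) {
        AffineTransform tx = AffineTransform.getRotateInstance(
                Math.toRadians(angleDegrees), src.getWidth() / 2.0, src.getHeight() / 2.0);
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        return op.filter(src, null); // null lets the op allocate a compatible destination
    }

    public static void main(String[] args) throws Exception {
        BufferedImage scan = ImageIO.read(new File("./res/printedForm.jpg"));
        BufferedImage aligned = deskew(scan, 1.6365770416167182); // angle printed by FormCorners
        ImageIO.write(aligned, "jpg", new File("./res/printedForm_aligned.jpg"));
    }
}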
Answer 1 (score: 2)
Edge detection is generally done by enhancing the contrast between neighboring pixels, so that you get an easily detectable line suitable for further processing.
To do this, a "kernel" transforms each pixel according to the pixel's initial value and the values of its neighbors. A good edge-detection kernel amplifies the differences between neighboring pixels and reduces the strength of pixels whose neighborhoods look alike.
I would start by looking at the Sobel operator. It may not return results that are immediately useful to you; however, if you are tackling the problem with little knowledge of the field, it will get you much closer than you are now.
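To make the kernel idea concrete, here is a minimal, self-contained Sobel sketch (the class and method names are mine, not from any library; it reads the blue channel as the gray level, so it assumes the input has already been converted to grayscale):

import java.awt.image.BufferedImage;

public class Sobel {

    // Produces a gradient-magnitude image: bright pixels mark strong edges.
    public static BufferedImage apply(BufferedImage gray) {
        int w = gray.getWidth(), h = gray.getHeight();
        int[][] gx = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};  // horizontal gradient kernel
        int[][] gy = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};  // vertical gradient kernel
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int sx = 0, sy = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        int v = gray.getRGB(x + i, y + j) & 0xFF; // blue channel as gray level
                        sx += gx[j + 1][i + 1] * v;
                        sy += gy[j + 1][i + 1] * v;
                    }
                }
                int mag = Math.min(255, (int) Math.sqrt(sx * sx + sy * sy));
                out.setRGB(x, y, (mag << 16) | (mag << 8) | mag);
            }
        }
        return out;
    }
}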
Once you have some clear, clean edges, you can use larger kernels to detect points where two lines meet in a roughly 90 degree bend. That will probably give you the pixel coordinates of the outer rectangle, which may well be enough for your purposes.
With those outer coordinates, it is still a bit of math to composite the new pixels as averages of the rotated-and-shifted old pixels that "match". The results (especially if you do not know the anti-aliasing math) can be pretty bad, adding blur to the image.
Sharpening filters may be a solution, but they have their own problems, chiefly that they make an image look sharper by adding graininess; use too much and it becomes obvious that the original image was not a high-quality scan.
Answer 2 (score: 1)
A similar problem I worked on in the past basically boiled down to figuring out the orientation of the form, re-aligning it, and re-scaling it, and then I was done. You can use a Hough transform to detect the angular offset of the image (i.e., how much it is rotated), but you still need to detect the boundaries of the form. It also has to accommodate the boundaries of the paper itself.
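As a rough illustration of that idea, here is a sketch of Hough-based skew estimation in plain Java. The ±10 degree search window, the pixel-sampling stride, and the darkness threshold are my assumptions; the skew is taken from the single strongest line, which on a scanned form is usually a long table border:

import java.awt.image.BufferedImage;

public class HoughSkew {

    // Estimates the skew (in degrees) of near-horizontal lines via a coarse Hough transform.
    public static double estimateSkew(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        double minTheta = 80.0, step = 0.1;   // normals between 80 and 100 degrees,
        int thetaSteps = 201;                 // i.e. lines within +/-10 degrees of horizontal
        int maxRho = (int) Math.hypot(w, h);
        int[][] acc = new int[thetaSteps][2 * maxRho + 1];

        for (int y = 0; y < h; y += 2) {      // sample every other pixel for speed
            for (int x = 0; x < w; x += 2) {
                if ((img.getRGB(x, y) & 0xFF) > 128) continue;   // only dark pixels vote
                for (int t = 0; t < thetaSteps; t++) {
                    double theta = Math.toRadians(minTheta + t * step);
                    int rho = (int) Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                    int idx = Math.max(0, Math.min(2 * maxRho, rho + maxRho));
                    acc[t][idx]++;
                }
            }
        }

        // The strongest (theta, rho) cell is the most prominent line; its normal angle
        // minus 90 degrees is that line's tilt relative to the horizontal axis.
        int bestT = 0, bestVotes = -1;
        for (int t = 0; t < thetaSteps; t++) {
            for (int r = 0; r <= 2 * maxRho; r++) {
                if (acc[t][r] > bestVotes) { bestVotes = acc[t][r]; bestT = t; }
            }
        }
        return (minTheta + bestT * step) - 90.0;
    }
}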
This was a lucky break for me, because the scan essentially showed a black-and-white image sitting in the middle of a large black border.
Your image should now be nicely aligned. I did this to normalize the orientation of MRI scans of whole human brains.
You also know there is a huge black border around the actual image. You simply remove rows from the top and bottom, and columns from the sides, of the image until they are all gone. Beforehand, you can temporarily apply a 7x7 median or mode filter to a copy of the image; it helps keep fingerprints, dirt, and the like from leaving bits of border behind in the final image.
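A minimal sketch of that trimming step for the top and bottom border (the "more than half the pixels are dark" test and the brightness threshold of 64 are my assumptions; the left and right columns would be handled the same way):

import java.awt.image.BufferedImage;

public class BorderTrim {

    // Returns true if more than half of the pixels in row y are dark.
    private static boolean rowMostlyDark(BufferedImage img, int y) {
        int dark = 0;
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y);
            int lum = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
            if (lum < 64) dark++;
        }
        return dark * 2 > img.getWidth();
    }

    // Removes dark scanner-border rows from the top and bottom of the image.
    public static BufferedImage cropTopBottom(BufferedImage img) {
        int top = 0, bottom = img.getHeight() - 1;
        while (top < bottom && rowMostlyDark(img, top)) top++;
        while (bottom > top && rowMostlyDark(img, bottom)) bottom--;
        return img.getSubimage(0, top, img.getWidth(), bottom - top + 1);
    }
}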
Answer 3 (score: 1)
I looked into these libraries, but in the end I found it more convenient to write my own edge-detection method.
The class below detects the black/gray edges of a scanned sheet of paper that has such edges, and returns the x and y coordinates of the paper's edge, measured either from the rightmost end (reverse = true) or from the left edge (reverse = false) for x, and from the bottom (reverse = true) or from the top edge (reverse = false) for y. In addition, the program takes a range for the vertical edge (rangex) and one for the horizontal edge (rangey), both measured in pixels. The ranges determine which of the sampled points are treated as outliers.
The program makes 4 vertical cuts and 4 horizontal cuts at the coordinates given in the arrays and records where the dark points start. It uses the range to eliminate outliers; occasionally a small speck on the paper can produce an outlying point. The smaller the range, the fewer outliers get through; however, the edge is sometimes slightly tilted, so you do not want the range to be too small either.
Have fun. It works perfectly for me.
import java.awt.image.BufferedImage;
import java.awt.Color;
import java.util.ArrayList;
import java.lang.Math;
import java.awt.Point;
public class EdgeDetection {

    // public App ap;  // leftover field referencing a class from the author's project; not needed here

    public int[] horizontalCuts = {120, 220, 320, 420};
    public int[] verticalCuts = {300, 350, 375, 400};

    public void printEdgesTest(BufferedImage image, boolean reversex, boolean reversey, int rangex, int rangey) {
        int[] mx = horizontalCuts;
        int[] my = verticalCuts;

        // You are getting edge points here.
        // The last parameter (true) asks for a vertical edge: the image is scanned along x
        // at the y positions in horizontalCuts. reversex decides whether each scan starts
        // at x = 0 (left edge) or at x = image.getWidth()-1 (right edge).
        int[] xEdges = getEdges(image, mx, reversex, true);
        int edgex = getEdge(xEdges, rangex);
        for (int x = 0; x < xEdges.length; x++) {
            System.out.println("EDGE = " + xEdges[x]);
        }
        System.out.println("THE EDGE = " + edgex);

        // The last parameter (false) asks for a horizontal edge: the image is scanned along y
        // at the x positions in verticalCuts. reversey decides whether each scan starts
        // at y = 0 (top edge) or at y = image.getHeight()-1 (bottom edge).
        int[] yEdges = getEdges(image, my, reversey, false);
        int edgey = getEdge(yEdges, rangey);
        for (int y = 0; y < yEdges.length; y++) {
            System.out.println("EDGE = " + yEdges[y]);
        }
        System.out.println("THE EDGE = " + edgey);
    }

    // This function takes an array of coordinates, detects outliers,
    // and computes the average of the non-outlier points.
    public int getEdge(int[] edges, int range) {
        ArrayList<Integer> result = new ArrayList<Integer>();
        boolean[] passes = new boolean[edges.length];
        int[][] differences = new int[edges.length][edges.length - 1];

        // This code segment saves the differences between the points into an array
        for (int n = 0; n < edges.length; n++) {
            for (int m = 0; m < edges.length; m++) {
                if (m < n) {
                    differences[n][m] = edges[n] - edges[m];
                } else if (m > n) {
                    differences[n][m - 1] = edges[n] - edges[m];
                }
            }
        }

        // This array determines which points are outliers (a point passes if it falls within range of another point)
        for (int n = 0; n < edges.length; n++) {
            passes[n] = false;
            for (int m = 0; m < edges.length - 1; m++) {
                if (Math.abs(differences[n][m]) < range) {
                    passes[n] = true;
                    System.out.println("EDGECHECK = TRUE" + n);
                    break;
                }
            }
        }

        // Create a new list using only the valid points
        for (int i = 0; i < edges.length; i++) {
            if (passes[i]) {
                result.add(edges[i]);
            }
        }

        // Calculate the rounded mean... This will be the x/y coordinate of the edge.
        // Whether these are x or y values depends on the "verticalEdge" parameter used to compute the edges array.
        int divisor = result.size();
        int addend = 0;
        double mean = 0;
        for (Integer i : result) {
            addend += i;
        }
        mean = (double) addend / (double) divisor;

        // Returns the mean of the valid points: this is the x or y coordinate of your calculated edge.
        if (mean - (int) mean >= .5) {
            System.out.println("MEAN " + mean);
            return (int) mean + 1;
        } else {
            System.out.println("MEAN " + mean);
            return (int) mean;
        }
    }

    // This function finds "dark" points, which include light gray, to detect edges.
    // reverse - when true, starts scanning at x = image.getWidth()-1 (or y = image.getHeight()-1) and counts down to 0;
    //           when false, starts at 0 and counts up.
    // verticalEdge - determines whether you want to detect a vertical edge or a horizontal edge
    // arr[] - the coordinates of the vertical or horizontal cuts to perform;
    //         set this array according to the graphical layout of your scanned image
    // image - the image whose black/white edges you want to detect
    public int[] getEdges(BufferedImage image, int[] arr, boolean reverse, boolean verticalEdge) {
        int red = 255;
        int green = 255;
        int blue = 255;
        int[] result = new int[arr.length];
        for (int n = 0; n < arr.length; n++) {
            for (int m = reverse ? (verticalEdge ? image.getWidth() : image.getHeight()) - 1 : 0;
                 reverse ? m >= 0 : m < (verticalEdge ? image.getWidth() : image.getHeight()); ) {
                Color c = new Color(image.getRGB(verticalEdge ? m : arr[n], verticalEdge ? arr[n] : m));
                red = c.getRed();
                green = c.getGreen();
                blue = c.getBlue();
                // Determine if the point is considered "dark" or not.
                // Modify the range if you want to include only really dark spots.
                // Occasionally, though, the edge might be blurred out, and light gray helps.
                if (red < 239 && green < 239 && blue < 239) {
                    result[n] = m;
                    break;
                }
                // Count forwards or backwards depending on the reverse variable
                if (reverse) {
                    m--;
                } else {
                    m++;
                }
            }
        }
        return result;
    }
}
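For completeness, a small hypothetical driver showing one way to call the class above (the image path, the reverse flags, and the 5-pixel ranges are placeholders to adapt to your own scans; the default cut coordinates assume the scan is at least roughly 420 pixels in each dimension):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class EdgeDetectionDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage scan = ImageIO.read(new File("./res/printedForm.jpg")); // placeholder path
        EdgeDetection detector = new EdgeDetection();
        // Scan left-to-right (reversex = false) and top-to-bottom (reversey = false),
        // treating sampled points more than 5 px away from every other point as outliers.
        detector.printEdgesTest(scan, false, false, 5, 5);
    }
}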