I'm completely new to OpenCV. To practice, I decided to build a "Sudoku solver". So far, this is what I've managed to do:
public Mat processImage(final Mat originalImage, final CvCameraViewFrame frame) {
    image = originalImage.clone();
    image = frame.gray();
    /*
    We load the image in grayscale mode. We don't want to bother with the colour information,
    so we just skip it. Next, we create a blank image of the same size. This image will hold
    the actual outer box of the puzzle.
    */
    Mat outerBox = new Mat(image.size(), CV_8UC1);
    /*
    Blur the image a little. This smooths out the noise a bit and makes extracting the grid
    lines easier.
    */
    GaussianBlur(image, image, new Size(11, 11), 0);
    /*
    With the noise smoothed out, we can now threshold the image. The image can have varying
    illumination levels, so a good choice for a thresholding algorithm is an adaptive
    threshold. It calculates a threshold level for several small windows in the image.
    The threshold level is calculated from the mean level in each window, so it stays
    independent of the illumination.
    Here the mean is taken over a 5x5 window and 2 is subtracted from it.
    This is the threshold level for every pixel.
    */
    adaptiveThreshold(image, outerBox, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 5, 2);
    /*
    Since we're interested in the borders, and they are black, we invert the image outerBox.
    Then the borders of the puzzle are white (along with other noise).
    */
    bitwise_not(outerBox, outerBox);
    /*
    This thresholding operation can disconnect certain connected parts (like lines).
    So dilating the image once will fill up any small "cracks" that might have crept in.
    */
    Mat kernel = new Mat(3, 3, outerBox.type()) {
        {
            put(0, 0, 0);
            put(0, 1, 1);
            put(0, 2, 0);
            put(1, 0, 1);
            put(1, 1, 1);
            put(1, 2, 1);
            put(2, 0, 0);
            put(2, 1, 1);
            put(2, 2, 0);
        }
    };
    dilate(outerBox, outerBox, kernel);
    final List<MatOfPoint> contours = new ArrayList<>();
    // RETR_EXTERNAL is the retrieval-mode constant that belongs here; CV_SHAPE_RECT only happens to share the value 0, so the original call still worked.
    findContours(outerBox, contours, new Mat(outerBox.size(), outerBox.type()), RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    final Integer biggestPolygonIndex = getBiggestPolygonIndex(contours);
    if (biggestPolygonIndex != null) {
        setGreenFrame(contours, biggestPolygonIndex, originalImage);
        return originalImage;
    }
    return outerBox;
}
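A side note I'm adding here (not part of my original code): if I'm not mistaken, the 3x3 cross kernel built element by element above can also be obtained from OpenCV's getStructuringElement helper, which would shorten the dilation step to something like this (just a sketch, assuming the same static imports as the rest of the code):

    // Sketch only: MORPH_CROSS should produce the same 3x3 cross-shaped kernel as the manual put() calls above.
    Mat crossKernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3, 3));
    dilate(outerBox, outerBox, crossKernel);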
The result of processImage ends up looking like this:
So everything inside the green area is my puzzle. My question is how to extract it and run some digit recognition on it.
It seems to me the first logical step would be to cut out that area, but I don't know how to get it. So how do I get the corners of the green contour?
Any help/hints are welcome.
Answer 0 (score: 0)
After some experimenting I was able to solve it:
final List<MatOfPoint> contours = new ArrayList<>();
// RETR_EXTERNAL keeps only the outermost contours; the hierarchy Mat is not used further.
findContours(outerBox, contours, new Mat(outerBox.size(), outerBox.type()), RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
final Integer biggestPolygonIndex = getBiggestPolygonIndex(contours);
if (biggestPolygonIndex != null) {
    final MatOfPoint biggest = contours.get(biggestPolygonIndex);
    // Corners of the biggest contour, i.e. the puzzle outline.
    List<Point> corners = getCornersFromPoints(biggest.toList());
    System.out.println("corner size " + corners.size());
    for (Point corner : corners) {
        // Mark each corner with a cross (marker type 0 = MARKER_CROSS).
        drawMarker(originalImage, corner, new Scalar(0, 191, 255), 0, 20, 3);
    }
    setGreenFrame(contours, biggestPolygonIndex, originalImage);
}
private List<Point> getCornersFromPoints(final List<Point> points) {
    // Track the bounding box of all contour points. Start from the extreme values so that
    // points lying on the image border (x == 0 or y == 0) are handled correctly.
    double minX = Double.MAX_VALUE;
    double minY = Double.MAX_VALUE;
    double maxX = -Double.MAX_VALUE;
    double maxY = -Double.MAX_VALUE;
    for (Point point : points) {
        double x = point.x;
        double y = point.y;
        if (x < minX) {
            minX = x;
        }
        if (y < minY) {
            minY = y;
        }
        if (x > maxX) {
            maxX = x;
        }
        if (y > maxY) {
            maxY = y;
        }
    }
    List<Point> corners = new ArrayList<>(4);
    corners.add(new Point(minX, minY)); // top-left
    corners.add(new Point(minX, maxY)); // bottom-left
    corners.add(new Point(maxX, minY)); // top-right
    corners.add(new Point(maxX, maxY)); // bottom-right
    return corners;
}
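Note that getCornersFromPoints only returns the corners of the axis-aligned bounding box, which is good enough as long as the grid is photographed roughly head-on. For a photo taken at an angle, one idea I have not actually used (just a sketch, reusing the biggest contour from above and the same static imports) would be to approximate the contour with a four-point polygon via approxPolyDP and take its vertices as the corners:

    // Sketch, not part of my working code: approximate the outline with a polygon.
    MatOfPoint2f contour2f = new MatOfPoint2f(biggest.toArray());
    double perimeter = arcLength(contour2f, true);
    MatOfPoint2f approx = new MatOfPoint2f();
    approxPolyDP(contour2f, approx, 0.02 * perimeter, true);
    // For a clean grid outline this should yield exactly 4 (possibly skewed) corner points.
    List<Point> skewedCorners = approx.toList();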
private Integer getBiggestPolygonIndex(final List<MatOfPoint> contours) {
    double maxVal = 0;
    Integer maxValIdx = null;
    for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
        double contourArea = contourArea(contours.get(contourIdx));
        if (maxVal < contourArea) {
            maxVal = contourArea;
            maxValIdx = contourIdx;
        }
    }
    return maxValIdx;
}
private void setGreenFrame(final List<MatOfPoint> contours,
                           final int biggestPolygonIndex,
                           Mat originalImage) {
    drawContours(originalImage, contours, biggestPolygonIndex, new Scalar(124, 252, 0), 3);
}
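To actually cut the puzzle out for digit recognition, a possible next step (again just a sketch of the idea; cropPuzzle and the size parameter are names I'm making up here, not code from the post above) would be to warp the area spanned by the four corners into a square image:

    // Sketch: warp the quad spanned by the corners (ordered as returned by
    // getCornersFromPoints: top-left, bottom-left, top-right, bottom-right) into a size x size image.
    private Mat cropPuzzle(final Mat originalImage, final List<Point> corners, final int size) {
        MatOfPoint2f src = new MatOfPoint2f(
                corners.get(0),   // top-left
                corners.get(2),   // top-right
                corners.get(3),   // bottom-right
                corners.get(1));  // bottom-left
        MatOfPoint2f dst = new MatOfPoint2f(
                new Point(0, 0),
                new Point(size, 0),
                new Point(size, size),
                new Point(0, size));
        Mat transform = getPerspectiveTransform(src, dst);
        Mat puzzle = new Mat();
        warpPerspective(originalImage, puzzle, transform, new Size(size, size));
        return puzzle;
    }

The warped square could then be sliced into a 9x9 grid of cells before feeding each cell to whatever digit recognizer is used.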