So this code is able to split the rooms it identifies into different colours, as shown below. The question is: how do I get the area of the coloured rooms (like those blue rooms)? The scale of the rooms is 1m:150m.
The first image is the output I need to measure, the second image is the one I used to run the code, and the third image is the original for reference. Thanks in advance.
import cv2
import numpy as np


def find_rooms(img, noise_reduction=10, corners_threshold=0.0000001,
               room_close=2, gap_in_wall_threshold=0.000001):
    # :param img: grey scale image of rooms, already eroded and doors removed etc.
    # :param noise_reduction: Minimal contour area to keep; smaller blobs are removed as noise.
    # :param corners_threshold: Threshold for corners to retain; a higher value removes more of the house.
    # :param room_close: Maximum line length to add to close off open doors.
    # :param gap_in_wall_threshold: Minimum number of pixels to identify a component as a room instead of a hole in the wall.
    # :return: rooms: list of numpy arrays containing boolean masks for each detected room
    #          colored_house: image with each detected room filled with a random colour.
    assert 0 <= corners_threshold <= 1

    # Remove noise left from door removal
    img[img < 128] = 0
    img[img > 128] = 255
    contours, _ = cv2.findContours(~img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(img)
    for contour in contours:
        area = cv2.contourArea(contour)
        if area > noise_reduction:
            cv2.fillPoly(mask, [contour], 255)
    img = ~mask

    # Detect corners with Harris corner detection (you can play with the parameters here)
    dst = cv2.cornerHarris(img, 4, 3, 0.000001)
    dst = cv2.dilate(dst, None)
    corners = dst > corners_threshold * dst.max()

    # Draw lines to close the rooms off by adding a line between corners on the same x or y coordinate.
    # This gets some false positives.
    # Could try disallowing drawing through other existing lines; needs testing.
    for y, row in enumerate(corners):
        x_same_y = np.argwhere(row)
        for x1, x2 in zip(x_same_y[:-1], x_same_y[1:]):
            if x2[0] - x1[0] < room_close:
                color = 0
                cv2.line(img, (int(x1[0]), y), (int(x2[0]), y), color, 1)
    for x, col in enumerate(corners.T):
        y_same_x = np.argwhere(col)
        for y1, y2 in zip(y_same_x[:-1], y_same_x[1:]):
            if y2[0] - y1[0] < room_close:
                color = 0
                cv2.line(img, (x, int(y1[0])), (x, int(y2[0])), color, 1)

    # Mark the outside of the house as black
    contours, _ = cv2.findContours(~img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
    biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    mask = np.zeros_like(mask)
    cv2.fillPoly(mask, [biggest_contour], 255)
    img[mask == 0] = 0

    # Find the connected components in the house
    ret, labels = cv2.connectedComponents(img)
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
    unique = np.unique(labels)
    rooms = []
    for label in unique:
        component = labels == label
        if img[component].sum() == 0 or np.count_nonzero(component) < gap_in_wall_threshold:
            color = 0
        else:
            rooms.append(component)
            color = np.random.randint(0, 255, size=3)
        img[component] = color
    return rooms, img


# Read the image as grey scale
img = cv2.imread('output16.png', 0)
rooms, colored_house = find_rooms(img.copy())
cv2.imshow('result', colored_house)
cv2.waitKey()
cv2.destroyAllWindows()
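For context, the rooms list returned by find_rooms holds one boolean mask per detected room, so a per-room pixel count is easy to obtain. The sketch below is only illustrative; square_metres_per_pixel is an assumed placeholder that would have to be derived from the drawing's actual scale.

# Assumed conversion factor from one pixel to square metres (placeholder, not from the original question)
square_metres_per_pixel = 0.01

for i, room_mask in enumerate(rooms):
    pixel_count = np.count_nonzero(room_mask)
    print('Room %d: %d pixels, approx. %.2f m2'
          % (i, pixel_count, pixel_count * square_metres_per_pixel))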
Answer 0 (score: 3)
OK, let's assume you read in the segmented picture with OpenCV:
import cv2
import numpy as np
# reading the segmented picture in coloured mode
image = cv2.imread("path/to/segmented/coloured/picture.jpg", cv2.IMREAD_COLOR)
Now, let's say you know the size of the whole picture in square metres. If, for example, the picture covers 150m x 70m, the total size is 150 x 70 = 10500 m². Let's declare that as a variable:
total_size = 10500
You also want to know the total number of pixels in the picture. If, for example, your picture is 750 x 350 pixels, you have 262,500 pixels. You can get this with:
total_number_of_pixels = image.shape[0]*image.shape[1]
Now, as I said in the comments, you also want to know the number of pixels of each unique colour in your segmented picture. You can do that with:
# count all occurrences of unique colours in your picture
unique, counts = np.unique(image.reshape(-1, image.shape[2]), axis=0, return_counts=True)
coloured_pixel_counts = sorted(zip(unique, counts), key=lambda x: x[1])
Now, all that is left to do is a cross multiplication, which can be done like this:
rooms = []
for colour, pixel_count in coloured_pixel_counts:
    rooms.append((colour, (pixel_count / total_number_of_pixels) * total_size))
You should now have a list of all the colours together with the approximate size, in square metres, of the rooms of each colour.
Note, however, that you will most likely have to subset this list to the colours you are interested in, since some of the colours do not really seem to correspond to rooms in the segmented picture...
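As an illustration of that subsetting step, here is a minimal sketch that keeps only one colour of interest; the room_colour value below is a hypothetical placeholder (pure blue in OpenCV's BGR order) and would need to be replaced by the actual colour of the rooms in your segmented picture.

# Hypothetical colour of the rooms of interest, in BGR order (placeholder value)
room_colour = np.array([255, 0, 0], dtype=np.uint8)

# Keep only the entries whose colour matches the colour of interest
rooms_of_interest = [(colour, size) for colour, size in rooms
                     if np.array_equal(colour, room_colour)]
print(rooms_of_interest)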
Again, please ask if anything is unclear!
Answer 1 (score: -1)
So the measurement will be based on pixels, and you need to know the maximum and minimum range of the RGB values of the colour you want to "measure". I ran this code on your picture to find the percentage of the green area out of the whole area of the house, and got the following result:
The number of filtered pixels is: 331213 Which counts for %5 of the house
import cv2
import numpy as np
import math
img = cv2.imread('22I7X.png')
#Defining wanted color range
filteredColorMin = np.array([36, 0, 0], np.uint8)     # Min range
filteredColorMax = np.array([70, 255, 255], np.uint8)  # Max range
#Find all the pixels in the wanted color range
dst = cv2.inRange(img, filteredColorMin, filteredColorMax)
#count non-zero values from filtered range
numFilteredColor = cv2.countNonZero(dst)
#Getting total number of pixels in image to get the percentage of the filtered pixels from the total pixels
numTotalPixels = img.shape[0] * img.shape[1]
print('The number of filtered pixels is: ' + str(numFilteredColor) + " Which counts for %" + str(math.ceil((numFilteredColor/numTotalPixels)*100)) + " of the house")
cv2.imshow("original image",img)
cv2.waitKey(0)
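For completeness, here is a minimal sketch of how the filtered pixel count could be turned into an approximate area in square metres once the real-world size of the whole floor plan is known; the total_house_area value below is an assumed placeholder, not something taken from the question.

# Assumed total floor area covered by the picture, in square metres (placeholder value)
total_house_area = 10500

# Approximate area of the filtered (green) region, by simple cross multiplication
filtered_area = (numFilteredColor / numTotalPixels) * total_house_area
print('Approximate filtered area: ' + str(round(filtered_area, 2)) + ' m2')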