I want to use the x, y coordinates of a detected object to calculate the depth distance.
In the image below, I used background subtraction in OpenCV to detect and track new objects that have entered the camera's view. I can get the x, y coordinates fairly easily, but I am having trouble getting the z depth with the RealSense SDK. Is there any way to feed the x, y coordinates into the RealSense SDK and get the z depth back?
I am using OpenCV Python and RealSense SDK 2 as reference.
The goal is to get the z depth at the midpoint of the bounding box:
import numpy as np
import cv2 as cv
import pyrealsense2 as rs

# Create a pipeline and enable the depth and colour streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)

# Start pipeline
profile = pipeline.start(config)

erodeKernel = cv.getStructuringElement(cv.MORPH_RECT, (5, 5))
fgbg = cv.createBackgroundSubtractorMOG2()

while True:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    colour_frame = frames.get_color_frame()

    color_image = np.asanyarray(colour_frame.get_data())
    depth_image = np.asanyarray(depth_frame.get_data())

    # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
    depth_colormap = cv.applyColorMap(cv.convertScaleAbs(depth_image, alpha=0.03), cv.COLORMAP_JET)

    # Background subtraction on the blurred colour image
    blur = cv.GaussianBlur(color_image, (5, 5), 0)
    fgmask = fgbg.apply(blur)

    # OpenCV 3.x returns (image, contours, hierarchy); in OpenCV 4.x drop im2
    im2, contours, hierarchy = cv.findContours(fgmask.copy(), cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)

    for c in contours:
        if cv.contourArea(c) < 200:
            continue
        (x, y, w, h) = cv.boundingRect(c)
        cv.rectangle(color_image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv.imshow('RealSense', color_image)
    cv.imshow("Depth", depth_colormap)
    cv.imshow('Mask', fgmask)

    if cv.waitKey(25) == ord('q'):
        break

cv.destroyAllWindows()
pipeline.stop()
Answer (score: 1)
It turns out there is a fairly simple solution. After digging through the C++ examples, I found that the RealSense SDK provides a function called get_distance(x, y), which returns the depth distance for a given x, y coordinate.
Note that this function works exactly the same in Python, but it must be called on the depth frame, and x and y must be cast to integers:
pipeline = rs.pipeline()
config = rs.config()
profile = pipeline.start(config)

while True:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    # x, y are the pixel coordinates of the detected object (e.g. from the bounding box)
    zDepth = depth_frame.get_distance(int(x), int(y))
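To tie this back to the bounding boxes from the question, below is a minimal sketch (not part of the original answer) that samples get_distance at the midpoint of each detected box. It additionally aligns the depth frame to the colour frame with rs.align, on the assumption that you want the colour-image pixel coordinates to index the depth frame correctly; the stream configuration is copied from the question.

import numpy as np
import cv2 as cv
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth to the colour stream so colour-pixel coordinates line up with the depth frame
align = rs.align(rs.stream.color)
fgbg = cv.createBackgroundSubtractorMOG2()

try:
    while True:
        frames = pipeline.wait_for_frames()
        aligned = align.process(frames)
        depth_frame = aligned.get_depth_frame()
        colour_frame = aligned.get_color_frame()
        if not depth_frame or not colour_frame:
            continue

        color_image = np.asanyarray(colour_frame.get_data())
        fgmask = fgbg.apply(cv.GaussianBlur(color_image, (5, 5), 0))
        # [-2] picks the contour list in both OpenCV 3.x and 4.x return conventions
        contours = cv.findContours(fgmask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)[-2]

        for c in contours:
            if cv.contourArea(c) < 200:
                continue
            x, y, w, h = cv.boundingRect(c)
            # Depth in metres at the midpoint of the bounding box
            zDepth = depth_frame.get_distance(int(x + w / 2), int(y + h / 2))
            cv.rectangle(color_image, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv.putText(color_image, "%.2f m" % zDepth, (x, y - 5),
                       cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

        cv.imshow('RealSense', color_image)
        if cv.waitKey(25) == ord('q'):
            break
finally:
    pipeline.stop()
    cv.destroyAllWindows()

One caveat: get_distance returns the depth of the single pixel at the midpoint, which may land on a hole (zero depth) or on the background; averaging a small patch of the depth image inside the box can give a more stable reading.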