Question:
The goal is to create a disparity map from two parallel cameras. The computation itself is running and I get a live disparity map, but it only shows outlines instead of per-pixel information, which is not what a disparity map is supposed to do.
What I have tried:
I tried the Tsukuba example (the corresponding lines are commented out in the code below), and it worked, which proves that the functions I am using work in principle.
The result of my code is here: https://imgur.com/a/bIDmdkk (I probably don't have the reputation needed to embed images). As you can see in that image, only the outline of my face shows up, i.e. a contour. That contour reacts to my actual distance, getting brighter or darker, but the rest of the image stays dark.
With all parameters commented out (as in the example) it also works now, but with a lot of speckles.
I have also tried just about every combination of numDisparities and blockSize.
Changing the positions of the cameras relative to each other changes the result, but not drastically. I made sure they are parallel to each other, looking side by side.
Edit: I made a few modifications and got the following result: https://imgur.com/a/m2o9FOE. Compared to the previous result it has more features, but also more noise. (This uses fewer disparities and a different color conversion.)
Solved: [I tried running stereo.compute in the while loop with BGR images, which did not work. The Tsukuba example images are in color, so maybe there was a wrong data type somewhere that I didn't see. Everything is currently uint8.] => I had forgotten that imread("", 0) reads the image as grayscale. So in that respect everything runs as expected.
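(A quick check of the data type, assuming the Tsukuba files are in the working directory:)

import cv2 as cv

imgL = cv.imread('tsuL.png', 0)   # flag 0 is cv.IMREAD_GRAYSCALE
print(imgL.dtype, imgL.shape)     # expected: uint8 and a 2-D shape, i.e. single channel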
So what is the difference between my left/right images and the images used to produce https://docs.opencv.org/master/disparity_map.jpg?
Code:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
cap1 = cv.VideoCapture(1)
cap3 = cv.VideoCapture(3)
#imgR = cv.imread('tsuL.png',0)
#imgL = cv.imread('tsuR.png',0)
#stereoTest = cv.StereoBM_create(numDisparities=16, blockSize=15)
#disparityTest = stereoTest.compute(imgL,imgR)
while True:
    # save current camera image
    ret1, frame1 = cap1.read()
    ret3, frame3 = cap3.read()
    # switch from BGR to gray
    grayFrame1 = cv.cvtColor(frame1, cv.COLOR_BGR2GRAY)
    grayFrame3 = cv.cvtColor(frame3, cv.COLOR_BGR2GRAY)
    # disparity params
    stereo = cv.StereoBM_create(numDisparities=128, blockSize=5)
    stereo.setTextureThreshold(600)
    #stereo.setSpeckleRange(4)
    #stereo.setSpeckleWindowSize(9)
    stereo.setMinDisparity(0)
    # calculate both variants (Camera 1 left, Camera 2 right and Camera 1 right, Camera 2 left)
    disparity = stereo.compute(grayFrame1, grayFrame3)
    disparity2 = stereo.compute(grayFrame3, grayFrame1)
    #res = cv.cvtColor(disparity, cv.COLOR_GRAY2BGR)
    # Should have been 65535 from int16 to uint8, but 4095 works..
    div = 65535.0 / 16
    res = cv.convertScaleAbs(disparity, alpha=(255.0 / div))
    res2 = cv.convertScaleAbs(disparity2, alpha=(255.0 / div))
    # Show disparity map
    cv.namedWindow("Disparity")
    cv.moveWindow("Disparity", 450, 20)
    cv.imshow('Disparity', np.hstack([res, res2]))
    keyboard = cv.waitKey(30)
    if keyboard == ord('q') or keyboard == 27:
        break
cap1.release()
cap3.release()
cv.destroyAllWindows()
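As far as I can tell from the OpenCV documentation (so this is my reading, not verified in this setup), StereoBM.compute returns a CV_16S array that holds the disparity multiplied by 16, so with numDisparities=128 the valid values only reach about 2048, not 65535. A small display helper based on that reading, as a sketch rather than the code used above:

import numpy as np

def disparity_to_uint8(disp16, num_disparities):
    """Map StereoBM's CV_16S output (true disparity * 16) to a 0-255 image for display."""
    disp = disp16.astype(np.float32) / 16.0           # real disparity in pixels
    disp = np.clip(disp / num_disparities, 0.0, 1.0)  # scale the searched range to 0..1
    return (disp * 255.0).astype(np.uint8)

With numDisparities=128 this would take the place of the div = 65535.0/16 guess above.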
New code
I took the camera calibration data from BoofCV and copied a few lines from https://stackoverflow.com/a/29151300/13150965 into my code.
            Schwarz      S/W
Xc          311,0        323,3
Yc          257,1        261,9
fx          603,0        593,6
fy          604,3        596,5
skew
radial      1,43e-01     1,1e-01
            -3,03e-01    -2,43e-01
tangential  1,37e-02     1,25e-02
            -9,77e-03    -9,79e-04
These are the values I obtained for each camera (Schwarz and S/W are just the names of the two cameras; they have different cables, which is how I tell them apart).
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
cap1 = cv.VideoCapture(0)
cap3 = cv.VideoCapture(1)
cameraMatrix1 = np.array(
[[603.0, 0, 311.0],
[0, 604.3, 257.1],
[0, 0, 1]]
)
cameraMatrix2 = np.array(
[[593.6, 0, 323.3],
[0, 596.5, 261.9],
[0, 0, 1]]
)
distCoeffs1 = np.array([[0.143, -0.303, 0.0137, -0.00977, 0.0]])
distCoeffs2 = np.array([[0.11, -0.243, 0.0125, -0.000979, 0.0]])
R = np.array(
[[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]]
)
T = np.array(
[[98.0],
[0.0],
[0.0]]
)
# Params from camera calibration
camMats = [cameraMatrix1, cameraMatrix2]
distCoeffs = [distCoeffs1, distCoeffs2]
camSources = [0,1]
for src in camSources:
    distCoeffs[src][0][4] = 0.0  # use only the first 2 values in distCoeffs
xOff = 450
div = 64.0
i = 0
while True:
    # save current camera image
    ret1, frame1 = cap1.read()
    ret3, frame3 = cap3.read()
    w, h = frame1.shape[:2]
    # The rectification process
    newCams = [0, 0]
    roi = [0, 0]
    frames = [frame1, frame3]
    i = i + 1
    if i > 10:
        for src in camSources:
            newCams[src], roi[src] = cv.getOptimalNewCameraMatrix(cameraMatrix=camMats[src],
                                                                  distCoeffs=distCoeffs[src],
                                                                  imageSize=(w, h),
                                                                  alpha=0)
    rectFrames = [0, 0]
    for src in camSources:
        rectFrames[src] = cv.undistort(frames[src], camMats[src], distCoeffs[src])
    R1, R2, P1, P2, Q, roi1, roi2 = cv.stereoRectify(
        cameraMatrix1=camMats[0],
        cameraMatrix2=camMats[1],
        distCoeffs1=distCoeffs1,
        distCoeffs2=distCoeffs2,
        imageSize=(w, h),
        R=R,
        T=T,
        alpha=1
    )
    # show camera images
    cv.namedWindow("RectFrames")
    cv.moveWindow("RectFrames", xOff, 532)
    cv.imshow('RectFrames', np.hstack([rectFrames[0], rectFrames[1]]))
    # switch from BGR to gray
    grayFrame1 = cv.cvtColor(rectFrames[0], cv.COLOR_BGR2GRAY)
    grayFrame3 = cv.cvtColor(rectFrames[1], cv.COLOR_BGR2GRAY)
    # disparity params
    stereo = cv.StereoBM_create(numDisparities=16, blockSize=15)
    # calculate both variants (Camera 1 left, Camera 2 right and Camera 1 right, Camera 2 left)
    disparity = stereo.compute(grayFrame1, grayFrame3)
    disparity2 = stereo.compute(grayFrame3, grayFrame1)
    # Should have been 65535 from int16 to uint8, but 4095 works..
    res = cv.convertScaleAbs(disparity, alpha=(255.0 / (div - 1)))
    res2 = cv.convertScaleAbs(disparity2, alpha=(255.0 / (div - 1)))
    # Show disparity map
    cv.namedWindow("Disparity")
    cv.moveWindow("Disparity", xOff, 20)
    cv.imshow('Disparity', np.hstack([res, res2]))
    keyboard = cv.waitKey(30)
    if keyboard == ord('q') or keyboard == 27:
        break
cap1.release()
cap3.release()
cv.destroyAllWindows()
I can see that the images are undistorted: https://imgur.com/a/SBmv7IY
But I am still doing something wrong.
R and T are made up by hand, since the cameras appear to be parallel (no rotation) and sit 9.8 cm apart from each other.
The R and T values computed by the script from "StereoCalibration in OpenCV on Python" give an identity matrix for R and a zero vector for T. The latter cannot be correct.
With the camera calibration given above I now do get values for R and T, but that does not actually solve my problem. So either there is still an error in that calculation, or the problem has to be solved in some other way.
I rewrote the whole script to see at which step it misbehaves, and to tidy things up. As it stands, the calibration works up to cv2.initUndistortRectifyMap; if I apply the resulting maps to the camera images with cv2.remap, I get black images.
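For reference, this is the call chain I am trying to reproduce, written out on its own as a minimal sketch (my own assumptions: the maps are built from the R1/P1 and R2/P2 returned by cv2.stereoRectify, and the image size is passed as (width, height), while numpy's shape gives (height, width)):

import cv2

def build_rectify_maps(camMat1, disCoe1, camMat2, disCoe2, R, T, w, h):
    """Return one (map_x, map_y) pair per camera for use with cv2.remap."""
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        camMat1, disCoe1, camMat2, disCoe2, (w, h), R, T, alpha=0)
    map1 = cv2.initUndistortRectifyMap(camMat1, disCoe1, R1, P1, (w, h), cv2.CV_32FC1)
    map2 = cv2.initUndistortRectifyMap(camMat2, disCoe2, R2, P2, (w, h), cv2.CV_32FC1)
    return map1, map2

# Intended use per frame (img1/img2 read from the two cameras):
#   (m1x, m1y), (m2x, m2y) = build_rectify_maps(camMat1, disCoe1, camMat2, disCoe2, R, T, w, h)
#   rect1 = cv2.remap(img1, m1x, m1y, cv2.INTER_LINEAR)
#   rect2 = cv2.remap(img2, m2x, m2y, cv2.INTER_LINEAR)

The full rewritten script follows.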
import numpy as np
import cv2
from VideoCapture import Device
from PIL import Image
import glob
print("Importing Images")
image_listR = []
image_listL = []
w = 640
h = 480
for filename in glob.glob('StereoCalibrate\imageR*'): #assuming gif
    im = Image.open(filename).convert('RGB')
    cvim = np.array(im)
    cvim = cvim[:, :, ::-1].copy()
    image_listR.append(cvim)
for filename in glob.glob('StereoCalibrate\imageL*'): #assuming gif
    im = Image.open(filename).convert('RGB')
    cvim = np.array(im)
    cvim = cvim[:, :, ::-1].copy()
    image_listL.append(cvim)
imagesR = len(image_listR)
imagesL = len(image_listL)
print("Found {%d} images for Left camera" % imagesL)
print("Found {%d} images for Right camera" % imagesR)
if imagesR == imagesL:
    print("Number of Images match")
else:
    print("Number of Images do not match")
print("Using loaded images")
board_w = 8
board_h = 5
board_sz = (8,5)
board_n = board_w*board_h
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Arrays to store object points and image points from all the images.
object_points = [] # 3d point in real world space
imagePoints1 = [] # 2d points in image plane.
imagePoints2 = [] # 2d points in image plane.
corners1 = []
corners2 = []
obj = np.zeros((5*8,3), np.float32)
obj[:,:2] = np.mgrid[0:8,0:5].T.reshape(-1,2)
vidStreamL = cv2.VideoCapture(1) # index of your camera
vidStreamR = cv2.VideoCapture(0) # index of your camera
success = 0
found1 = False
found2 = False
i=0
while (success < imagesR*0.9):
    # Loop through the image list
    if i >= imagesL:
        i = 0
    img1 = image_listL[i]
    img2 = image_listR[i]
    # Convert images to grayscale
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    # Check for chessboard pattern
    found1, corners1 = cv2.findChessboardCorners(img1, board_sz)
    found2, corners2 = cv2.findChessboardCorners(img2, board_sz)
    # Draw chessboard in image
    if (found1):
        cv2.cornerSubPix(gray1, corners1, (11, 11), (-1, -1), criteria)
        cv2.drawChessboardCorners(gray1, board_sz, corners1, found1)
    if (found2):
        cv2.cornerSubPix(gray2, corners2, (11, 11), (-1, -1), criteria)
        cv2.drawChessboardCorners(gray2, board_sz, corners2, found2)
    # Show grayscale image with chessboard marker
    cv2.imshow('image1', gray1)
    cv2.imshow('image2', gray2)
    if (found1 != 0 and found2 != 0):
        # Remove successfully detected images from the lists
        image_listL.pop(i)
        image_listR.pop(i)
        imagesL -= 1
        imagePoints1.append(corners1)
        imagePoints2.append(corners2)
        object_points.append(obj)
        success += 1
        print("{", success, "} / {", imagesR*0.9, "} calibration images detected")
        if (success >= imagesR*0.9):
            break
    i = i + 1
    cv2.waitKey(1)
cv2.destroyAllWindows()
print("Calibrating")
cx1 = 327.0
cy1 = 247.9
fx1 = 608.3
fy1 = 607.7
rx1 = 0.129
ry1 = -0.269
tx1 = 0.00382
ty1 = -0.00151
camMat1 = np.array(
[[fx1, 0, cx1],
[0, fy1, cy1],
[0, 0, 1]])
cx2 = 329.8
cy2 = 249.0
fx2 = 601.7
fy2 = 601.1
rx2 = 0.149
ry2 = -0.322
tx2 = 0.0039
ty2 = -0.000837
camMat2 = np.array(
[[fx2, 0, cx2],
[0, fy2, cy2],
[0, 0, 1]])
disCoe1 = np.array([[0.0,0.0,0.0,0.0,0.0]])
disCoe2 = np.array([[0.0,0.0,0.0,0.0,0.0]])
R = np.zeros(shape=(3,3))
T = np.zeros(shape=(3,3))
E = np.zeros(shape=(3,3))
F = np.zeros(shape=(3,3))
retval, camMat1, disCoe1, camMat2, disCoe2, R, T, E, F = cv2.stereoCalibrate(object_points, imagePoints1, imagePoints2, camMat1, disCoe1, camMat2, disCoe2, (w, h), flags = cv2.CALIB_USE_INTRINSIC_GUESS)
print("Done Calibration\n")
R1 = np.zeros(shape=(3,3))
R2 = np.zeros(shape=(3,3))
P1 = np.zeros(shape=(3,4))
P2 = np.zeros(shape=(3,4))
print("T:")
print('\n'.join([' '.join(['{:4}'.format(item) for item in row])
for row in T]))
print("E:")
print('\n'.join([' '.join(['{:4}'.format(item) for item in row])
for row in E]))
print("F:")
print('\n'.join([' '.join(['{:4}'.format(item) for item in row])
for row in F]))
print("R:")
print('\n'.join([' '.join(['{:4}'.format(item) for item in row])
for row in R]))
print("CAM1:")
print('\n'.join([' '.join(['{:4}'.format(item) for item in row])
for row in camMat1]))
print("CAM2:")
print('\n'.join([' '.join(['{:4}'.format(item) for item in row])
for row in camMat2]))
print("DIS1:")
print('\n'.join([' '.join(['{:4}'.format(item) for item in row])
for row in disCoe1]))
print("DIS2:")
print('\n'.join([' '.join(['{:4}'.format(item) for item in row])
for row in disCoe2]))
print("Rectifying cameras")
cv2.stereoRectify(camMat1, disCoe1, camMat2, disCoe2,(w, h), R, T)
#print("Undistort image")
#map1x, map1y = cv2.initUndistortRectifyMap(camMat1, disCoe1, R1, camMat1, (w, h), cv2.CV_32FC1)
#map2x, map2y = cv2.initUndistortRectifyMap(camMat2, disCoe2, R2, camMat2, (w, h), cv2.CV_32FC1)
print("Settings complete\n")
i = 1
j = 1
while(True):
    retL, img1 = vidStreamL.read()
    retR, img2 = vidStreamR.read()
    img1 = cv2.undistort(img1, camMat1, disCoe1)
    img2 = cv2.undistort(img2, camMat2, disCoe2)
    cv2.imshow("ImgCam", np.hstack([img1, img2]))
    #imgU1 = np.zeros((h,w,3), np.uint8)
    #imgU2 = np.zeros((h,w,3), np.uint8)
    #imgU1 = cv2.remap(img1, map1x, map1y, cv2.INTER_LINEAR, imgU1, cv2.BORDER_CONSTANT, 0)
    #imgU2 = cv2.remap(img2, map2x, map2y, cv2.INTER_LINEAR, imgU2, cv2.BORDER_CONSTANT, 0)
    #cv2.imshow("ImageCam", np.hstack([imgU1,imgU2]))
    #imgU1 = cv2.cvtColor(imgU1, cv2.COLOR_BGR2GRAY)
    #imgU2 = cv2.cvtColor(imgU2, cv2.COLOR_BGR2GRAY)
    img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
    disparity = stereo.compute(img1, img2)
    disparit2 = stereo.compute(img2, img1)
    res = cv2.convertScaleAbs(disparity, alpha=(255.0/512.0))
    re2 = cv2.convertScaleAbs(disparit2, alpha=(255.0/512.0))
    cv2.namedWindow("Disparity")
    cv2.imshow('Disparity', np.hstack([res, re2]))
    cv2.waitKey(1)
Output:
Importing Images
Found {90} images for Left camera
Found {90} images for Right camera
Number of Images match
Using loaded images
{ 1 } / { 81.0 } calibration images detected
{ 2 } / { 81.0 } calibration images detected
...
{ 81 } / { 81.0 } calibration images detected
Calibrating
Done Calibration
T:
-3.4549164747952514
-0.15507627811210184
-0.058176064658149625
E:
0.0009397723130476023 0.05762864132890782 -0.15527769659160615
-0.01780225919479015 0.01349075458635349 3.455334047732434
-0.008356129824974412 -3.458367965240172 0.010848591597549652
F:
3.59441069386539e-08 2.1966757991956236e-06 -0.0032581679670958268
-6.799554333159719e-07 5.135279707045414e-07 0.060534502577423176
6.856712419870922e-06 -0.061575681061419536 1.0
R:
0.9988149170858261 -0.0472903202575948 -0.01150595570860947
0.047251107481307925 0.998876350140538 -0.0036564971909233096
0.011665943966274269 0.0031084947887139625 0.9999271188499311
CAM1:
457.8949692862012 0.0 333.02411929079784
0.0 459.45537763505865 239.7961684844508
0.0 0.0 1.0
CAM2:
460.4374113961873 0.0 342.68117331116434
0.0 461.07367491328057 244.62051778708334
0.0 0.0 1.0
DIS1:
0.06391854958023913 -0.2191286122082927 -0.000947168228999159 0.004660285089171575 0.08044318478168837
DIS2:
0.011643796283126952 0.14239490114798584 0.001548517080560543 0.011862118627062223 -0.5191998209097282
Rectifying cameras
Settings complete
Answer (score: 0):
You skipped the calibration and rectification stage, which is the first step of any disparity map algorithm.
The following steps help you get a disparity map:
Note: the raw disparity map will be poor in regions without texture.
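A minimal sketch of that pipeline, assuming the calibration step has already produced cameraMatrix1/2, distCoeffs1/2, R and T (the StereoSGBM parameters below are placeholders, not tuned values; StereoBM works the same way on 8-bit grayscale input):

import cv2
import numpy as np

def disparity_from_calibrated_pair(imgL, imgR, M1, d1, M2, d2, R, T):
    """Rectify a calibrated stereo pair and compute a disparity map."""
    h, w = imgL.shape[:2]
    # 1. Rectification transforms: per-camera rotation (R1/R2) and projection (P1/P2)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(M1, d1, M2, d2, (w, h), R, T, alpha=0)
    map1x, map1y = cv2.initUndistortRectifyMap(M1, d1, R1, P1, (w, h), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(M2, d2, R2, P2, (w, h), cv2.CV_32FC1)
    rectL = cv2.remap(imgL, map1x, map1y, cv2.INTER_LINEAR)
    rectR = cv2.remap(imgR, map2x, map2y, cv2.INTER_LINEAR)
    # 2. Block matching on 8-bit grayscale, rectified images
    grayL = cv2.cvtColor(rectL, cv2.COLOR_BGR2GRAY)
    grayR = cv2.cvtColor(rectR, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp16 = matcher.compute(grayL, grayR)     # CV_16S, disparity * 16
    disp = disp16.astype(np.float32) / 16.0    # disparity in pixels
    # 3. Normalize only for display; keep disp for any depth computation
    vis = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return disp, vis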