For my implementation of a 3D camera calibration method, I can't seem to recover the correct rotation matrix. I think I have narrowed it down to a coding problem in the way I generate the A matrix used to solve for v in the homogeneous equation A*v = 0,
where each row of A is [xi*Xi  xi*Yi  xi*Zi  xi  -yi*Xi  -yi*Yi  -yi*Zi  -yi] for i = 1 to N, with N >= 7,
and v = (r21  r22  r23  Ty  a*r11  a*r12  a*r13  a*Tx) is the vector of unknowns I am trying to recover (a is the aspect ratio).
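Written out, the i-th row of this system is the single equation

xi*(r21*Xi + r22*Yi + r23*Zi + Ty) - a*yi*(r11*Xi + r12*Yi + r13*Zi + Tx) = 0

which is what I expect to vanish when the true parameter values are plugged in.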
Since this is a simulation, I have access to the actual values. When I evaluate A*v with these known values of v, I do not get A*v = 0, so I suspect my problem lies here.
Here is the relevant code for generating A. The matrix A is constructed by first taking each point in the world coordinate system (the matrix WC), finding its corresponding image coordinates (x, y), and then combining each matched pair of world and image coordinates into a row of A:
cnt = 1;
% plane : Xw = 1
for i = 0.2 : 0.2 : 0.8
    for j = 0.2 : 0.2 : 0.8
        Pw(cnt,:) = [1 i j];
        cnt = cnt + 1;
    end
end
% plane : Yw = 1
for i = 0.2 : 0.2 : 0.8
    for j = 0.2 : 0.2 : 0.8
        Pw(cnt,:) = [i 1 j];
        cnt = cnt + 1;
    end
end
% Note: For brevity, I've omitted the code generating this,
% and provide only the values. The full code can be viewed in the last
% section at the bottom of this post.
R = [  0.7600 -0.6428  0;
      -0.3214 -0.3830 -0.8660;
      -0.5567 -0.6634 -0.5000 ];
T = [0 0 4]';
Ox = 256;
Oy = 256;
Fx = 0.016/(0.0088/512.0);
Fy = 0.016/(0.0066/512.0);
% asr is the aspect ratio
asr = Fx/Fy;
% Generate Pc:
for i = 1:1:size(Pw,1)
    Pc(i,:) = (R*Pw(i,:)' + T)';
end
X = Pc(:,1);
Y = Pc(:,2);
Z = Pc(:,3);
% image-pixel coordinates
p = [ (Ox-Fx.*(X./Z)) (Oy-Fy.*(Y./Z)) ];
x = p(:, 1);
y = p(:, 2);
% pad world coords. with a column of 1s [ X Y Z 1 ]
WC = [Pw ones(size(Pw,1), 1)];
A = [WC.*x -WC.*y];
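For reference, here is how I spot-check a single row of A against the expanded per-point equation above (a quick sketch that only uses the variables already defined; k is an arbitrary point index):

k = 1;                              % any of the correspondences
Ak = [ x(k)*Pw(k,1) x(k)*Pw(k,2) x(k)*Pw(k,3) x(k) ...
      -y(k)*Pw(k,1) -y(k)*Pw(k,2) -y(k)*Pw(k,3) -y(k) ];
disp(Ak - A(k,:));                  % all zeros, by construction of A
% Ak*v then expands to
% x_k*(r21*Xw + r22*Yw + r23*Zw + Ty) - a*y_k*(r11*Xw + r12*Yw + r13*Zw + Tx)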
When I create the v_real variable below:
v_real = [ R(2,:) T(2) asr*R(1,:) asr*T(1) ]';
and multiply it by my A:
A*v_real
I do not get 0 as the answer. Have I written the code incorrectly? When I run the following, I do get the expected 0:
[U,S,V] = svd(A);
v_bar = V(:,8);
A*v_bar;
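Since the SVD solution is only determined up to scale and sign, a more direct comparison is between the residual norms and the normalized vectors (a small sketch using only the variables above):

norm(A*v_real)                 % should be near 0 if A is built correctly
norm(A*v_bar)                  % equals the smallest singular value of A
v_unit = v_real/norm(v_real);  % v is only recoverable up to scale ...
s = sign(v_unit'*v_bar);       % ... and sign, so align before comparing
[v_unit s*v_bar]               % the two columns should match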
For some visualization, here is the result of my original points (red) plotted against the recovered points (blue). The recovered image is close, but since this is a simulation I feel the error should be essentially zero, and certainly not as large as what I am seeing here.
Included below (strictly for further reference) is the full code implementation, which recovers all of the parameters Fx, Fy, R and T. All of them come out noticeably different from the actual values.
%% PART 0: 3D calibration pattern:
% Pw holds 32 points on two surfaces (Xw = 1 and Yw = 1) of a cube
% Values are measured in meters.
%
% There are 4x4 uniformly distributed points on each surface.
cnt = 1;
% plane : Xw = 1
for i = 0.2 : 0.2 : 0.8
    for j = 0.2 : 0.2 : 0.8
        Pw(cnt,:) = [1 i j];
        cnt = cnt + 1;
    end
end
% plane : Yw = 1
for i = 0.2 : 0.2 : 0.8
    for j = 0.2 : 0.2 : 0.8
        Pw(cnt,:) = [i 1 j];
        cnt = cnt + 1;
    end
end
%% PART 1: Virtual Camera Model
% Extrinsic parameters : R = Ralpha*Rbeta*Rgamma
% Camera Position:
CameraRotation = [ 40 0 -120 ];
gamma = CameraRotation(1)*pi/180.0;
beta = CameraRotation(2)*pi/180.0;
alpha = CameraRotation(3)*pi/180.0;
Rgamma = [ [ cos(gamma) -sin(gamma) 0 ];
           [ sin(gamma)  cos(gamma) 0 ];
           [ 0           0          1 ]; ];
Rbeta  = [ [ cos(beta)  0 -sin(beta) ];
           [ 0          1  0         ];
           [ sin(beta)  0  cos(beta) ]; ];
Ralpha = [ [ 1 0           0          ];
           [ 0 cos(alpha) -sin(alpha) ];
           [ 0 sin(alpha)  cos(alpha) ]; ];
R = Ralpha*Rbeta*Rgamma;
T = [0 0 4]'; % distance from camera.
% Intrinsic parameters
f = 0.016;
Ox = 256;
Oy = 256;
Sx = 0.0088/512.0;
Sy = 0.0066/512.0;
Fx = f/Sx;
Fy = f/Sy;
% asr is the aspect ratio
asr = Fx/Fy;
% Generate Pc:
for i = 1:1:size(Pw,1)
    Pc(i,:) = (R*Pw(i,:)' + T)';
end
X = Pc(:,1);
Y = Pc(:,2);
Z = Pc(:,3);
% image-pixel coordinates
p = [ (Ox-Fx.*(X./Z)) (Oy-Fy.*(Y./Z)) ];
x = p(:, 1);
y = p(:, 2);
%% Part 2: Direct Calibration Method:
% DIRECT PARAMETER METHOD:
% 1. Estimate image center & aspect ratio
% Note: Assumed the below variables are known:
Ox;
Oy;
asr;
% 2. Measure N 3d (world) coordinates
WC = [Pw ones(size(Pw,1), 1)];
% 3. Locate their corresponding image coordinates using edge detection or the
% Hough transform
% NOTE: Again, assume these variables are known:
p; x; y;
% 4. Build matrix A of the homogeneous system A*v = 0
A = [WC.*x -WC.*y];
% 5. Compute SVD of A, solution v
[U,S,V] = svd(A); % 10/30 - appears to be working...
v_bar = V(8,:); % NOTE: due to some confusion, I've tried both
% v_bar = V(:,8); % assignments given here. Neither works.
% 6. Determine Aspect ratio alpha and scale |gamma|
r_gamma = sqrt(v_bar(1)^2 + v_bar(2)^2 + v_bar(3)^2);
r_alpha = sqrt(v_bar(5)^2 + v_bar(6)^2 + v_bar(7)^2)/r_gamma;
% 7. Recover the first two rows of R and the first two components of T up
% to the same sign
r_Ty = v_bar(4)/r_gamma;
r_Tx = v_bar(8)/(r_gamma*r_alpha);
r_R = zeros(3,3);
r_R(2,:) = v_bar(1:3)/r_gamma;
r_R(1,:) = v_bar(5:7)/(r_alpha*r_gamma);
% 8. Determine sign s of gamma by checking the projection equation
r_X = r_R(1,1:3) * WC(1,1:3)' + r_Tx;
if sign(r_X) == sign(p(1,1))
    r_R = -r_R;
end
% 9. Compute the 3rd row of R by vector product, enforce orthogonality
% constraints by SVD
r_R(3,:) = cross(r_R(1,:), r_R(2,:));
if r_R'*r_R ~= eye(3)
    [U,S,V] = svd(r_R);
    r_R = U*eye(3)*V';
end
% 10. Solve Tz and fx using least square and SVD, then fy = fx/alpha
N = size(WC,1); % number of calibration points
a = zeros(N,2);
for i = 1:N
    a(i,:) = [ x(i) r_R(1,:)*WC(i,1:3)'+r_Tx ];
end
b = -x.*WC(:,1:3)*r_R(3,:)';
sol = inv(a'*a)*a'*b;
r_Tz = sol(1);
r_fx = sol(2);
r_fy = r_fx/r_alpha;
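For completeness, a quick way to compare the recovered parameters against the simulated ground truth (a small sketch using only the variables defined above):

disp(norm(r_R - R, 'fro'));        % rotation error (Frobenius norm)
disp([r_Tx r_Ty r_Tz; T']);        % recovered vs. true translation
disp([r_fx r_fy; Fx Fy]);          % recovered vs. true focal lengths (pixels)
disp([r_alpha asr]);               % recovered vs. true aspect ratio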