I am working on a computer vision project to determine the projective transformation occurring in a football (soccer) image. I detect the vanishing points, obtain 2 point matches, and compute the projection from model field points to image points based on the cross ratio. This works very well for almost all points, but for points that lie behind the camera the projection goes completely wrong. Do you know why, and how I can solve this?
It is based on the article Fast 2D model-to-image registration using vanishing points for sports video analysis, using the projection function given on page 3 of that paper. I also tried computing the result with a different approach (based on intersections), but the result is the same:
There should be a bottom line, but that line is projected way off to the right.
I also tried using Decimal to see whether it was an overflow error with the negative values, but that made little sense to me anyway, since testing on Wolfram Alpha showed the same results.
import numpy as np  # linecalc is the author's own helper module for line equations and intersections

def Projection(vanpointH, vanpointV, pointmatch2, pointmatch1):
    """
    :param vanpointH: horizontal vanishing point (homogeneous coordinates)
    :param vanpointV: vertical vanishing point (homogeneous coordinates)
    :param pointmatch2: second match, an (image point, model point) pair
    :param pointmatch1: first match, an (image point, model point) pair
    :returns: function that takes a single model point as input
    """
    X1 = pointmatch1[1]
    point1field = pointmatch1[0]
    X2 = pointmatch2[1]
    point2field = pointmatch2[0]

    point1VP = linecalc.calcLineEquation([[point1field[0], point1field[1], vanpointH[0], vanpointH[1], 1]])
    point1VP2 = linecalc.calcLineEquation([[point1field[0], point1field[1], vanpointV[0], vanpointV[1], 1]])
    point2VP = linecalc.calcLineEquation([[point2field[0], point2field[1], vanpointV[0], vanpointV[1], 1]])
    point2VP2 = linecalc.calcLineEquation([[point2field[0], point2field[1], vanpointH[0], vanpointH[1], 1]])

    inters = linecalc.calcIntersections([point1VP, point2VP])[0]
    inters2 = linecalc.calcIntersections([point1VP2, point2VP2])[0]

    def lambdaFcnX(X, inters):
        # Solves for where the point to be projected lies, according to the matching,
        # on the line connecting point1 and vanpointH, using only the fact that the
        # cross ratio is the same as in the model field.
        return ((X[0] - X1[0]) * (inters[1] - point1field[1])) / ((X2[0] - X1[0]) * (inters[1] - vanpointH[1]))

    def lambdaFcnX2(X, inters):
        # Same idea, but on the line connecting point2 and vanpointH.
        return ((X[0] - X1[0]) * (point2field[1] - inters[1])) / ((X2[0] - X1[0]) * (point2field[1] - vanpointH[1]))

    def lambdaFcnY(X, v1, v2):
        return ((X[1] - X1[1]) * (v2[0] - v1[0])) / ((X2[1] - X1[1]) * (v2[0] - vanpointV[0]))

    def projection(Point):
        lambdaPointx = lambdaFcnX(Point, inters)
        lambdaPointx2 = lambdaFcnX2(Point, inters2)
        v1 = (np.multiply(-(lambdaPointx / (1 - lambdaPointx)), vanpointH)
              + np.multiply((1 / (1 - lambdaPointx)), point1field))
        v2 = (np.multiply(-(lambdaPointx2 / (1 - lambdaPointx2)), vanpointH)
              + np.multiply((1 / (1 - lambdaPointx2)), inters2))
        lambdaPointy = lambdaFcnY(Point, v1, v2)
        point = np.multiply(-(lambdaPointy / (1 - lambdaPointy)), vanpointV) + np.multiply((1 / (1 - lambdaPointy)), v1)
        return point

    return projection
match1 = ((650,390,1),(2478,615,1))
match2 = ((740,795,1),(2114,1284,1))
vanpoint1 = [-2.07526585e+03, -5.07454315e+02, 1.00000000e+00]
vanpoint2 = [ 5.53599881e+03, -2.08240612e+02, 1.00000000e+00]
model = Projection(vanpoint2,vanpoint1,match2,match1)
model((110,1597))
Suppose the vanishing points are
vanpoint1 = [-2.07526585e+03, -5.07454315e+02, 1.00000000e+00]
vanpoint2 = [ 5.53599881e+03, -2.08240612e+02, 1.00000000e+00]
and the two matches are:
match1 = ((650,390,1),(2478,615,1))
match2 = ((740,795,1),(2114,1284,1))
These work for almost all points in the picture. However, the bottom-left point is completely off and gets the image coordinates
[ 4.36108177e+04, -1.13418258e+04]
This starts happening from (312,1597) onwards; for (312,1597) the result is
[-2.34989787e+08, 6.87155603e+07]
Why does it wander all the way out past 40000? It might make sense if I had computed a camera matrix and the point then lay behind the camera. But since what I am doing is essentially similar to a homography estimation (a 2D mapping), I cannot make sense of it geometrically. My understanding here is definitely limited, though.
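To see geometrically why this can happen even with a pure 2D mapping: a homography sends one line of the model plane (the preimage of the line at infinity) off to infinity, and model points beyond that line come out with a negative homogeneous w; dividing by it flips them to the opposite side of the image. A minimal sketch with a toy homography of my own choosing (not the matrices from the question):

```python
# Sketch (not the question's code): a toy homography whose third row
# w = 0.01*y + 1 vanishes at y = -100, i.e. the line y = -100 maps to infinity.
import numpy as np

H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.01, 1.0]])

def project(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2], p[2]  # inhomogeneous point, and the w it was divided by

img1, w1 = project(H, (0.0, 50.0))    # y > -100: w positive, normal side
img2, w2 = project(H, (0.0, -150.0))  # y < -100: w negative, beyond the vanishing line

print(w1, img1)  # w > 0
print(w2, img2)  # w < 0: the model y was -150, yet the image y comes out positive
```

Dividing by the negative w is exactly the "flip to the other side" seen in the question.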
Edit: could this be related to the topology of the projective plane and the fact that it is non-orientable (it wraps around)? My knowledge of topology is not what it should be...
Answer (score: 0)
OK, figured it out. This may not make much sense to anyone else, but it does to me (and in case someone runs into the same problem...).
Geometrically, I realized the following when using the equivalent approach, in which v1 and v2 are calculated from the different vanishing points and I project based on the intersections of the lines connecting the points with the vanishing points. At some point these lines become parallel, and beyond that the intersection actually lies completely on the other side. Which makes sense; it just took me a while to realize that it does.
In the code above, the last cross ratio, lambdaPointy, passes through 1 (and so do the ratios before it). The same thing happens there, but it is easiest to visualize in terms of the intersections.
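The blow-up is easy to see in isolation: every formula in the code divides by (1 - lambda), and that factor grows without bound and changes sign as the cross ratio passes through 1. A tiny illustration with hypothetical lambda values (not ones computed from the actual field):

```python
# The 1/(1 - lam) weight used throughout the projection formulas: it explodes
# as lam approaches 1 and flips sign on the other side, which is exactly the
# jump of the projected point to "the other side" described above.
for lam in (0.9, 0.99, 1.01, 1.1):
    print(lam, 1.0 / (1.0 - lam))  # positive below 1, negative above 1
```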
I also figured out how to work around it, in case anyone else ever tries code like this.
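The answer does not spell the workaround out, but one simple guard (my own suggestion, not necessarily what the author did) is to do the division explicitly in homogeneous coordinates and skip any point whose homogeneous scale comes out non-positive, instead of letting the sign flip through silently:

```python
# Hypothetical guard, assuming the projection is expressed as a 3x3 homography H.
import numpy as np

def safe_project(H, pt, eps=1e-9):
    """Project a 2D point with H, returning None for points that would
    wrap around through the line at infinity (i.e. land "behind the camera")."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    if p[2] <= eps:   # non-positive w: the point would flip to the other side
        return None   # let the caller clip or drop it instead
    return p[:2] / p[2]
```

The same sign test can be applied to the (1 - lambda) denominators in the code above before dividing.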