I am building an application for gaze estimation and use the UnityEyes tool to generate labeled synthetic eye images. Each .jpg image file is saved together with an associated .json metadata file that contains the following:
{
"interior_margin_2d": [ # Screen-space interior margin landmarks
"(202.7042, 186.4788, 9.5512)", … # x, y, z (can ignore z)
],
"caruncle_2d": [ # Screen-space eye-corner (caruncle) landmarks
"(191.9471, 175.4047, 9.6683)", … # x, y, z (can ignore z)
],
"iris_2d": [ # Screen-space iris boundary landmarks
"(213.3930, 195.4109, 9.1951)", … # x, y, z (can ignore z)
],
"eye_details": {
"look_vec": "(-0.3633, 0.0937, -0.9270, 0.0000)", # Gaze vector in camera-space (x, y, z)
"pupil_size": "0.05249219", # Pupil size (arbitrary units)
"iris_size": "0.9090334", # Iris size (arbitrary units)
"iris_texture": "eyeball_amber" # Iris color
},
"lighting_details": … # Illumination details
"eye_region_details": … # Shape PCA details
"head_pose": "(351.2107, 161.3652, 0.0000)" # Euler angle rotation from camera to world
}
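For reference, this is roughly how I load and parse one of these metadata files (a minimal sketch in Python; the file name "eye_1.json" is only a placeholder, and the helper parse_vec is something I wrote because the coordinates are stored as strings):

```python
import ast
import json
import numpy as np

# Minimal sketch: load one UnityEyes metadata file ("eye_1.json" is a placeholder name).
with open("eye_1.json") as f:
    meta = json.load(f)

def parse_vec(s):
    # Entries are stored as strings like "(202.7042, 186.4788, 9.5512)",
    # so parse them into numeric arrays before use.
    return np.array(ast.literal_eval(s))

interior_margin = np.array([parse_vec(p) for p in meta["interior_margin_2d"]])
caruncle = np.array([parse_vec(p) for p in meta["caruncle_2d"]])
iris = np.array([parse_vec(p) for p in meta["iris_2d"]])

look_vec = parse_vec(meta["eye_details"]["look_vec"])[:3]  # camera-space gaze direction (drop the 4th component)
head_pose = parse_vec(meta["head_pose"])                   # Euler angles (degrees), camera to world per the docs
```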
My question is: how do I take the gaze vector given in camera coordinates and express it relative to the head pose (i.e., make it independent of the head pose)?
Here is the UnityEyes website: https://www.cl.cam.ac.uk/research/rainbow/projects/unityeyes/tutorial.html
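To make the question concrete, this is the kind of transform I have in mind (a minimal sketch only; I am assuming head_pose holds Euler angles in degrees and that applying the inverse of the head rotation to look_vec yields a head-relative gaze direction, and the axis order "xyz" is a guess — these assumptions are exactly what I am unsure about):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_relative_to_head(look_vec, head_pose_deg):
    """Rotate the camera-space gaze vector by the inverse of the head rotation.

    look_vec      : (3,) camera-space gaze direction from eye_details["look_vec"]
    head_pose_deg : (3,) Euler angles from "head_pose", assumed to be in degrees;
                    the axis order and sign conventions are assumptions.
    """
    # Build the head rotation from the Euler angles ("xyz" order is an assumption).
    head_rot = R.from_euler("xyz", head_pose_deg, degrees=True)
    # Applying the inverse rotation should express the gaze in the head's frame.
    gaze_head = head_rot.inv().apply(look_vec)
    return gaze_head / np.linalg.norm(gaze_head)

# Example with the values from the metadata shown above
look_vec = np.array([-0.3633, 0.0937, -0.9270])
head_pose = np.array([351.2107, 161.3652, 0.0000])
print(gaze_relative_to_head(look_vec, head_pose))
```

Is this the right way to use the head_pose field, or does UnityEyes expect a different convention?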