I am implementing object localization in Python. One problem I have run into is that when I resize the observable region while performing actions, I don't know how to change the ground truth box along with it. What happens is that the ground truth box is not resized to fit the new frame accurately, so I cannot localize correctly. The function I currently use to format the next state is the following:
def next_state(init_input, b, b_prime, g, a):
"""
Returns the observable region of the next state.
Formats the next state's observable region, defined
by b_prime, to be of dimension (224, 224, 3). Adding 16
additional pixels of context around the original bounding box.
The ground truth box must be reformatted according to the
new observable region.
:param init_input:
The initial input volume of the current episode.
:param b:
The current state's bounding box.
:param b_prime:
The subsequent state's bounding box.
:param g:
The ground truth box of the target object.
:param a:
The action taken by the agent at the current step.
"""
# Determine the pixel coordinates of the observable region for the following state
context_pixels = 16
x1 = max(b_prime[0] - context_pixels, 0)
y1 = max(b_prime[1] - context_pixels, 0)
x2 = min(b_prime[2] + context_pixels, IMG_SIZE)
y2 = min(b_prime[3] + context_pixels, IMG_SIZE)
# Determine observable region
observable_region = cv2.resize(init_input[y1:y2, x1:x2], (224, 224))
# Difference between crop region and image dimensions
x1_diff = x1
y1_diff = y1
x2_diff = IMG_SIZE - x2
y2_diff = IMG_SIZE - y2
# Resize ground truth box
g[0] = int(g[0] - 0.5 * x1_diff) # x1
g[1] = int(g[1] - 0.5 * y1_diff) # y1
g[2] = int(g[2] + 0.5 * x2_diff) # x2
g[3] = int(g[3] + 0.5 * y2_diff) # y2
return observable_region, g
I can't seem to get the change in dimensions right. I followed this post to resize the bounding box initially, but that solution does not seem to work in this case.
The bounding box / ground truth box format is b = [x1, y1, x2, y2]. init_input has dimensions (224, 224, 3), IMG_SIZE = 224, and context_pixels = 16.
Here is another example:
It looks like the ground truth box has the right size, but its position is wrong.
I have updated the code above. A scale factor appears to be the wrong approach to this problem; by simply adding/subtracting the number of pixels to be enlarged, I have gotten a lot closer. I believe the remaining error is related to interpolation, so if anyone could help make this exact it would be a great help.
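For reference, this is a minimal sketch of the interpolation modes I have been comparing (the flags are standard OpenCV ones, the dummy crop is only for illustration, and whether interpolation is really the cause is just my guess):

import cv2
import numpy as np

# Dummy crop, purely for illustration
crop = np.zeros((120, 120, 3), dtype=np.uint8)

# INTER_AREA is generally recommended for shrinking an image,
# INTER_LINEAR / INTER_CUBIC for enlarging it
region_area = cv2.resize(crop, (224, 224), interpolation=cv2.INTER_AREA)
region_linear = cv2.resize(crop, (224, 224), interpolation=cv2.INTER_LINEAR)
region_cubic = cv2.resize(crop, (224, 224), interpolation=cv2.INTER_CUBIC)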
New example:
A solution has been provided.
Answer:
My problem was solved in this post by a user named @lenik.
Before applying the scale factor to the pixel coordinates of the ground truth box g, you must first subtract the zero offset so that x1, y1 becomes 0, 0. This makes the scaling work correctly.
The coordinates of any arbitrary point (x, y) after the transformation can therefore be calculated as:
x_new = (x - x1) * IMG_SIZE / (x2 - x1)
y_new = (y - y1) * IMG_SIZE / (y2 - y1)
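For example (with made-up numbers), for a crop spanning x1 = 44 to x2 = 176 and a ground truth x-coordinate of 80, the new coordinate works out to (80 - 44) * 224 / (176 - 44) ≈ 61.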
In code, and as it applies to my problem, the solution is as follows:
def next_state(init_input, b_prime, g):
"""
Returns the observable region of the next state.
Formats the next state's observable region, defined
by b_prime, to be of dimension (224, 224, 3). Adding 16
additional pixels of context around the original bounding box.
The ground truth box must be reformatted according to the
new observable region.
:param init_input:
The initial input volume of the current episode.
:param b_prime:
The subsequent state's bounding box.
:param g:
The ground truth box of the target object.
"""
# Determine the pixel coordinates of the observable region for the following state
context_pixels = 16
x1 = max(b_prime[0] - context_pixels, 0)
y1 = max(b_prime[1] - context_pixels, 0)
x2 = min(b_prime[2] + context_pixels, IMG_SIZE)
y2 = min(b_prime[3] + context_pixels, IMG_SIZE)
# Determine observable region
observable_region = cv2.resize(init_input[y1:y2, x1:x2], (224, 224), interpolation=cv2.INTER_AREA)
# Resize ground truth box
g[0] = int((g[0] - x1) * IMG_SIZE / (x2 - x1)) # x1
g[1] = int((g[1] - y1) * IMG_SIZE / (y2 - y1)) # y1
g[2] = int((g[2] - x1) * IMG_SIZE / (x2 - x1)) # x2
g[3] = int((g[3] - y1) * IMG_SIZE / (y2 - y1)) # y2
return observable_region, g
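To sanity-check the mapping, here is a minimal usage sketch (the dummy frame and the box values are made up purely for illustration; note that g is modified in place):

import numpy as np

IMG_SIZE = 224

# Dummy 224x224 RGB frame and made-up boxes
init_input = np.zeros((IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8)
b_prime = [60, 60, 160, 160]  # next state's bounding box
g = [80, 90, 140, 150]        # ground truth box in image coordinates

region, g_new = next_state(init_input, b_prime, g)
print(region.shape)  # (224, 224, 3)
print(g_new)         # [61, 78, 162, 179] -- the box in crop coordinates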