I'm starting from the pose estimation tflite model to get the keypoints of a person.
https://www.tensorflow.org/lite/models/pose_estimation/overview
I begin by feeding in a single image of one person and invoking the model:
import cv2 as cv
import numpy as np
import tensorflow as tf

img = cv.imread('photos\\standing\\3.jpg')
img = tf.reshape(tf.image.resize(img, [257, 257]), [1, 257, 257, 3])
model = tf.lite.Interpreter('models\\posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite')
model.allocate_tensors()
input_details = model.get_input_details()
output_details = model.get_output_details()
floating_model = input_details[0]['dtype'] == np.float32
if floating_model:
    img = (np.float32(img) - 127.5) / 127.5
model.set_tensor(input_details[0]['index'], img)
model.invoke()
output_data = model.get_tensor(output_details[0]['index'])   # heatmaps
offset_data = model.get_tensor(output_details[1]['index'])   # offsets
results = np.squeeze(output_data)
offsets_results = np.squeeze(offset_data)
print("output shape: {}".format(output_data.shape))
np.savez('sample3.npz', results, offsets_results)
But I'm struggling to parse the output correctly to get the coordinates/confidence of each body part. Does anyone have a Python example for interpreting the results of this model? (For example: using them to map the keypoints back onto the original image.)
My code (a snippet from the class that actually takes the np arrays straight from the model output):
def get_keypoints(self, data):
    height, width, num_keypoints = data.shape
    keypoints = []
    for keypoint in range(0, num_keypoints):
        maxval = data[0][0][keypoint]
        maxrow = 0
        maxcol = 0
        for row in range(0, width):
            for col in range(0, height):
                if data[row][col][keypoint] > maxval:
                    maxrow = row
                    maxcol = col
                    maxval = data[row][col][keypoint]
        keypoints.append(KeyPoint(keypoint, maxrow, maxcol, maxval))
    # keypoints = [Keypoint(x,y,z) for x,y,z in ]
    return keypoints
def get_image_coordinates_from_keypoints(self, offsets):
    height, width, depth = (257, 257, 3)
    # [(x,y,confidence)]
    coords = [{'point': k.body_part,
               'location': (k.x / (width - 1) * width + offsets[k.y][k.x][k.index],
                            k.y / (height - 1) * height + offsets[k.y][k.x][k.index]),
               'confidence': k.confidence}
              for k in self.keypoints]
    return coords
Some of the coordinates here come out negative, which can't be right. Where is my error?
Answer (score: 6)
import numpy as np
For a pose estimation model that outputs heatmaps and offsets, you can get the points you want as follows:
Apply a sigmoid to the heatmaps:
scores = sigmoid(heatmaps)
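Note that sigmoid is not defined in this snippet; a minimal NumPy version (my own helper, assuming heatmaps is a float array) could be:

import numpy as np

def sigmoid(x):
    # element-wise logistic function, maps raw heatmap values to (0, 1) scores
    return 1.0 / (1.0 + np.exp(-x))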
Each keypoint of the pose is represented by a 2-D matrix, and the position of the maximum value in that matrix is where the model believes the point is located in the input image. Use a 2-D argmax to get the x and y indices of that value in each matrix; the value itself is the confidence value:
x, y = np.unravel_index(np.argmax(scores[:, :, keypointindex]),
                        scores[:, :, keypointindex].shape)
confidences = scores[x,y,keypointindex]
Use x, y to look up the corresponding offset vector, which is needed to compute the keypoint's final position:
offset_vector = (offsets[y,x,keypointindex], offsets[y,x,num_keypoints+keypointindex])
Once you have the keypoint coordinate and the offset, you can calculate the final position of the keypoint with:
image_positions = np.add(np.array(heatmap_positions) * output_stride, offset_vectors)
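As a quick worked example (made-up numbers, just to show the arithmetic with a stride of 32):

heatmap_positions = [(4, 3)]
offset_vectors = [(5.2, -3.1)]
np.add(np.array(heatmap_positions) * 32, offset_vectors)   # -> [[133.2, 92.9]]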
See this to figure out how to get the output stride, if you don't already have it. The tflite pose estimation model has an output stride of 32.
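If you want to sanity-check the stride from the tensor shapes instead, one way (my assumption, using results from the question's snippet, which has shape (9, 9, 17) for this model) is:

input_size = 257
heatmap_size = results.shape[0]                        # 9 for this model
output_stride = (input_size - 1) // (heatmap_size - 1)
print(output_stride)                                   # 32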
Here is a function that takes the output of this pose estimation model and returns the keypoints. It does not include the KeyPoint class (that is shown further down):
def get_keypoints(self, heatmaps, offsets, output_stride=32):
    scores = sigmoid(heatmaps)
    num_keypoints = scores.shape[2]
    heatmap_positions = []
    offset_vectors = []
    confidences = []
    for ki in range(0, num_keypoints):
        # indices of the maximum score in this keypoint's heatmap
        x, y = np.unravel_index(np.argmax(scores[:, :, ki]), scores[:, :, ki].shape)
        confidences.append(scores[x, y, ki])
        # the offset tensor has 2*num_keypoints channels: one pair of components per keypoint
        offset_vector = (offsets[y, x, ki], offsets[y, x, num_keypoints + ki])
        heatmap_positions.append((x, y))
        offset_vectors.append(offset_vector)
    # scale heatmap grid positions to input-image pixels and apply the offsets
    image_positions = np.add(np.array(heatmap_positions) * output_stride, offset_vectors)
    keypoints = [KeyPoint(i, pos, confidences[i]) for i, pos in enumerate(image_positions)]
    return keypoints
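A minimal way to tie this back to the question's arrays (my assumption: the function lives on some class, called PoseParser here purely for illustration, and results / offsets_results are the squeezed heatmap and offset arrays from the question):

parser = PoseParser()   # hypothetical class that contains get_keypoints()
keypoints = parser.get_keypoints(results, offsets_results)
for kp in keypoints:
    print(kp.to_string())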
The KeyPoint class:
PARTS = {
    0: 'NOSE',
    1: 'LEFT_EYE',
    2: 'RIGHT_EYE',
    3: 'LEFT_EAR',
    4: 'RIGHT_EAR',
    5: 'LEFT_SHOULDER',
    6: 'RIGHT_SHOULDER',
    7: 'LEFT_ELBOW',
    8: 'RIGHT_ELBOW',
    9: 'LEFT_WRIST',
    10: 'RIGHT_WRIST',
    11: 'LEFT_HIP',
    12: 'RIGHT_HIP',
    13: 'LEFT_KNEE',
    14: 'RIGHT_KNEE',
    15: 'LEFT_ANKLE',
    16: 'RIGHT_ANKLE'
}
class KeyPoint():
    def __init__(self, index, pos, v):
        x, y = pos
        self.x = x
        self.y = y
        self.index = index
        self.body_part = PARTS.get(index)
        self.confidence = v

    def point(self):
        return int(self.y), int(self.x)

    def to_string(self):
        return 'part: {} location: {} confidence: {}'.format(
            self.body_part, (self.x, self.y), self.confidence)
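To map the keypoints back onto the original photo, as the question asks, here is a rough sketch (my assumptions: the keypoint coordinates are in 257x257 model-input pixels, so they only need rescaling to the original image size; depending on how you read x/y out of the code above you may need to swap the axes):

import cv2 as cv

orig = cv.imread('photos\\standing\\3.jpg')          # same image as in the question
orig_h, orig_w = orig.shape[:2]
scale_x = orig_w / 257.0                             # model input was resized to 257x257
scale_y = orig_h / 257.0

for kp in keypoints:
    px = int(kp.x * scale_x)
    py = int(kp.y * scale_y)
    cv.circle(orig, (px, py), 5, (0, 255, 0), -1)    # swap px/py if the points look transposed

cv.imwrite('keypoints.jpg', orig)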