Object Detection API "Cannot handle this data type" error with Kinect2

Time: 2018-08-06 18:52:09

Tags: opencv tensorflow python-imaging-library kinect object-detection-api

I installed the Object Detection API and tried to connect it to a Kinect2 using pylibfreenect2 for Python 3.6. But when I run this code:

with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        while True:
            frames = listener.waitForNewFrame()
            color = frames["color"].asarray(np.uint8)
            color = cv2.cvtColor(color, cv2.COLOR_BGRA2BGR)
            image_np_expanded = np.expand_dims(color, axis=0)

            # NOTE for visualization:
            # cv2.imshow without OpenGL backend seems to be quite slow to draw all
            # things below. Try commenting out some imshow if you don't have a fast
            # visualization backend.

            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            # Each box represents a part of the image where a particular object was detected.
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            # Each score represents the level of confidence for each of the objects.
            # The score is shown on the result image, together with the class label.
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            # Visualization of the results of a detection.
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np_expanded, np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores), category_index,
                use_normalized_coordinates=True, line_thickness=8)

            cv2.imshow('object detection', color)
            listener.release(frames)

            key = cv2.waitKey(delay=1)
            if key == ord('q'):
                break

        device.stop()
        device.close()
        sys.exit(0)
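For what it's worth, the traceback below bottoms out in PIL's `Image.fromarray`, which only maps 2-D and 3-D arrays to an image mode; a batched 4-D array of shape `(1, H, W, 3)` produces exactly the `KeyError`/`TypeError` pair seen here. A minimal sketch of that behavior, using only NumPy and Pillow (the small dimensions are made up, the mechanism is the same):

```python
import numpy as np
from PIL import Image

# A batched uint8 frame, shaped like image_np_expanded: (1, H, W, 3)
batched = np.zeros((1, 8, 8, 3), dtype=np.uint8)

try:
    Image.fromarray(batched)  # 4-D: no entry in PIL's typemap -> TypeError
    failed = False
except TypeError:
    failed = True
print(failed)  # True

# Dropping the batch axis gives a plain H x W x 3 array that PIL accepts
img = Image.fromarray(np.squeeze(batched, axis=0))
print(img.size)  # (8, 8)
```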

I get this error:

KeyError                                  Traceback (most recent call last)
/anaconda3/envs/MaskRCNN/lib/python3.6/site-packages/PIL/Image.py in fromarray(obj, mode)
   2427             typekey = (1, 1) + shape[2:], arr['typestr']
-> 2428             mode, rawmode = _fromarray_typemap[typekey]
   2429         except KeyError:

KeyError: ((1, 1, 1920, 3), '|u1')

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-11-792d57e53fdc> in <module>()
     26             vis_util.visualize_boxes_and_labels_on_image_array(image_np_expanded, np.squeeze(boxes), 
     27                                                                np.squeeze(classes).astype(np.int32),
---> 28             np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=8)
     29 
     30             cv2.imshow('object detection', color)

~/Desktop/diploma/object_detection/utils/visualization_utils.py in visualize_boxes_and_labels_on_image_array(image, boxes, classes, scores, category_index, instance_masks, keypoints, use_normalized_coordinates, max_boxes_to_draw, min_score_thresh, agnostic_mode, line_thickness)
    416         thickness=line_thickness,
    417         display_str_list=box_to_display_str_map[box],
--> 418         use_normalized_coordinates=use_normalized_coordinates)
    419     if keypoints is not None:
    420       draw_keypoints_on_image_array(

~/Desktop/diploma/object_detection/utils/visualization_utils.py in draw_bounding_box_on_image_array(image, ymin, xmin, ymax, xmax, color, thickness, display_str_list, use_normalized_coordinates)
    113       coordinates as absolute.
    114   """
--> 115   image_pil = Image.fromarray(np.uint8(image)).convert('RGB')
    116   draw_bounding_box_on_image(image_pil, ymin, xmin, ymax, xmax, color,
    117                              thickness, display_str_list,

/anaconda3/envs/MaskRCNN/lib/python3.6/site-packages/PIL/Image.py in fromarray(obj, mode)
   2429         except KeyError:
   2430             # print(typekey)
-> 2431             raise TypeError("Cannot handle this data type")
   2432     else:
   2433         rawmode = mode

TypeError: Cannot handle this data type

The Object Detection API works fine with a local camera. I also tried converting the frame to a PIL image, for example:

from PIL import *
import PIL.Image
color = PIL.Image.fromarray(np.uint8(color))

What could be the problem?

So I finally got it running with:

color = frames["color"].asarray(np.uint8)
color = cv2.cvtColor(color, cv2.COLOR_BGRA2BGR)
color = cv2.cvtColor(color, cv2.COLOR_BGR2RGB)
color = cv2.resize(color, (1280, 720))
image_np_expanded = np.expand_dims(color, axis=0)
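The shape bookkeeping in the snippet above is worth spelling out: the model wants a 4-D batch, while the PIL-based drawing helper wants a plain 3-D image, so the batch axis added by `np.expand_dims` has to be squeezed back out before visualizing. A small sketch (the dimensions are assumed, not read from the Kinect):

```python
import numpy as np

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # an RGB frame after resize
batch = np.expand_dims(frame, axis=0)             # add batch axis for the model

print(batch.shape)      # (1, 720, 1280, 3) -- valid sess.run input
unbatched = np.squeeze(batch, axis=0)
print(unbatched.shape)  # (720, 1280, 3) -- what Image.fromarray accepts
```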

In Jupyter I can see that the cell is still running (it shows In [*]), but no window with the camera stream appears, and the command prompt keeps printing:

[Debug] [DepthPacketStreamParser] skipping depth packet
[Debug] [RgbPacketStreamParser] skipping rgb packet!
[Debug] [DepthPacketStreamParser] skipping depth packet
[Info] [DepthPacketStreamParser] 13 packets were lost
[Info] [OpenGLDepthPacketProcessor] avg. time: 26.0384ms -> ~38.4048Hz
[Info] [VTRgbPacketProcessor] avg. time: 20.3174ms -> ~49.2189Hz

It does not look important, since I am not using the depth data in my code. But it is strange.

0 Answers:

There are no answers yet