import jetson.inference
import jetson.utils
import cv2
#import argparse
import sys
import numpy as np
width=720
height=480
vs=cv2.VideoCapture('b.m4v') #video input file
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5) #loading the model
#camera = jetson.utils.gstCamera(1280, 720, "/dev/video0") #using V4L2
display = jetson.utils.glDisplay() #initiating a display window
while display.IsOpen():
    ret, frame = vs.read() #reading a frame from the video file
    if not ret:
        break #stop when the video ends or a frame cannot be read
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA) #converting to RGBA, the channel order jetson.utils expects
    img = jetson.utils.cudaFromNumpy(img) #copying the numpy array into CUDA memory
    #img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height) #running detection on the frame and saving the results
    display.RenderOnce(img, width, height) #displaying the output frame with detection overlays
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS())) #showing the network FPS in the window title
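For reference, the commented-out gstCamera lines above correspond to the capture loop the built-in demo uses. A rough sketch of that path with the legacy jetson.utils API, assuming a V4L2 camera at /dev/video0 as in the comment:

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0") #V4L2 camera instead of a video file
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA() #frame is delivered already in CUDA memory, no cv2 conversion needed
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))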
This is the error:
[OpenGL] glDisplay -- X screen 0 resolution: 1280x1024
[OpenGL] failed to create X11 Window.
jetson.utils -- PyDisplay_Dealloc()
Traceback (most recent call last):
File "pool1.py", line 16, in <module>
display = jetson.utils.glDisplay() #initiating a display window
Exception: jetson.utils -- failed to create glDisplay device
PyTensorNet_Dealloc()
Answer 0 (score: 0)
Technically this isn't an answer, but I think this is where the problem lies.
Here is the log from my code that doesn't work:
[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
W = 7 H = 100 C = 1
detectNet -- maximum bounding boxes: 100
detectNet -- loaded 91 class info entries
detectNet -- number of object classes: 91
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution: 1280x1024
[OpenGL] failed to create X11 Window.
jetson.utils -- PyDisplay_Dealloc()
Traceback (most recent call last):
File "pool1.py", line 18, in <module>
display = jetson.utils.glDisplay() #initiating a display window
Exception: jetson.utils -- failed to create glDisplay device
PyTensorNet_Dealloc()
And here is the log from the built-in sample code that works:
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
W = 7 H = 100 C = 1
detectNet -- maximum bounding boxes: 100
detectNet -- loaded 91 class info entries
detectNet -- number of object classes: 91
jetson.utils -- PyCamera_New()
jetson.utils -- PyCamera_Init()
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera /dev/video0
[gstreamer] gstCamera pipeline string:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)1280, height=(int)720, format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert !appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2, camera /dev/video0
jetson.utils -- PyDisplay_New()
jetson.utils -- PyDisplay_Init()
[OpenGL] glDisplay -- X screen 0 resolution: 1280x1024
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
You can clearly see where the problem lies: the working run initializes gstCamera (and with it GStreamer) before creating the display and glDisplay comes up fine, while the failing run never touches GStreamer and glDisplay fails right after the network loads.
My main question is about gstreamer, which even OpenCV 4.1.1 on the Nano supports.
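For what it's worth, if the OpenCV build on the Nano was compiled with GStreamer support (the 4.1.1 build shipped with JetPack normally is), cv2.VideoCapture can also open a GStreamer pipeline directly. A minimal sketch; the pipeline string below is only an illustration modeled on the one printed in the working log, not something taken from either run above:

import cv2

# hypothetical V4L2 -> BGR pipeline; adjust device, resolution and format to the camera
pipeline = ("v4l2src device=/dev/video0 ! "
            "video/x-raw, width=1280, height=720, format=YUY2 ! "
            "videoconvert ! video/x-raw, format=BGR ! appsink")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER) #only works if OpenCV was built with GStreamer
if not cap.isOpened():
    raise RuntimeError("failed to open GStreamer pipeline")

ret, frame = cap.read() #frames arrive as ordinary BGR numpy arrays

Whether that has any bearing on the glDisplay error is another matter; it only shows that the GStreamer route is available from OpenCV itself.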