I am trying to run the tutorial code from NVIDIA's repository here. This is what happens when I run the imagenet console program on my Jetson TX2:
nvidia@tegra-ubuntu:~/jetson-inference/build/aarch64/bin$ ./imagenet-console orange_0.pjg output_0.jpg
imagenet-console
args (3): 0 [./imagenet-console] 1 [orange_0.pjg] 2 [output_0.jpg]
imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2
[TRT] TensorRT version 4.0.2
[TRT] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[TRT] cache file not found, profiling network model
[TRT] platform has FP16 support.
[TRT] loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
Weights for layer conv1/7x7_s2 doesn't exist
[TRT] CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer conv1/7x7_s2 doesn't exist
[TRT] CaffeParser: ERROR: Attempting to access NULL weights
[TRT] Parameter check failed at: ../builder/Network.cpp::addConvolution::40, condition: kernelWeights.values != NULL
error parsing layer type Convolution index 1
[TRT] failed to parse caffe network
failed to load networks/bvlc_googlenet.caffemodel
failed to load networks/bvlc_googlenet.caffemodel
imageNet -- failed to initialize.
imagenet-console: failed to initialize imageNet
I do not have Caffe installed on the Jetson board, since the tutorial explicitly states it is not needed. I am also not sure whether the NULL weights error would go away if TensorRT could cache the network correctly. Any ideas?
Answer (score: 0)
A corporate firewall was blocking the model from downloading correctly. Downloading the model manually and placing it in the networks folder resolved the problem.
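For reference, a minimal sketch of the manual workaround. The networks/ path below is the one the log prints (relative to build/aarch64/bin), and the BVLC model zoo URL is an assumption on my part; substitute whatever mirror your network allows:

# A blocked download often leaves a small HTML error page in place of the
# ~50 MB binary model, which is what triggers the NULL-weights parse error.
cd ~/jetson-inference/build/aarch64/bin/networks
ls -lh bvlc_googlenet.caffemodel
head -c 200 bvlc_googlenet.caffemodel   # binary data is good; "<html>" is not

# Fetch the weights manually (assumed BVLC model zoo URL), then re-run
# imagenet-console so TensorRT can profile the network again.
wget -O bvlc_googlenet.caffemodel http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel

Note that the .tensorcache is only written after the caffemodel has been parsed successfully, so caching could not have worked around a corrupt or missing model file in the first place.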