I'm new to Gluon, so I decided to run the examples to get familiar with the coding style (I used Keras a few years ago, and this hybrid style is a bit confusing to me).
My problem is that I can run the example, but after successfully executing every cell of the example (it's a Jupyter notebook), I uploaded an external image and the network doesn't seem to detect any objects. I pasted the same cell into 02. Predict with pre-trained Faster RCNN models, and there the pre-trained network had no trouble detecting every person in the image, so it looks to me like the model in the training example isn't being trained properly.
Has this happened to anyone else?
Am I missing something?
Thanks in advance!
(By the way, I tried uncommenting line 32 of the training loop (the one with autograd.backward) and changing the break limit of the same loop, with no luck.)
Links
The problem happens both when executing the original example as-is and when adding the cell below.
02)https://gluon-cv.mxnet.io/build/examples_detection/demo_faster_rcnn.html
06)https://gluon-cv.mxnet.io/build/examples_detection/train_faster_rcnn_voc.html
My test image
Cell to detect objects in the image
import matplotlib.pyplot as plt
from gluoncv import data, utils
from gluoncv.data.transforms import presets

short, max_size = 600, 800
# training-time transform (defined here but not actually used by this cell)
RCNN_transform = presets.rcnn.FasterRCNNDefaultTrainTransform(short, max_size)

myImg = 'unnamed.jpg'
# load_test applies the default Faster R-CNN inference transform
x, img = data.transforms.presets.rcnn.load_test(myImg)
# `net` is the Faster R-CNN model trained earlier in the notebook
box_ids, scores, bboxes = net(x)
ax = utils.viz.plot_bbox(img, bboxes[0], scores[0], box_ids[0], class_names=net.classes)
plt.show()
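One thing worth double-checking here (an assumption on my side, since the notebook trains on mx.gpu(0)): load_test returns the batch on the CPU, so if the network's parameters live on the GPU, the input has to be moved to the same context before the forward pass, e.g.:

import mxnet as mx

ctx = mx.gpu(0)           # same context the network was trained on
x = x.as_in_context(ctx)  # move the test image to the GPU
box_ids, scores, bboxes = net(x)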
System information (if applicable)
I'm using my personal computer and also Google Colab with the same results, but just in case...
OS: Ubuntu 18.04
Hardware
$ hwinfo --short
cpu:
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2700 MHz
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2700 MHz
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2700 MHz
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2700 MHz
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2700 MHz
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2700 MHz
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2700 MHz
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2700 MHz
graphics card:
nVidia GM107M [GeForce GTX 960M]
Intel HD Graphics 530
NVIDIA driver
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82 Driver Version: 440.82 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 960M Off | 00000000:02:00.0 Off | N/A |
| N/A 41C P5 N/A / N/A | 665MiB / 4046MiB | 23% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 2560 G /usr/lib/xorg/Xorg 308MiB |
| 0 2921 G /usr/bin/gnome-shell 132MiB |
| 0 3741 G ...quest-channel-token=7390050445218241480 31MiB |
| 0 5455 G ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files 176MiB |
+-----------------------------------------------------------------------------+
CUDA
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
MXNet and Gluon installation
$ pip install mxnet-cu102mkl
$ pip install --upgrade mxnet-cu102mkl gluoncv
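For reproducibility, the exact versions these commands installed can be printed from Python (nothing assumed here beyond the two packages above):

import mxnet as mx
import gluoncv as gcv

print(mx.__version__)   # mxnet-cu102mkl build
print(gcv.__version__)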
Edit: I've been tweaking the training loop, and this is what I have so far. The first lines after the third for loop just put the data on the GPU.
#net.hybridize()
epochs = 50
for epoch in range(epochs):
    print("epoch: ", epoch, "---------------------------------")
    batch_size = 10
    for ib, batch in enumerate(train_loader):
        #print(ib)
        if ib > 500:
            break
        for dataa, label, rpn_cls_targets, rpn_box_targets, rpn_box_masks in zip(*batch):
            dataa = dataa.as_in_context(mx.gpu(0))
            label = label.as_in_context(mx.gpu(0)).expand_dims(0)
            rpn_cls_targets = rpn_cls_targets.as_in_context(mx.gpu(0))
            rpn_box_targets = rpn_box_targets.as_in_context(mx.gpu(0))
            rpn_box_masks = rpn_box_masks.as_in_context(mx.gpu(0))
            gt_label = label[:, :, 4:5]
            gt_box = label[:, :, :4]
            with autograd.record():
                # network forward
                (cls_preds, box_preds, roi, samples, matches, rpn_score, rpn_box,
                 anchors, cls_targets, box_targets, box_masks, _) = net(
                    dataa.expand_dims(0), gt_box, gt_label)
                # losses of rpn
                rpn_score = rpn_score.squeeze(axis=-1)
                num_rpn_pos = (rpn_cls_targets >= 0).sum()
                rpn_loss1 = rpn_cls_loss(rpn_score, rpn_cls_targets,
                                         rpn_cls_targets >= 0) * rpn_cls_targets.size / num_rpn_pos
                rpn_loss2 = rpn_box_loss(rpn_box, rpn_box_targets,
                                         rpn_box_masks) * rpn_box.size / num_rpn_pos
                # losses of rcnn
                num_rcnn_pos = (cls_targets >= 0).sum()
                rcnn_loss1 = rcnn_cls_loss(cls_preds, cls_targets,
                                           cls_targets >= 0) * cls_targets.size / cls_targets.shape[0] / num_rcnn_pos
                rcnn_loss2 = rcnn_box_loss(box_preds, box_targets,
                                           box_masks) * box_preds.size / box_preds.shape[0] / num_rcnn_pos
            # some standard gluon training steps:
            autograd.backward([rpn_loss1, rpn_loss2, rcnn_loss1, rcnn_loss2])
            trainer.step(batch_size)
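Since the symptom is "no detections at all", a quick sanity check is to log the four losses while training: if they stay flat or blow up to NaN, the optimizer settings below are the first suspect. A minimal sketch that continues the loop above (the print interval of 50 batches is my own choice):

# inside the innermost loop, right after trainer.step(batch_size):
if ib % 50 == 0:
    print('epoch {} batch {}: rpn_cls={:.4f} rpn_box={:.4f} '
          'rcnn_cls={:.4f} rcnn_box={:.4f}'.format(
              epoch, ib,
              rpn_loss1.mean().asscalar(),
              rpn_loss2.mean().asscalar(),
              rcnn_loss1.mean().asscalar(),
              rcnn_loss2.mean().asscalar()))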
I have my doubts about the trainer; I took this one from other examples, but I'm not sure it's right for this case.
trainer = gluon.Trainer(net.collect_params(), 'sgd',{'learning_rate': 0.01, 'wd': 0.05, 'momentum': 0.9})
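For comparison, a sketch with more conservative settings. lr=0.001, wd=5e-4, momentum=0.9 are what I believe the official training script behind link 06 defaults to (worth verifying against that script), and the wd=0.05 above is roughly 100x larger, which alone could keep the detector from learning:

from mxnet import gluon

# assumed defaults of GluonCV's Faster R-CNN training script; please verify
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.001, 'wd': 5e-4, 'momentum': 0.9})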
Edit
Here is a copy of the .ipynb file I've been using (the Google Colab version): https://drive.google.com/file/d/1WevimDyTP1lvq_A0OBRMgC-PH8pK4iBv/view?usp=sharing