I am using fast-rcnn and trying to train the system on a new class (label). I followed this guide: https://github.com/EdisonResearch/fast-rcnn/tree/master/help/train
Placed the images
Placed the annotations
Prefixed all the image names
Prepared the selective search output: train.mat
Running train_net.py fails with the following error:
./tools/train_net.py --gpu 0 --solver models/VGG_1024_pascal2007/solver.prototxt --imdb voc_2007_train_top_5000
Called with args:
Namespace(cfg_file=None, gpu_id=0, imdb_name='voc_2007_train_top_5000', max_iters=40000, pretrained_model=None, randomize=False, solver='models/VGG_1024_pascal2007/solver.prototxt')
Using config:
{'DEDUP_BOXES': 0.0625,
 'EPS': 1e-14,
 'EXP_DIR': 'default',
 'PIXEL_MEANS': array([[[ 102.9801,  115.9465,  122.7717]]]),
 'RNG_SEED': 3,
 'ROOT_DIR': '/home/hagay/fast-rcnn',
 'TEST': {'BBOX_REG': True,
          'MAX_SIZE': 1000,
          'NMS': 0.3,
          'SCALES': [600],
          'SVM': False},
 'TRAIN': {'BATCH_SIZE': 128,
           'BBOX_REG': True,
           'BBOX_THRESH': 0.5,
           'BG_THRESH_HI': 0.5,
           'BG_THRESH_LO': 0.1,
           'FG_FRACTION': 0.25,
           'FG_THRESH': 0.5,
           'IMS_PER_BATCH': 2,
           'MAX_SIZE': 1000,
           'SCALES': [600],
           'SNAPSHOT_INFIX': '',
           'SNAPSHOT_ITERS': 10000,
           'USE_FLIPPED': True,
           'USE_PREFETCH': False}}
Loaded dataset `voc_2007_train` for training
Appending horizontally-flipped training examples...
voc_2007_train gt roidb loaded from /home/hagay/fast-rcnn/data/cache/voc_2007_train_gt_roidb.pkl
/usr/local/lib/python2.7/dist-packages/numpy/core/fromnumeric.py:2507: VisibleDeprecationWarning: `rank` is deprecated; use the `ndim` attribute or function instead. To find the rank of a matrix see `numpy.linalg.matrix_rank`.
  VisibleDeprecationWarning)
wrote ss roidb to /home/hagay/fast-rcnn/data/cache/voc_2007_train_selective_search_IJCV_top_5000_roidb.pkl
Traceback (most recent call last):
  File "./tools/train_net.py", line 80, in <module>
    roidb = get_training_roidb(imdb)
  File "/home/hagay/fast-rcnn/tools/../lib/fast_rcnn/train.py", line 107, in get_training_roidb
    imdb.append_flipped_images()
  File "/home/hagay/fast-rcnn/tools/../lib/datasets/imdb.py", line 104, in append_flipped_images
    assert (boxes[:, 2] >= boxes[:, 0]).all()
AssertionError
My question is: do I need to include the __background__ class?
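For anyone debugging the same assertion, here is a minimal sketch, not part of the original question, that scans the cached proposal roidb for inverted boxes; the pickle path is copied from the log above and will differ on other machines:

# Hypothetical debugging snippet: look for boxes whose x-min already exceeds
# their x-max in the cached roidb. Boxes whose x-max reaches or passes the
# image width will also break the flip step, so comparing the largest x-max
# against the corresponding image width is worth doing as well.
import cPickle

path = '/home/hagay/fast-rcnn/data/cache/voc_2007_train_selective_search_IJCV_top_5000_roidb.pkl'
with open(path, 'rb') as f:
    roidb = cPickle.load(f)

for i, entry in enumerate(roidb):
    boxes = entry['boxes']
    bad = boxes[:, 2] < boxes[:, 0]
    if bad.any():
        print i, boxes.dtype, boxes[bad]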
Answer 0 (score 0):

The assertion fails because boxes[:, 2] < boxes[:, 0] for some box. boxes[:, 2] is the x-max of the bounding box and boxes[:, 0] is the x-min, so the problem is with the region proposals. I have run into this as well, and I found the cause was overflow: as far as I remember the dtype of boxes is np.uint8 (this needs checking), so if the image is too large you get this error.
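A toy illustration of the wrap-around this answer is describing (the uint16 dtype here is an assumption for the sake of the example; any unsigned dtype behaves the same way once a coordinate goes past the image width):

import numpy as np

width = 500                                              # image width
boxes = np.array([[10, 20, 501, 200]], dtype=np.uint16)  # x-max lies beyond the width
oldx1 = boxes[:, 0].copy()
oldx2 = boxes[:, 2].copy()
boxes[:, 0] = width - oldx2   # 500 - 501 underflows in uint16 -> 65535
boxes[:, 2] = width - oldx1   # 490
print boxes                   # x-min (65535) > x-max (490): the assert fires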
Answer 1 (score 0):

I'm late to this, but here is the yakkity-hack I used when I edited the code:
for b in range(len(boxes)):
    if boxes[b][2] < boxes[b][0]:
        boxes[b][0] = 0
assert (boxes[:, 2] >= boxes[:, 0]).all()
There are smarter ways to do this, as every grad student seems keen to point out, but this works fine.
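If the Python loop bothers you, here is a vectorized version of the same clamp, offered as a sketch rather than the answerer's code:

import numpy as np

# Toy data standing in for the flipped boxes inside append_flipped_images.
boxes = np.array([[65535, 5, 120, 60],      # inverted after the flip: x-min > x-max
                  [10, 5, 90, 60]], dtype=np.uint16)

bad = boxes[:, 2] < boxes[:, 0]             # rows where x-max < x-min
boxes[bad, 0] = 0                           # clamp x-min of those rows to 0
assert (boxes[:, 2] >= boxes[:, 0]).all()   # the original assertion now holds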
Answer 2 (score 0):
See the solution described in Part 4, Issue 4 of the blog post below. The fix is to flip the x1 and x2 coordinate values.
https://huangying-zhan.github.io/2016/09/22/detection-faster-rcnn.html
Copied from the link:

The problem: boxes[:, 0] > boxes[:, 2]

The solution: add the following code block in imdb.py:
def append_flipped_images(self):
    num_images = self.num_images
    widths = self._get_widths()
    for i in xrange(num_images):
        boxes = self.roidb[i]['boxes'].copy()
        oldx1 = boxes[:, 0].copy()
        oldx2 = boxes[:, 2].copy()
        boxes[:, 0] = widths[i] - oldx2
        boxes[:, 2] = widths[i] - oldx1
        for b in range(len(boxes)):
            if boxes[b][2] < boxes[b][0]:
                boxes[b][0] = 0
        assert (boxes[:, 2] >= boxes[:, 0]).all()
        # (the rest of the original append_flipped_images continues unchanged)
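To confirm a patch like this actually takes effect, one hypothetical smoke test is to run just the flip step from the fast-rcnn root, using the same imdb name as the failing command; this assumes lib/ can be put on the path the same way tools/train_net.py arranges it:

# Hypothetical check, run from the fast-rcnn root directory.
import sys
sys.path.insert(0, 'lib')                    # adjust to your checkout layout
from datasets.factory import get_imdb

imdb = get_imdb('voc_2007_train_top_5000')   # same imdb name as in the command above
imdb.append_flipped_images()                 # raises AssertionError if boxes are still inverted
print 'roidb entries after flipping:', len(imdb.roidb)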