Training code for a CNN model written in Chainer

Posted: 2019-02-12 23:51:26

Tags: python conv-neural-network training-data chainer

I am writing training code for TwoStream-IQA, which is a two-stream convolutional neural network. The model predicts the quality score of a patch evaluated through the two streams of the network. For the training below, I used the test dataset provided in the GitHub link above.

The training code is as follows:

import os
import time

import numpy as np
from PIL import Image
from chainer import iterators, optimizers, training
from chainer.training import extensions

# Note: `model`, `xp` (numpy or cupy) and `extract_patches` are presumably
# defined elsewhere in the original script; they are not shown in the post.

## prepare training data
test_label_path = 'data_list/test.txt'
test_img_path = 'data/live/'
test_Graimg_path = 'data/live_grad/'
save_model_path = '/models/nr_sana_2stream.model'

patches_per_img = 256
patchSize = 32

print('-------------Load data-------------')
final_train_set = []
with open(test_label_path, 'rt') as f:
    for l in f:
        line, la = l.strip().split()  # for debug

        tic = time.time()
        full_path = os.path.join(test_img_path, line)
        Grafull_path = os.path.join(test_Graimg_path, line)

        f = Image.open(full_path)
        Graf = Image.open(Grafull_path)
        img = np.asarray(f, dtype=np.float32)
        Gra = np.asarray(Graf, dtype=np.float32)
        img = img.transpose(2, 0, 1)
        Gra = Gra.transpose(2, 0, 1)

        img1 = np.zeros((1, 3, Gra.shape[1], Gra.shape[2]))
        img1[0, :, :, :] = img
        Gra1 = np.zeros((1, 3, Gra.shape[1], Gra.shape[2]))
        Gra1[0, :, :, :] = Gra

        patches = extract_patches(img, (3, patchSize, patchSize), patchSize)
        Grapatches = extract_patches(Gra, (3, patchSize, patchSize), patchSize)

        X = patches.reshape((-1, 3, patchSize, patchSize))
        GraX = Grapatches.reshape((-1, 3, patchSize, patchSize))

        temp_slice1 = [X[int(float(index))] for index in range(256)]
        temp_slice2 = [GraX[int(float(index))] for index in range(256)]
        ##############################################  
        for j in range(len(temp_slice1)):
            temp_slice1[j] = xp.array(temp_slice1[j].astype(np.float32))
            temp_slice2[j] = xp.array(temp_slice2[j].astype(np.float32))

            final_train_set.append((temp_slice1[j], temp_slice2[j], int(la)))

    final_train_set = np.asarray(final_train_set)       
        ##############################################  

#
print('--------------Done!----------------')

print('--------------Iterator!----------------')    
train_iter = iterators.SerialIterator(final_train_set, batch_size=4)
optimizer = optimizers.Adam()
optimizer.use_cleargrads()
optimizer.setup(model)

updater = training.StandardUpdater(train_iter, optimizer, device=0)

print('--------------Trainer!----------------') 
trainer = training.Trainer(updater, (50, 'epoch'), out='result')

trainer.extend(extensions.LogReport())

trainer.extend(extensions.PrintReport(['epoch', 'iteration', 'main/loss', 'elapsed_time']))

print('--------------Running trainer!----------------') 
trainer.run()

However, the code produces the following error:

Exception in main training loop: Unsupported dtype object
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/chainer/training/trainer.py", line 307, in run
    update()
  File "/usr/local/lib/python2.7/dist-packages/chainer/training/updaters/standard_updater.py", line 165, in update
    self.update_core()
  File "/usr/local/lib/python2.7/dist-packages/chainer/training/updaters/standard_updater.py", line 171, in update_core
    in_arrays = self.converter(batch, self.device)
  File "/usr/local/lib/python2.7/dist-packages/chainer/dataset/convert.py", line 149, in concat_examples
    return to_device(device, _concat_arrays(batch, padding))
  File "/usr/local/lib/python2.7/dist-packages/chainer/dataset/convert.py", line 37, in to_device
    return cuda.to_gpu(x, device)
  File "/usr/local/lib/python2.7/dist-packages/chainer/backends/cuda.py", line 288, in to_gpu
    return _array_to_gpu(array, device_, stream)
  File "/usr/local/lib/python2.7/dist-packages/chainer/backends/cuda.py", line 336, in _array_to_gpu
    return cupy.asarray(array)
  File "/usr/local/lib/python2.7/dist-packages/cupy/creation/from_data.py", line 60, in asarray
    return core.array(a, dtype, False)
  File "cupy/core/core.pyx", line 2174, in cupy.core.core.array
  File "cupy/core/core.pyx", line 2207, in cupy.core.core.array
Will finalize trainer extensions and updater before reraising the exception.
Traceback (most recent call last):
  File "train.py", line 126, in <module>
    trainer.run()
  File "/usr/local/lib/python2.7/dist-packages/chainer/training/trainer.py", line 321, in run
    six.reraise(*sys.exc_info())
  File "/usr/local/lib/python2.7/dist-packages/chainer/training/trainer.py", line 307, in run
    update()
  File "/usr/local/lib/python2.7/dist-packages/chainer/training/updaters/standard_updater.py", line 165, in update
    self.update_core()
  File "/usr/local/lib/python2.7/dist-packages/chainer/training/updaters/standard_updater.py", line 171, in update_core
    in_arrays = self.converter(batch, self.device)
  File "/usr/local/lib/python2.7/dist-packages/chainer/dataset/convert.py", line 149, in concat_examples
    return to_device(device, _concat_arrays(batch, padding))
  File "/usr/local/lib/python2.7/dist-packages/chainer/dataset/convert.py", line 37, in to_device
    return cuda.to_gpu(x, device)
  File "/usr/local/lib/python2.7/dist-packages/chainer/backends/cuda.py", line 288, in to_gpu
    return _array_to_gpu(array, device_, stream)
  File "/usr/local/lib/python2.7/dist-packages/chainer/backends/cuda.py", line 336, in _array_to_gpu
    return cupy.asarray(array)
  File "/usr/local/lib/python2.7/dist-packages/cupy/creation/from_data.py", line 60, in asarray
    return core.array(a, dtype, False)
  File "cupy/core/core.pyx", line 2174, in cupy.core.core.array
  File "cupy/core/core.pyx", line 2207, in cupy.core.core.array
ValueError: Unsupported dtype object

I used the dataset from the GitHub link provided above. I am new to Chainer, please help!

2 Answers:

Answer 0 (score: 2):

final_train_set.append((temp_slice1[j], temp_slice2[j], int(la)))

This makes final_train_set a list of tuples of mixed types (numpy.ndarray and int). Therefore the result of np.asarray(final_train_set) has dtype=numpy.object, which Chainer does not support.
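You can reproduce the problem in isolation (a minimal standalone check with dummy arrays, not the original data):

import numpy as np

a = np.zeros((3, 32, 32), dtype=np.float32)
mixed = [(a, a, 1), (a, a, 0)]     # tuples mixing ndarrays and ints
print(np.asarray(mixed).dtype)     # object on older NumPy; recent versions raise instead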

To pass it to SerialIterator, I think the correct way is

# list of tuples of data and labels
final_train_set.append((
    numpy.asarray((temp_slice1[j], temp_slice2[j])).astype(numpy.float32),
    int(la)
))
and do nothing after the loop (that is, drop the final_train_set = np.asarray(final_train_set) line).
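
For reference, here is a minimal self-contained sketch (dummy data, not the original patches) showing that SerialIterator and Chainer's default converter accept a plain Python list of example tuples directly, so no np.asarray over the whole dataset is needed:

import numpy as np
from chainer import iterators
from chainer.dataset import concat_examples

# Dummy dataset: a plain Python list of (image patch, gradient patch, label) tuples.
dataset = [
    (np.zeros((3, 32, 32), dtype=np.float32),
     np.zeros((3, 32, 32), dtype=np.float32),
     np.int32(i % 2))
    for i in range(8)
]

it = iterators.SerialIterator(dataset, batch_size=4, repeat=False)
batch = it.next()
x, g, t = concat_examples(batch)   # the same converter StandardUpdater uses by default
print(x.dtype, x.shape, t.shape)   # float32 (4, 3, 32, 32) (4,)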

Answer 1 (score: 0):

The error says

ValueError: Unsupported dtype object

Chainer supports numpy.float32 or cupy.float32 arrays. How about converting the dtype of your data array as follows?

final_train_set = np.asarray(final_train_set).astype(np.float32)
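
As a quick sanity check before calling trainer.run() (a hypothetical snippet reusing the final_train_set name from the question), you can inspect one example and confirm that every array reports float32 rather than object:

sample = final_train_set[0]                      # one (image patch, gradient patch, label) example
for item in sample:
    print(getattr(item, 'dtype', type(item)))    # expect float32, float32, and an int label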