Theano: how to efficiently undo/reverse max-pooling

Asked: 2015-05-12 17:17:46

Tags: python optimization neural-network theano

I'm using Theano 0.7 to create a convolutional neural net which uses max-pooling (i.e. shrinking a matrix down by keeping only the local maxima).

In order to "undo" or "reverse" the max-pooling step, one method is to store the locations of the maxima as auxiliary data, then simply recreate the un-pooled data by making a big array of zeros and using those auxiliary locations to place the maxima in their appropriate positions.
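To make the bookkeeping concrete, here is a tiny NumPy-only sketch of that idea for a single 1-D signal (toy values of my own; not the actual Theano code, which follows below):

```python
import numpy as np

# toy 1-D signal, pooled in non-overlapping bins of width 5
x = np.array([3., 1., 4., 1., 5., 9., 2., 6., 5., 3.])
poolsize = 5
bins = x.reshape(-1, poolsize)

pooled = bins.max(axis=1)                 # the kept maxima
# absolute position of each maximum within the original signal
auxpos = bins.argmax(axis=1) + np.arange(bins.shape[0]) * poolsize

# "undo" the pooling: zeros everywhere, maxima restored in place
unpooled = np.zeros_like(x)
unpooled[auxpos] = pooled
```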

Here's how I'm currently doing it:

import numpy as np
import theano
import theano.tensor as T

minibatchsize = 2
numfilters = 3
numsamples = 4
upsampfactor = 5

# HERE is the function that I hope could be improved
def upsamplecode(encoded, auxpos):
    shp = encoded.shape
    upsampled = T.zeros((shp[0], shp[1], shp[2] * upsampfactor))
    for whichitem in range(minibatchsize):
        for whichfilt in range(numfilters):
            upsampled = T.set_subtensor(
                upsampled[whichitem, whichfilt, auxpos[whichitem, whichfilt, :]],
                encoded[whichitem, whichfilt, :])
    return upsampled


totalitems = minibatchsize * numfilters * numsamples

code = theano.shared(np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples)))

auxpos = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples)) % upsampfactor  # arbitrary positions within a bin
auxpos += (np.arange(4) * 5).reshape((1,1,-1)) # shifted to the actual temporal bin location
auxpos = theano.shared(auxpos.astype(np.int))

print "code:"
print code.get_value()
print "locations:"
print auxpos.get_value()
get_upsampled = theano.function([], upsamplecode(code, auxpos))
print "the un-pooled data:"
print get_upsampled()
  

(Incidentally, in this case I have a 3D tensor, and it's only the third axis that gets max-pooled. People who work with image data might be used to seeing two dimensions get max-pooled.)

The output is:

code:
[[[ 0  1  2  3]
  [ 4  5  6  7]
  [ 8  9 10 11]]

 [[12 13 14 15]
  [16 17 18 19]
  [20 21 22 23]]]
locations:
[[[ 0  6 12 18]
  [ 4  5 11 17]
  [ 3  9 10 16]]

 [[ 2  8 14 15]
  [ 1  7 13 19]
  [ 0  6 12 18]]]
the un-pooled data:
[[[  0.   0.   0.   0.   0.   0.   1.   0.   0.   0.   0.   0.   2.   0.
     0.   0.   0.   0.   3.   0.]
  [  0.   0.   0.   0.   4.   5.   0.   0.   0.   0.   0.   6.   0.   0.
     0.   0.   0.   7.   0.   0.]
  [  0.   0.   0.   8.   0.   0.   0.   0.   0.   9.  10.   0.   0.   0.
     0.   0.  11.   0.   0.   0.]]

 [[  0.   0.  12.   0.   0.   0.   0.   0.  13.   0.   0.   0.   0.   0.
    14.  15.   0.   0.   0.   0.]
  [  0.  16.   0.   0.   0.   0.   0.  17.   0.   0.   0.   0.   0.  18.
     0.   0.   0.   0.   0.  19.]
  [ 20.   0.   0.   0.   0.   0.  21.   0.   0.   0.   0.   0.  22.   0.
     0.   0.   0.   0.  23.   0.]]]

This method works, but it's a bottleneck, taking up most of my computer's time (I think the set_subtensor calls might imply CPU <-> GPU data copying). So: can this be implemented more efficiently?

I suspect there's a way to express this as a single set_subtensor() call, which might be faster, but I don't see how to get the tensor indexing to play nicely.

UPDATE: I thought of a way of doing it in a single call, by working on flattened versions of the tensors:

def upsamplecode2(encoded, auxpos):
    shp = encoded.shape
    upsampled = T.zeros((shp[0], shp[1], shp[2] * upsampfactor))

    add_to_flattened_indices = theano.shared(
        np.array([[[(y + z * numfilters) * numsamples * upsampfactor
                    for x in range(numsamples)]
                   for y in range(numfilters)]
                  for z in range(minibatchsize)],
                 dtype=theano.config.floatX).flatten(),
        name="add_to_flattened_indices")

    upsampled = T.set_subtensor(
        upsampled.flatten()[T.cast(auxpos.flatten() + add_to_flattened_indices,
                                   'int32')],
        encoded.flatten()).reshape(upsampled.shape)

    return upsampled


get_upsampled2 = theano.function([], upsamplecode2(code, auxpos))
print "the un-pooled data v2:"
ups2 = get_upsampled2()
print ups2
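The flattened-index arithmetic in upsamplecode2 can be sanity-checked in plain NumPy, without Theano (same constants as above; this is just my own verification sketch):

```python
import numpy as np

minibatchsize, numfilters, numsamples, upsampfactor = 2, 3, 4, 5
totalitems = minibatchsize * numfilters * numsamples

code = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples))
auxpos = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples)) % upsampfactor
auxpos = auxpos + (np.arange(numsamples) * upsampfactor).reshape((1, 1, -1))

# offset of each (item, filter) row inside the flattened upsampled array
offsets = (np.arange(minibatchsize * numfilters)
           * numsamples * upsampfactor).reshape((minibatchsize, numfilters, 1))

flat = np.zeros(minibatchsize * numfilters * numsamples * upsampfactor)
flat[(auxpos + offsets).flatten()] = code.flatten()
upsampled = flat.reshape((minibatchsize, numfilters, numsamples * upsampfactor))
```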

However, this still isn't efficient, because when I run it (appended to the end of the above script) I find out that the CUDA libraries can't currently do the integer indexing efficiently:

ERROR (theano.gof.opt): Optimization failure due to: local_gpu_advanced_incsubtensor1
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/opt.py", line 1493, in process_node
    replacements = lopt.transform(node)
  File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/opt.py", line 952, in local_gpu_advanced_incsubtensor1
    gpu_y = gpu_from_host(y)
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 507, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/basic_ops.py", line 133, in make_node
    dtype=x.dtype)()])
  File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/type.py", line 69, in __init__
    (self.__class__.__name__, dtype, name))
TypeError: CudaNdarrayType only supports dtype float32 for now. Tried using dtype int64 for variable None

1 answer:

Answer 0 (score: 0)

I don't know if this is faster, but it may be a little more concise. See whether it is useful for your case.

import numpy as np
import theano
import theano.tensor as T

minibatchsize = 2
numfilters = 3
numsamples = 4
upsampfactor = 5

totalitems = minibatchsize * numfilters * numsamples

code = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples))

auxpos = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples)) % upsampfactor 
auxpos += (np.arange(4) * 5).reshape((1,1,-1))

# first in numpy
shp = code.shape
upsampled_np = np.zeros((shp[0], shp[1], shp[2] * upsampfactor))
upsampled_np[np.arange(shp[0]).reshape(-1, 1, 1), np.arange(shp[1]).reshape(1, -1, 1), auxpos] = code

print "numpy output:"
print upsampled_np

# now the same idea in theano
encoded = T.tensor3()
positions = T.tensor3(dtype='int64')
shp = encoded.shape
upsampled = T.zeros((shp[0], shp[1], shp[2] * upsampfactor))
upsampled = T.set_subtensor(
    upsampled[T.arange(shp[0]).reshape((-1, 1, 1)),
              T.arange(shp[1]).reshape((1, -1, 1)),
              positions],
    encoded)

print "theano output:"
print upsampled.eval({encoded: code, positions: auxpos})
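As a quick sanity check (my own addition, in plain NumPy), the per-row loop from the question and the broadcasted advanced indexing used in this answer place every maximum identically:

```python
import numpy as np

m, f, s, up = 2, 3, 4, 5  # minibatchsize, numfilters, numsamples, upsampfactor
code = np.arange(m * f * s).reshape((m, f, s))
pos = np.arange(m * f * s).reshape((m, f, s)) % up
pos = pos + (np.arange(s) * up).reshape((1, 1, -1))

# loop version, mirroring the question's upsamplecode
loop_out = np.zeros((m, f, s * up))
for i in range(m):
    for j in range(f):
        loop_out[i, j, pos[i, j, :]] = code[i, j, :]

# broadcast version, mirroring this answer: the two arange index arrays
# broadcast against the (m, f, s) position array
bcast_out = np.zeros((m, f, s * up))
bcast_out[np.arange(m).reshape(-1, 1, 1),
          np.arange(f).reshape(1, -1, 1),
          pos] = code

assert np.array_equal(loop_out, bcast_out)
```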