What can cause ConvolutionND to raise a CuDNNError?

Date: 2018-07-19 05:41:53

Tags: chainer

I am using a three-dimensional convolution link (ConvolutionND) in a Chain.

The forward computation runs fine (I checked the shapes of the intermediate results to make sure I understood the meaning of the convolution_nd parameters correctly), but during the backward computation a CuDNNError is raised with the message CUDNN_STATUS_NOT_SUPPORTED.

The cover_all parameter of ConvolutionND is left at its default value of False, so from the documentation I cannot see what could be causing the error.

This is how I define one of the convolution layers:

self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(self.GPU_1_ID)
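
For reference, the forward shape arithmetic of such a layer can be checked on the CPU, where cuDNN is not involved (a minimal sketch with a small illustrative input rather than the real data shape):

import numpy as np
import chainer

# ConvolutionND(ndim=3, in_channels=1, out_channels=4, ksize=(3, 3, 3))
conv = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3))
x = np.zeros((1, 1, 8, 8, 8), dtype=np.float32)  # (batch, channels, d1, d2, d3)
y = conv(x)
print(y.shape)  # (1, 4, 6, 6, 6): stride 1 and pad 0 shrink each spatial dim by 2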

The call stack is (note that the backward pass of convolution_nd is computed via deconvolution_nd, which ends up in cuDNN's convolutionBackwardData):

File "chainer/function_node.py", line 548, in backward_accumulate
    gxs = self.backward(target_input_indexes, grad_outputs)
File "chainer/functions/connection/convolution_nd.py", line 118, in backward
    gy, W, stride=self.stride, pad=self.pad, outsize=x_shape)
File "chainer/functions/connection/deconvolution_nd.py", line 310, in deconvolution_nd
    y, = func.apply(args)
File "chainer/function_node.py", line 258, in apply
    outputs = self.forward(in_data)
File "chainer/functions/connection/deconvolution_nd.py", line 128, in forward
    return self._forward_cudnn(x, W, b)
File "chainer/functions/connection/deconvolution_nd.py", line 105, in _forward_cudnn
    tensor_core=tensor_core)
File "cupy/cudnn.pyx", line 881, in cupy.cudnn.convolution_backward_data
File "cupy/cuda/cudnn.pyx", line 975, in cupy.cuda.cudnn.convolutionBackwardData_v3
File "cupy/cuda/cudnn.pyx", line 461, in cupy.cuda.cudnn.check_status
cupy.cuda.cudnn.CuDNNError: CUDNN_STATUS_NOT_SUPPORTED

Is there anything special I need to watch out for when using ConvolutionND?

Here, for example, is code that fails:

import chainer
from chainer import functions as F
from chainer import links as L
from chainer.backends import cuda

import numpy as np
import cupy as cp

chainer.global_config.cudnn_deterministic = False

NB_MASKS = 60
NB_FCN = 3
NB_CLASS = 17

class MFEChain(chainer.Chain):
    """docstring for Wavelphasenet."""
    def __init__(self,
                 FCN_Dim,
                 gpu_ids=None):
        super(MFEChain, self).__init__()

        self.GPU_0_ID, self.GPU_1_ID = (0, 1) if gpu_ids is None else gpu_ids
        with self.init_scope():
            self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(
                self.GPU_1_ID
            )

    def __call__(self, inputs):
        ### Pad input ###
        processed_sequences = []
        for convolved in inputs:
            ## Transform to sequences
            copy = convolved if self.GPU_0_ID == self.GPU_1_ID else F.copy(convolved, self.GPU_1_ID)
            processed_sequences.append(copy)

        reprocessed_sequences = []
        with cuda.get_device(self.GPU_1_ID):
            for convolved in processed_sequences:
                # Add singleton batch and channel axes:
                # (d1, d2, d3) -> (1, 1, d1, d2, d3), as ConvolutionND expects
                convolved = F.expand_dims(convolved, 0)
                convolved = F.expand_dims(convolved, 0)
                convolved = self.conv1(convolved)

                reprocessed_sequences.append(convolved)

            states = F.vstack(reprocessed_sequences)

            logits = states

            ret_logits = logits if self.GPU_0_ID == self.GPU_1_ID else F.copy(logits, self.GPU_0_ID)
        return ret_logits

def mfe_test():
    mfe = MFEChain(150)
    inputs = list(
        chainer.Variable(
            cp.random.randn(
                NB_MASKS,
                11,
                in_len,
                dtype=cp.float32
            )
        ) for in_len in [53248]
    )
    val = mfe(inputs)
    grad = cp.ones(val.shape, dtype=cp.float32)
    val.grad = grad
    val.backward()
    for i in inputs:
        print(i.grad)

if __name__ == "__main__":
    mfe_test()

1 Answer:

Answer (score: 1):

cupy.cuda.cudnn.convolutionBackwardData_v3 is not compatible with certain combinations of parameters, as described in an issue in the official GitHub repository.

Unfortunately, the issue only deals with deconvolution_2d.py (not deconvolution_nd.py), so my guess is that the logic that decides whether cuDNN can be used fails in your case.
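
One quick way to test this hypothesis is to disable cuDNN entirely and rerun the backward pass; this is a minimal sketch (not part of the original answer) using Chainer's standard use_cudnn configuration switch, with mfe and inputs as built in the question's mfe_test():

import chainer
import cupy as cp

# With use_cudnn='never', Chainer falls back to plain CuPy kernels. If the
# backward pass then succeeds, the failure is confined to the cuDNN path
# (cupy.cudnn.convolution_backward_data) rather than to the model itself.
with chainer.using_config('use_cudnn', 'never'):
    val = mfe(inputs)
    val.grad = cp.ones(val.shape, dtype=cp.float32)
    val.backward()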

You can check the parameters involved as follows:

  1. Check whether a dilation parameter (!= 1) or a groups parameter (!= 1) is passed to the convolution.
  2. Print chainer.config.cudnn_deterministic, configuration.config.autotune, and configuration.config.use_cudnn_tensor_core (see the sketch after this list).
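
For the second check, the relevant flags can be printed directly (a minimal sketch; these are standard entries of Chainer's configuration object):

import chainer
from chainer import configuration

# These settings influence which cuDNN algorithms Chainer requests.
print(chainer.config.cudnn_deterministic)
print(configuration.config.autotune)
print(configuration.config.use_cudnn_tensor_core)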

You may get further support by raising an issue in the official GitHub repository.

The code you have shown is fairly complex.

To narrow down the problem, the following simplified code would help.

from chainer import Variable, Chain
from chainer import links as L
from chainer import functions as F

import numpy as np

batch_size = 1
in_channel = 1
out_channel = 1

class MyLink(Chain):
    def __init__(self):
        super(MyLink, self).__init__()
        with self.init_scope():
            self.conv = L.ConvolutionND(
                3, 1, 1, (3, 3, 3), nobias=True,
                # initialW must have shape (out_channels, in_channels, k, k, k)
                initialW=np.ones((out_channel, in_channel, 3, 3, 3)))

    def __call__(self, x):
        return F.sum(self.conv(x))

if __name__ == "__main__":
    my_link = MyLink()
    my_link.to_gpu(0)
    # float32 to match the dtype of the link's W parameter
    batch = Variable(np.ones((batch_size, in_channel, 3, 3, 3), dtype=np.float32))
    batch.to_gpu(0)
    loss = my_link(batch)
    loss.backward()
    print(batch.grad)