Does PyTorch have functionality to convert a convolution into its fully connected (matrix) form?

Asked: 2019-06-21 11:48:51

Tags: neural-network conv-neural-network pytorch

I am trying to convert a convolution layer into a fully connected layer.

For example, take a 3×3 input and a 2×2 kernel:

[image: input and kernel]

This is equivalent to the vector-matrix multiplication:

[image: vector-matrix multiplication]

Is there a function in PyTorch that produces this matrix B?
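For concreteness, here is a minimal numpy sketch of what B looks like for this 3×3 input / 2×2 kernel case (the values of k and x are made up for illustration); each row of B holds the kernel entries at the positions of the input that the corresponding output pixel reads:

```python
import numpy as np

# Hypothetical 2x2 kernel and 3x3 input, padding=0, stride=1.
k = np.array([[1., 2.],
              [3., 4.]])
x = np.arange(9, dtype=float).reshape(3, 3)

B = np.zeros((4, 9))  # 2x2 output -> 4 rows, 3x3 input -> 9 columns
for oi in range(2):          # output row
    for oj in range(2):      # output column
        for ki in range(2):  # kernel row
            for kj in range(2):  # kernel column
                B[oi * 2 + oj, (oi + ki) * 3 + (oj + kj)] = k[ki, kj]

out = (B @ x.flatten()).reshape(2, 2)
# out matches the "valid" cross-correlation of x with k (PyTorch's conv2d).
```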

3 Answers:

Answer 0 (score: 2):

I can only partially answer your question:

In your example above, you write the kernel as a matrix and the input as a vector. If instead you write the input as a matrix, you can use torch.nn.Unfold, which computes the convolution explicitly (see the documentation):

import torch

# Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape)
inp = torch.randn(1, 3, 10, 12)
w = torch.randn(2, 3, 4, 5)
inp_unf = torch.nn.functional.unfold(inp, (4, 5))
out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)
out = out_unf.view(1, 2, 7, 8)
(torch.nn.functional.conv2d(inp, w) - out).abs().max()
# tensor(1.9073e-06)
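As a side note, a generic (if memory-hungry) way to materialize the full matrix B for any convolution is to push each standard basis vector through conv2d and collect the outputs as columns of B. A minimal sketch, assuming padding=0 and stride=1 (the sizes here are made up for illustration):

```python
import torch
import torch.nn.functional as F

n_in, h, w = 1, 3, 3
kernel = torch.randn(1, n_in, 2, 2)

# Each row of the identity is one basis input; its conv output is one column of B.
eye = torch.eye(n_in * h * w)
B = F.conv2d(eye.view(-1, n_in, h, w), kernel).reshape(n_in * h * w, -1).t()

x = torch.randn(n_in, h, w)
direct = F.conv2d(x.unsqueeze(0), kernel).flatten()
via_B = B @ x.flatten()  # matches direct by linearity of convolution
```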

However, if you need a matrix for the (smaller) kernel, you can use this function, which is based on Warren Weckesser's answer:

import numpy as np
from scipy import linalg

def toeplitz_1_ch(kernel, input_size):
    # shapes
    k_h, k_w = kernel.shape
    i_h, i_w = input_size
    o_h, o_w = i_h-k_h+1, i_w-k_w+1

    # construct 1d conv toeplitz matrices for each row of the kernel
    toeplitz = []
    for r in range(k_h):
        toeplitz.append(linalg.toeplitz(c=(kernel[r,0], *np.zeros(i_w-k_w)), r=(*kernel[r], *np.zeros(i_w-k_w))) ) 

    # construct toeplitz matrix of toeplitz matrices (just for padding=0)
    h_blocks, w_blocks = o_h, i_h
    h_block, w_block = toeplitz[0].shape

    W_conv = np.zeros((h_blocks, h_block, w_blocks, w_block))

    for i, B in enumerate(toeplitz):
        for j in range(o_h):
            W_conv[j, :, i+j, :] = B

    W_conv.shape = (h_blocks*h_block, w_blocks*w_block)

    return W_conv

This is numpy rather than pytorch. It is for padding=0, but it can easily be adapted by changing h_blocks, w_blocks, and the indexing W_conv[i+j, :, j, :].

Update: Multiple output channels are just multiples of these matrices, since each output channel has its own kernel. Multiple input channels also each get their own kernel (and their own matrix), and their results are summed after the convolution. This can be implemented as follows:

def conv2d_toeplitz(kernel, input):
    """Compute 2d convolution over multiple channels via toeplitz matrix
    Args:
        kernel: shape=(n_out, n_in, H_k, W_k)
        input: shape=(n_in, H_i, W_i)"""

    kernel_size = kernel.shape
    input_size = input.shape
    output_size = (kernel_size[0], input_size[1] - (kernel_size[1]-1), input_size[2] - (kernel_size[2]-1))
    output = np.zeros(output_size)

    for i,ks in enumerate(kernel):  # loop over output channel
        for j,k in enumerate(ks):  # loop over input channel
            T_k = toeplitz_1_ch(k, input_size[1:])
            output[i] += T_k.dot(input[j].flatten()).reshape(output_size[1:])  # sum over input channels

    return output

To check correctness:

import numpy as np
import torch
import torch.nn.functional as F

k = np.random.randn(4*3*3*3).reshape((4,3,3,3))
i = np.random.randn(3,7,9)

out = conv2d_toeplitz(k, i)

# check correctness of convolution via toeplitz matrix
print(np.sum((out - F.conv2d(torch.tensor(i).view(1,3,7,9), torch.tensor(k)).numpy())**2))

>>> 1.0063523219807736e-28 

更新2:

This can also be collapsed into one matrix, so that no looping is needed during the multiplication:

def toeplitz_mult_ch(kernel, input_size):
    """Compute toeplitz matrix for 2d conv with multiple in and out channels.
    Args:
        kernel: shape=(n_out, n_in, H_k, W_k)
        input_size: (n_in, H_i, W_i)"""

    kernel_size = kernel.shape
    output_size = (kernel_size[0], input_size[1] - (kernel_size[1]-1), input_size[2] - (kernel_size[2]-1))
    T = np.zeros((output_size[0], int(np.prod(output_size[1:])), input_size[0], int(np.prod(input_size[1:]))))

    for i,ks in enumerate(kernel):  # loop over output channel
        for j,k in enumerate(ks):  # loop over input channel
            T_k = toeplitz_1_ch(k, input_size[1:])
            T[i, :, j, :] = T_k

    T.shape = (np.prod(output_size), np.prod(input_size))

    return T

The input has to be flattened and the output reshaped after the multiplication. Check correctness (using the same k and i as above):

T = toeplitz_mult_ch(k, i.shape)
out = T.dot(i.flatten()).reshape((1,4,5,7))

# check correctness of convolution via toeplitz matrix
print(np.sum((out - F.conv2d(torch.tensor(i).view(1,3,7,9), torch.tensor(k)).numpy())**2))
>>> 1.5486060830252635e-28
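Since the question's goal was a fully connected layer, the resulting matrix can be loaded directly as the weight of an nn.Linear. A minimal sketch; the random T below is a stand-in of the right shape for the actual toeplitz_mult_ch(k, i.shape) output, so the block runs on its own:

```python
import numpy as np
import torch

# Stand-in for the Toeplitz matrix: shape (prod(output_size), prod(input_size)).
T = np.random.randn(4 * 5 * 7, 3 * 7 * 9)

# Load T as the weight of a bias-free fully connected layer.
fc = torch.nn.Linear(T.shape[1], T.shape[0], bias=False)
with torch.no_grad():
    fc.weight.copy_(torch.from_numpy(T).float())

x = torch.randn(3, 7, 9)
out = fc(x.flatten()).reshape(4, 5, 7)  # same result as T.dot(x.flatten())
```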

Answer 1 (score: 1):

You can use my code for convolution with circular padding:

import numpy as np
import scipy.linalg as linalg

def toeplitz_1d(k, x_size):
    k_size = k.size
    r = *k[(k_size // 2):], *np.zeros(x_size - k_size), *k[:(k_size // 2)]
    c = *np.flip(k)[(k_size // 2):], *np.zeros(x_size - k_size), *np.flip(k)[:(k_size // 2)]
    t = linalg.toeplitz(c=c, r=r)
    return t

def toeplitz_2d(k, x_size):
    k_h, k_w = k.shape
    i_h, i_w = x_size

    ks = np.zeros((i_w, i_h * i_w))
    for i in range(k_h):
        ks[:, i*i_w:(i+1)*i_w] = toeplitz_1d(k[i], i_w)
    ks = np.roll(ks, -i_w, 1)

    t = np.zeros((i_h * i_w, i_h * i_w))
    for i in range(i_h):
        t[i*i_h:(i+1)*i_h,:] = ks
        ks = np.roll(ks, i_w, 1)
    return t

def toeplitz_3d(k, x_size):
    k_oc, k_ic, k_h, k_w = k.shape
    i_c, i_h, i_w = x_size

    t = np.zeros((k_oc * i_h * i_w, i_c * i_h * i_w))

    for o in range(k_oc):
        for i in range(k_ic):
            t[(o * (i_h * i_w)):((o+1) * (i_h * i_w)), (i * (i_h * i_w)):((i+1) * (i_h * i_w))] = toeplitz_2d(k[o, i], (i_h, i_w))

    return t

if __name__ == "__main__":
    import torch
    k = np.random.randint(50, size=(3, 2, 3, 3))
    x = np.random.randint(50, size=(2, 5, 5))
    t = toeplitz_3d(k, x.shape)
    y = t.dot(x.flatten()).reshape(3, 5, 5)
    xx = torch.nn.functional.pad(torch.from_numpy(x.reshape(1, 2, 5, 5)), pad=(1, 1, 1, 1), mode='circular')
    yy = torch.conv2d(xx, torch.from_numpy(k))
    err = ((y - yy.numpy()) ** 2).sum()
    print(err)

Answer 2 (score: 0):

import torch


dim1 = 5
dim2 = dim1
x = torch.randn(dim1, dim2).reshape(1, -1)
kernel = torch.arange(0, 9).reshape(3, 3)
flat_k = torch.zeros(dim1 * dim2)

for i in range(len(kernel)):
    flat_k[i * dim1:i * dim1 + kernel.shape[1]] = kernel[i]

Now flat_k is the first column of the Toeplitz matrix, so you can use scipy.linalg.toeplitz:

from scipy.linalg import toeplitz

k = toeplitz(flat_k.numpy())

Alternatively, if you want to be able to backpropagate through it, you can look at the source code of the toeplitz function in scipy and implement the equivalent in PyTorch.
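For the symmetric case (scipy's default of r = c when only the first column is given), a differentiable equivalent can be sketched in a few lines, since T[i, j] = c[|i - j|] and advanced indexing propagates gradients:

```python
import torch

def toeplitz_torch(c: torch.Tensor) -> torch.Tensor:
    # Symmetric Toeplitz matrix from first column c: T[i, j] = c[|i - j|].
    idx = torch.arange(c.numel())
    return c[(idx.unsqueeze(1) - idx.unsqueeze(0)).abs()]

c = torch.tensor([1., 2., 3.], requires_grad=True)
T = toeplitz_torch(c)
# T == [[1, 2, 3],
#       [2, 1, 2],
#       [3, 2, 1]], and gradients flow back to c.
T.sum().backward()
```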