One line in my model, tr.nn.Linear(hw_flat * num_filters*8, num_fc), causes an OOM error when the model is initialized. Commenting it out removes the memory problem.
import torch as tr
from layers import Conv2dSame, Flatten  # custom layers defined in layers.py

class Discriminator(tr.nn.Module):
    def __init__(self, cfg):
        super(Discriminator, self).__init__()
        num_filters = 64
        # Four stride-2 convs downsample by 2**4: (1536 / 16)**2 = 96**2 = 9216
        hw_flat = int(cfg.hr_resolution[0] / 2**4)**2
        num_fc = 1024
        self.model = tr.nn.Sequential(
            # Channels in, channels out, filter size, stride, padding
            Conv2dSame(cfg.num_channels, num_filters, 3),
            tr.nn.LeakyReLU(),
            Conv2dSame(num_filters, num_filters, 3, 2),
            tr.nn.BatchNorm2d(num_filters),
            tr.nn.LeakyReLU(),
            Conv2dSame(num_filters, num_filters*2, 3),
            tr.nn.BatchNorm2d(num_filters*2),
            tr.nn.LeakyReLU(),
            Conv2dSame(num_filters*2, num_filters*2, 3, 2),
            tr.nn.BatchNorm2d(num_filters*2),
            tr.nn.LeakyReLU(),
            Conv2dSame(num_filters*2, num_filters*4, 3),
            tr.nn.BatchNorm2d(num_filters*4),
            tr.nn.LeakyReLU(),
            Conv2dSame(num_filters*4, num_filters*4, 3, 2),
            tr.nn.BatchNorm2d(num_filters*4),
            tr.nn.LeakyReLU(),
            Conv2dSame(num_filters*4, num_filters*8, 3),
            tr.nn.BatchNorm2d(num_filters*8),
            tr.nn.LeakyReLU(),
            Conv2dSame(num_filters*8, num_filters*8, 3, 2),
            tr.nn.BatchNorm2d(num_filters*8),
            tr.nn.LeakyReLU(),
            Flatten(),
            tr.nn.Linear(hw_flat * num_filters*8, num_fc),
            tr.nn.LeakyReLU(),
            tr.nn.Linear(num_fc, 1),
            tr.nn.Sigmoid()
        )
        self.model.apply(self.init_weights)

    def forward(self, x_in):
        x_out = self.model(x_in)
        return x_out

    def init_weights(self, layer):
        if type(layer) in [tr.nn.Conv2d, tr.nn.Linear]:
            tr.nn.init.xavier_uniform_(layer.weight)
This is strange to me, because hw_flat = 96 * 96 = 9216 and num_filters*8 = 512, so hw_flat * num_filters*8 = 4718592, which is the number of parameters in that layer. I have confirmed this calculation, because changing the layer to tr.nn.Linear(4718592, num_fc) produces the same output.
This makes no sense to me, because with dtype = float32 the expected size should be 32 * 4718592 = 150,994,944 bytes, which is roughly 150 MB.
The error message is:
Traceback (most recent call last):
File "main.py", line 116, in <module>
main()
File "main.py", line 112, in main
srgan = SRGAN(cfg)
File "main.py", line 25, in __init__
self.discriminator = Discriminator(cfg).to(device)
File "/home/jpatts/Documents/ECE/ECE471-SRGAN/models.py", line 87, in __init__
tr.nn.Linear(hw_flat * num_filters*8, num_fc),
File "/home/jpatts/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 51, in __init__
self.weight = Parameter(torch.Tensor(out_features, in_features))
RuntimeError: $ Torch: not enough memory: you tried to allocate 18GB. Buy new RAM! at /pytorch/aten/src/TH/THGeneral.cpp:201
I am also running with a batch size of only 1 (which has no effect on this error). The overall input shape to the network is (1, 3, 1536, 1536), and the shape after the Flatten layer is (1, 4718592).
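For reference, that Flatten size follows directly from the four stride-2 convolutions; a quick check of the shapes above:

h = w = 1536
for _ in range(4):        # each of the four stride-2 Conv2dSame layers halves h and w
    h //= 2
    w //= 2
print(h, w, h * w * 512)  # 96 96 4718592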
Why is this happening?
Answer (score: 1)
Your linear layer is quite large; it does, in fact, need at least 18 GB of memory. (Your estimate is off for two reasons: (1) a float32 takes 4 bytes of memory, not 32; and (2) you didn't multiply by the output size.)
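To make the corrected arithmetic concrete, here is a minimal sketch using the sizes from the question:

in_features = 9216 * 512                        # hw_flat * num_filters*8 = 4,718,592
out_features = 1024                             # num_fc
weight_bytes = in_features * out_features * 4   # 4 bytes per float32
print(weight_bytes / 2**30)                     # 18.0 -- the 18GB in the error message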
Don't use linear layers that are too large. A linear layer nn.Linear(m, n) uses O(n*m) memory: that is to say, the memory requirements of the weights scale quadratically with the number of features. It is very easy to blow through your memory this way (and remember that you will need at least twice the size of the weights, since you also need to store the gradients).
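One common way to keep that product small (a sketch of an alternative head, not part of the original answer) is to pool away the spatial dimensions before the first linear layer, so its input size no longer grows with the image resolution:

import torch as tr

# Hypothetical replacement for the Flatten() + Linear(hw_flat*512, 1024) head:
# global average pooling collapses (N, 512, 96, 96) to (N, 512, 1, 1), so the
# first Linear weight is only 512 * 1024 floats (~2 MB) instead of ~18 GB.
head = tr.nn.Sequential(
    tr.nn.AdaptiveAvgPool2d(1),
    tr.nn.Flatten(),              # built in since PyTorch 1.2; else use the custom Flatten
    tr.nn.Linear(512, 1024),
    tr.nn.LeakyReLU(),
    tr.nn.Linear(1024, 1),
    tr.nn.Sigmoid(),
)

x = tr.randn(1, 512, 96, 96)      # shape coming out of the last conv block
print(head(x).shape)              # torch.Size([1, 1])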