RuntimeError: Given input size: (256x1x38). Calculated output size: (256x0x1). Output size is too small

Date: 2020-02-03 19:01:35

Tags: python neural-network runtime-error gpu pytorch

I am trying to run a neural citation network model implemented in PyTorch (model: https://github.com/timoklein/neural_citation).

The GitHub repo provides an example IPython notebook for reference (https://github.com/timoklein/neural_citation/blob/master/NCN_training.ipynb).

However, running the code from the notebook produces a runtime error:

Running on: cuda
Number of model parameters: 24,341,796
Encoders: # Filters = 256, Context filter length = [4, 4, 5, 6, 7],  Context filter length = [1, 2]
Embeddings: Dimension = 128, Pad index = 1, Context vocab = 20002, Author vocab = 20002, Title vocab = 20004
Decoder: # GRU cells = 1, Hidden size = 256
Parameters: Dropout = 0.2, Show attention = False
-------------------------------------------------
TRAINING SETTINGS
Seed = 34, # Epochs = 20, Batch size = 64, Initial lr = 0.001
HBox(children=(IntProgress(value=0, description='Epochs', max=20, style=ProgressStyle(description_width='initial')), HTML(value='')))
HBox(children=(IntProgress(value=0, description='Training batches', max=6280, style=ProgressStyle(description_width='initial')), HTML(value='')))

Traceback (most recent call last):
  File "/tmp/pycharm_project_813/ncn_training.py", line 54, in <module>
    model_name = "embed_128_hid_256_1_GRU")
  File "/tmp/pycharm_project_813/{PRJT_NAME}/training.py", line 225, in train_model
    train_loss = train(model, train_iterator, optimizer, criterion, clip)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/training.py", line 106, in train
    output = model(context = cntxt, title = ttl, authors_citing = citing, authors_cited = cited)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 486, in forward
    encoder_outputs = self.encoder(context, authors_citing, authors_cited)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 185, in forward
    context = self.context_encoder(context)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 105, in forward
    x = [encoder(x) for encoder in self.encoder]
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 105, in <listcomp>
    x = [encoder(x) for encoder in self.encoder]
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 61, in forward
    x = F.max_pool2d(x, kernel_size=pool_size)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/_jit_internal.py", line 134, in fn
    return if_false(*args, **kwargs)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/functional.py", line 487, in _max_pool2d
    input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (256x1x38). Calculated output size: (256x0x1). Output size is too small

Process finished with exit code 1
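
As far as I can tell, the failing call is F.max_pool2d in model.py line 61: the output size of a pooling layer is floor((input_size - kernel_size) / stride) + 1 per spatial dimension, and here it comes out as 0 for the height. The snippet below is my own standalone reproduction (not code from the repo; the kernel size (2, 38) is only a guess that matches the numbers in the message):

import torch
import torch.nn.functional as F

# Shapes copied from the traceback: 256 channels, spatial size 1 x 38
x = torch.randn(1, 256, 1, 38)

# max_pool2d uses stride = kernel_size by default, so the output height is
# floor((1 - 2) / 2) + 1 = 0, which PyTorch rejects
F.max_pool2d(x, kernel_size=(2, 38))
# -> RuntimeError: Given input size: (256x1x38). Calculated output size: (256x0x1).
#    Output size is too small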

I searched for this error and found several proposed solutions, but none of them fixed my problem.

I have already changed the padding size, the input size, and the number of layers, but none of that worked. What should I do?
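
For debugging, a shape check could be added right before the pooling call (my own sketch, not repo code; pool_size refers to the local variable in model.py line 61):

import torch.nn.functional as F

def checked_max_pool2d(x, pool_size):
    # Print the input shape next to the kernel so any mismatch is visible
    print(f"pool input: {tuple(x.shape)}, kernel: {pool_size}")
    return F.max_pool2d(x, kernel_size=pool_size)

That should show whether one spatial dimension of the conv output is smaller than the corresponding pooling kernel dimension.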

Thanks in advance for your help!


IPython notebook code:

from ncn.model import *
from ncn.training import *

# Fix RNG seeds for reproducibility
random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

# set up training: build vocabularies and bucketized iterators from the arXiv data
data = get_bucketized_iterators("/home/jupyter/tutorials/seminar_kd/arxiv_data.csv",
                                batch_size = 64,
                                len_context_vocab = 20000,
                                len_title_vocab = 20000,
                                len_aut_vocab = 20000)
# Pad index and vocabulary sizes are read off the fitted fields
PAD_IDX = data.ttl.vocab.stoi['<pad>']
cntxt_vocab_len = len(data.cntxt.vocab)
aut_vocab_len = len(data.aut.vocab)
ttl_vocab_len = len(data.ttl.vocab)

# Instantiate the NCN model with the hyperparameters shown in the log above
net = NeuralCitationNetwork(context_filters=[4,4,5,6,7],
                            author_filters=[1,2],
                            context_vocab_size=cntxt_vocab_len,
                            title_vocab_size=ttl_vocab_len,
                            author_vocab_size=aut_vocab_len,
                            pad_idx=PAD_IDX,
                            num_filters=256,
                            authors=True, 
                            embed_size=128,
                            num_layers=1,
                            hidden_size=256,
                            dropout_p=0.2,
                            show_attention=False)
net.to(DEVICE)

# Start training; this is the call that produces the traceback above
train_losses, valid_losses = train_model(model = net, 
                                         train_iterator = data.train_iter, 
                                         valid_iterator = data.valid_iter,
                                         lr = 0.001,
                                         pad = PAD_IDX,
                                         model_name = "embed_128_hid_256_1_GRU")

0 Answers

No answers yet.