Char RNN classification with batch size

Time: 2020-09-28 08:16:44

Tags: python machine-learning pytorch classification recurrent-neural-network

I am replicating this example for classification with a PyTorch char-rnn:

for iter in range(1, n_iters + 1):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output, loss = train(category_tensor, line_tensor)
    current_loss += loss

I see that at every epoch only one example is taken, at random. I would like every epoch to go through the whole dataset with a specific batch size of examples. I could adjust the code to do this myself, but I was wondering whether some flag for it already exists.

Thank you

1 answer:

Answer 0: (score: 1)

If you construct your dataset class by subclassing the PyTorch Dataset class and then feed it into the PyTorch DataLoader class, you can set the batch_size parameter to determine how many examples you get in each iteration of your training loop.
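In its most generic form the pattern looks roughly like this (a minimal sketch with placeholder names; the concrete classes for the name data follow below):

from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, data):
        self.data = data
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return self.data[idx]

loader = DataLoader(MyDataset(list(range(10))), batch_size=4, shuffle=True)
for batch in loader:
    print(batch)  # a tensor of 4 examples per iteration (the last batch may be smaller)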

I have followed the same tutorial as you. I can show you how I have used the PyTorch classes above to get the data in batches.

import pandas as pd
from pathlib import Path

# load data into a DataFrame using the findFiles function as in the tutorial
files = findFiles('data/names/*.txt')  # glob pattern as in the tutorial
df_names = pd.concat([
    pd.read_table(f, names=["names"], header=None)
      .assign(lang=Path(f).stem)  # the file stem (e.g. "Japanese") is the language label
    for f in files]).reset_index(drop=True)
print(df_names.head())
print(df_names.head())

# output: 
#      names      lang
# 0      Abe  Japanese
# 1  Abukara  Japanese
# 2   Adachi  Japanese
# 3     Aida  Japanese
# 4   Aihara  Japanese

# Make train and validation data
from sklearn.model_selection import train_test_split
X_train, X_dev, y_train, y_dev = train_test_split(df_names.names, df_names.lang,
                                                  train_size=0.8)
df_train = pd.concat([X_train, y_train], axis=1)
df_val = pd.concat([X_dev, y_dev], axis=1)

Now I construct a modified dataset class for the dataframes above by subclassing the PyTorch Dataset class.

import torch
from torch.utils.data import Dataset, DataLoader

class NameDatasetReader(Dataset):
    def __init__(self, df: pd.DataFrame):
        # reset the index so .loc[idx] works on the shuffled split from train_test_split
        self.df = df.reset_index(drop=True)
        # map each language to an integer so the labels can be collected into a tensor later
        self.lang_to_idx = {lang: i for i, lang in enumerate(sorted(df.lang.unique()))}

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx: int):
        row = self.df.loc[idx]              # gets a row from the df
        input_name = list(row.names)        # turns the name into a list of chars
        len_name = len(input_name)          # length of name (used to pad packed sequence)
        label = self.lang_to_idx[row.lang]  # integer-encoded target
        return input_name, len_name, label

train_dat = NameDatasetReader(df_train)  # make dataset from dataframe with training data
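As a quick sanity check you can index into the dataset directly (the exact name and label you get back depend on your split):

name_chars, name_len, label = train_dat[0]
print(name_chars, name_len, label)
# e.g. ['A', 'b', 'e'] 3 5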

Now, the thing is that when you want to work with batches and sequences, the sequences in each batch need to be of equal length. That is why I also extract the length of each name in the __getitem__() function above. That length will be used in a function that modifies the training examples in each batch.

Such a function is called a collate_batch function, and in this example it modifies each batch of your training data so that the sequences in a given batch are of equal length.

# Dictionary of all letters (as in the original tutorial,
#  I have just added an entry for the padding token, which gets index 0)
all_letters_dict = dict(zip(all_letters, range(1, len(all_letters) + 1)))
all_letters_dict['<PAD>'] = 0

# function to turn a name into a tensor
def line_to_tensor(line):
    """turns a name into a tensor of one-hot encoded vectors"""
    tensor = torch.zeros(len(line),
                         len(all_letters_dict))  # (name_len x vocab_size) - <PAD> is part of the vocab
    for li, letter in enumerate(line):
        tensor[li][all_letters_dict[letter]] = 1
    return tensor
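As a quick check of line_to_tensor (the exact vocabulary size depends on your all_letters):

t = line_to_tensor(list("Abe"))
print(t.shape)  # (name_len x vocab_size), e.g. torch.Size([3, 59])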

from typing import List, Tuple

def collate_batch_lstm(input_data: List[Tuple]) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """
    Combines multiple name samples into a single batch
    :param input_data: a list of (input_name, seq_len, label) tuples coming from the Dataset
    :return: A tuple of tensors (input_ids, seq_lens, labels)
    """
    
    # loops over batch input and extracts vals 
    names = [i[0] for i in input_data] 
    seq_names_len = [i[1] for i in input_data] 
    labels = [i[2] for i in input_data] 

    max_length = max(seq_names_len) # longest sequence aka. name 

    # Pad all of the input samples to the max length 
    names = [(name + ["<PAD>"] * (max_length - len(name))) for name in names]  
    
    input_ids = [line_to_tensor(name) for name in names] # turn each list of chars into a tensor with one hot vecs
    
    # Make sure each sample is max_length long
    assert (all(len(i) == max_length for i in input_ids))
    return torch.stack(input_ids), torch.tensor(seq_names_len), torch.tensor(labels) 
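To see the padding in action, you can call the collate function by hand on two samples of unequal length (the integer labels here are just placeholders):

batch = [(list("Abe"), 3, 0), (list("Aihara"), 6, 1)]
input_ids, seq_lens, labels = collate_batch_lstm(batch)
print(input_ids.shape)  # torch.Size([2, 6, vocab_size]) - "Abe" is padded to length 6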

Now I can construct a dataloader by passing the dataset object from above, the collate_batch_lstm() function from above, and a given batch_size to the DataLoader class.

train_dat_loader = DataLoader(train_dat, batch_size=4, collate_fn=collate_batch_lstm)
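As a side note, for actual training you will typically also want to pass shuffle=True so that the data is reshuffled at every epoch:

train_dat_loader = DataLoader(train_dat, batch_size=4, shuffle=True,
                              collate_fn=collate_batch_lstm)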

You can now iterate over train_dat_loader, which returns a training batch with 4 names in each iteration.

Consider a given batch from train_dat_loader:

seq_tensor, seq_lengths, labels = next(iter(train_dat_loader))
print(seq_tensor.shape, seq_lengths.shape, labels.shape)
print(seq_tensor)
print(seq_lengths)
print(labels)
# output: 
# torch.Size([4, 11, 59]) torch.Size([4]) torch.Size([4])
# tensor([[[0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          ...,
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.]],

#         [[0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          ...,
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.]],

#         [[0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          ...,
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.]],

#         [[0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          ...,
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.]]])
# tensor([11,  3,  8,  7])
# tensor([14,  1, 14,  2])

This gives us a tensor of size (4 x 11 x 59): 4 because we specified a batch size of 4; 11 is the length of the longest name in the given batch (all other names are padded with zeros so that they are of equal length); 59 is the number of characters in our vocabulary.

The next step is to incorporate this into your training routine and use a packing routine to avoid redundant computation on the zeros with which you padded your data :)
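For completeness, a minimal sketch of what that packing step could look like, assuming an nn.LSTM built with batch_first=True (the lstm variable and its sizes here are hypothetical):

import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

lstm = nn.LSTM(input_size=59, hidden_size=128, batch_first=True)  # hypothetical sizes
packed = pack_padded_sequence(seq_tensor, seq_lengths,
                              batch_first=True, enforce_sorted=False)
output, (h_n, c_n) = lstm(packed)  # the padded timesteps are skipped entirely

With enforce_sorted=False you do not have to sort the batch by sequence length yourself.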