Defining a manually ordered MNIST dataset with batch size 1 in PyTorch

Date: 2021-03-18 16:35:42

Tags: pytorch mnist batchsize pytorch-dataloader

[] : this denotes one batch. For example, if the batch size were 5, a batch would look like [1,4,7,4,2]. The length of [] is the batch size.

I want the training set to look like this:

[1] -> [1] -> [1] -> [1] -> [1] -> [7] -> [7] -> [7] -> [7] -> [7] -> [3] -> [3] -> [3] -> [3] -> [3] -> ... and so on

That is: first five 1s (batch size = 1), then five 7s (batch size = 1), then five 3s (batch size = 1), and so on...

Can anyone give me an idea?

It would be helpful if someone could explain how to implement this in code.

Thanks! :)

2 answers:

Answer 0 (score: 1)

If you want a DataLoader and only need to define the class label of each sample, you can use the torch.utils.data.Subset class. Despite its name, it does not have to define a strict subset of the dataset: the index list may repeat and reorder indices. For example:

import torch
import torchvision
import torchvision.transforms as T
from itertools import cycle

mnist = torchvision.datasets.MNIST(root='./', train=True, transform=T.ToTensor())

# not sure what "...and so on" implies, but define this list however you like
target_classes = [1, 1, 1, 1, 1, 7, 7, 7, 7, 7, 3, 3, 3, 3, 3]

# create cyclic iterators of indices for each class in MNIST
indices = dict()
for label in torch.unique(mnist.targets).tolist():
    indices[label] = cycle(torch.nonzero(mnist.targets == label).flatten().tolist())

# define the order of indices in the new mnist subset based on target_classes
new_indices = []
for t in target_classes:
    new_indices.append(next(indices[t]))

# create a Subset of MNIST based on new_indices
mnist_modified = torch.utils.data.Subset(mnist, new_indices)
dataloader = torch.utils.data.DataLoader(mnist_modified, batch_size=1, shuffle=False)

for idx, (x, y) in enumerate(dataloader):
    # training loop
    print(f'Batch {idx+1} labels: {y.tolist()}')
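As a side note, the point that Subset simply maps a list of indices onto an existing dataset, so indices may repeat and appear in any order, can be seen on a toy dataset. This is a minimal sketch; the toy dataset and variable names here are illustrative, not from the question:

```python
import torch
from torch.utils.data import TensorDataset, Subset

# Toy dataset: ten samples whose label equals their index
data = torch.arange(10).unsqueeze(1).float()
labels = torch.arange(10)
ds = TensorDataset(data, labels)

# Despite the name, Subset accepts repeated and reordered indices
sub = Subset(ds, [3, 3, 7, 0])
print([int(y) for _, y in sub])  # [3, 3, 7, 0]
```

The same mechanism is what lets the MNIST answer above emit the same class index five times in a row.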

Answer 1 (score: 1)

If you want a DataLoader that returns five samples of the same class in a row, but you don't want to define the class for every index manually, you can create a custom sampler. For example:

import torch
import torchvision
import torchvision.transforms as T
from itertools import cycle

class RepeatClassSampler(torch.utils.data.Sampler):
    def __init__(self, targets, repeat_count, length, shuffle=False):
        if not torch.is_tensor(targets):
            targets = torch.tensor(targets)

        self.targets = targets
        self.repeat_count = repeat_count
        self.length = length
        self.shuffle = shuffle

        self.classes = torch.unique(targets).tolist()
        self.class_indices = dict()
        for label in self.classes:
            self.class_indices[label] = torch.nonzero(targets == label).flatten() 

    def __iter__(self):
        class_index_iters = dict()
        for label in self.classes:
            if self.shuffle:
                # permute within this class's own index list, not over the dict of classes
                class_index_iters[label] = cycle(self.class_indices[label][torch.randperm(len(self.class_indices[label]))].tolist())
            else:
                class_index_iters[label] = cycle(self.class_indices[label].tolist())

        if self.shuffle:
            target_iter = cycle(self.targets[torch.randperm(len(self.targets))].tolist())
        else:
            target_iter = cycle(self.targets.tolist())

        def index_generator():
            for i in range(self.length):
                if i % self.repeat_count == 0:
                    current_class = next(target_iter)
                yield next(class_index_iters[current_class])
    
        return index_generator()

    def __len__(self):
        return self.length


mnist = torchvision.datasets.MNIST(root='./', train=True, transform=T.ToTensor())
dataloader = torch.utils.data.DataLoader(
        mnist,
        batch_size=1,
        sampler=RepeatClassSampler(
            targets=mnist.targets,
            repeat_count=5,
            length=15,      # How many total to pick from your dataset
            shuffle=True))

for idx, (x, y) in enumerate(dataloader):
    # training loop
    print(f'Batch {idx+1} labels: {y.tolist()}')