TypeError: object of type 'numpy.int64' has no len()

Date: 2018-12-24 18:13:45

Tags: python pandas numpy dataset pytorch

I am making a DataLoader from a Dataset in PyTorch.

I start by loading a DataFrame whose columns all have dtype np.float64:

result = pd.read_csv('dummy.csv', header=0, dtype=DTYPE_CLEANED_DF)

Here is my Dataset class:

from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, result):
        headers = list(result)
        headers.remove('classes')
        self.x_data = result[headers]
        self.y_data = result['classes']
        self.len = self.x_data.shape[0]

    def __getitem__(self, index):
        x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float)
        y = torch.tensor(self.y_data.iloc[index], dtype=torch.float)
        return (x, y)

    def __len__(self):
        return self.len

Prepare the train_loader and test_loader:

train_size = int(0.5 * len(full_dataset))
test_size = len(full_dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size])

train_loader = DataLoader(dataset=train_dataset, batch_size=16, shuffle=True, num_workers=1)
test_loader = DataLoader(dataset=train_dataset)

Here is my csv file.

When I try to iterate over the train_loader, the error below is raised.

for i, (data, target) in enumerate(train_loader):
    print(i)

TypeError                                 Traceback (most recent call last)
<ipython-input-32-0b4921c3fe8c> in <module>
----> 1 for i , (data, target) in enumerate(train_loader):
      2     print(i)

/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
    635                 self.reorder_dict[idx] = batch
    636                 continue
--> 637             return self._process_next_batch(batch)
    638
    639     next = __next__  # Python 2 compatibility

/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
    656         self._put_indices()
    657         if isinstance(batch, ExceptionWrapper):
--> 658             raise batch.exc_type(batch.exc_msg)
    659         return batch
    660

TypeError: Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 103, in __getitem__
    return self.dataset[self.indices[idx]]
  File "<ipython-input-27-107e03bc3c6a>", line 12, in __getitem__
    x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 1478, in __getitem__
    return self._getitem_axis(maybe_callable, axis=axis)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 2091, in _getitem_axis
    return self._get_list_axis(key, axis=axis)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 2070, in _get_list_axis
    return self.obj._take(key, axis=axis)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py", line 2789, in _take
    verify=True)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/internals.py", line 4537, in take
    new_labels = self.axes[axis].take(indexer)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 2195, in take
    return self._shallow_copy(taken)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/range.py", line 267, in _shallow_copy
    return self._int64index._shallow_copy(values, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/numeric.py", line 68, in _shallow_copy
    return self._shallow_copy_with_infer(values=values, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 538, in _shallow_copy_with_infer
    if not len(values) and 'dtype' not in kwargs:
TypeError: object of type 'numpy.int64' has no len()

Related issues:
https://github.com/pytorch/pytorch/issues/10165
https://github.com/pytorch/pytorch/pull/9237
https://github.com/pandas-dev/pandas/issues/21946

Question:
How do I solve this problem here?

6 Answers:

Answer 0 (score: 3)

I think the problem is that after using random_split, index is now a torch.Tensor rather than an int. I found that adding a quick type check to __getitem__ and calling .item() on the tensor worked for me:

def __getitem__(self, index):

    if type(index) == torch.Tensor:
        # indices produced by random_split may be 0-dim tensors; convert to a plain int
        index = index.item()

    x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float)
    y = torch.tensor(self.y_data.iloc[index], dtype=torch.float)
    return (x, y)

来源:https://discuss.pytorch.org/t/issues-with-torch-utils-data-random-split/22298/8

Answer 1 (score: 1)

Reference:
https://github.com/pytorch/pytorch/issues/9211

Simply add .tolist() to the indices line:

from torch import randperm
from torch._utils import _accumulate
from torch.utils.data import Subset

def random_split(dataset, lengths):
    """
    Randomly split a dataset into non-overlapping new datasets of given lengths.
    Arguments:
        dataset (Dataset): Dataset to be split
        lengths (sequence): lengths of splits to be produced
    """
    if sum(lengths) != len(dataset):
        raise ValueError("Sum of input lengths does not equal the length of the input dataset!")

    indices = randperm(sum(lengths)).tolist()
    return [Subset(dataset, indices[offset - length:offset]) for offset, length in zip(_accumulate(lengths), lengths)]
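A brief usage sketch, assuming the patched function above is defined in your own module rather than inside PyTorch (full_dataset, train_size and test_size are the names from the question):

# called exactly like torch.utils.data.random_split, but the resulting
# Subset objects now hold plain Python int indices
train_dataset, test_dataset = random_split(full_dataset, [train_size, test_size])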

Answer 2 (score: 0)

Why not simply try:

self.len = len(self.x_data)

len works on a pandas DataFrame directly, with no need to convert it to an array or tensor.
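A quick illustration of that point, using a toy DataFrame made up purely for demonstration:

import pandas as pd

df = pd.DataFrame({'feature': [0.1, 0.2, 0.3], 'classes': [0.0, 1.0, 0.0]})
print(len(df))             # 3 -- len() works directly on a DataFrame
print(len(df['feature']))  # 3 -- and on a Series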

Answer 3 (score: 0)

I solved the problem by upgrading PyTorch to version 1.3.

https://pytorch.org/get-started/locally/

Answer 4 (score: 0)

I have 2298 images in total, so if I split them like this:

[int(len(data)*0.8),int(len(data)*0.2)]

it throws the error in question, because

int(len(data)*0.8) + int(len(data)*0.2) = 2297

So what I did instead was use the floor and ceil functions:

[int(np.floor(len(data)*0.8)), int(np.ceil(len(data)*0.2))]

That sums to 2298, and the error goes away.
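A minimal sketch of that arithmetic (data is assumed to be a dataset of 2298 items, as in this answer; computing test_len = len(data) - train_len is an equivalent way to guarantee the lengths add up):

import numpy as np
from torch.utils.data import random_split

n = len(data)                       # 2298 here
train_len = int(np.floor(n * 0.8))  # 1838
test_len = int(np.ceil(n * 0.2))    # 460
assert train_len + test_len == n    # 2298, so random_split accepts the lengths

train_dataset, test_dataset = random_split(data, [train_len, test_len])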

Answer 5 (score: 0)

In my script, I first create a TensorDataset via dataset = TensorDataset(data_x, data_y) and then use train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size]). This does not cause any problems in the later training iterations.
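A minimal sketch of that approach, assuming data_x and data_y are built from the question's DataFrame (result and its 'classes' column come from the question; the rest is illustrative):

import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

# convert the DataFrame to plain tensors up front, so indexing never goes through pandas
data_x = torch.tensor(result.drop(columns=['classes']).values, dtype=torch.float)
data_y = torch.tensor(result['classes'].values, dtype=torch.float)

dataset = TensorDataset(data_x, data_y)
train_size = int(0.5 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = random_split(dataset, [train_size, test_size])

train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True, num_workers=1)
test_loader = DataLoader(test_dataset)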