I have converted a PyTorch tensor of size torch.Size([3, 28, 28]) to a NumPy array of size (28, 28, 3), and that step seems to work fine. I then tried to convert it to a PIL image with img = Image.fromarray(img.astype('uint8'), mode='RGB'), but the returned img has size (28, 28), when I expected it to be (28, 28, 3) (or (3, 28, 28)). I can't figure out why this happens. I made sure to convert to uint8 and to use RGB mode, as other posters online have suggested, but neither of those steps (nor using np.ascontiguousarray) helps. PIL version: 1.1.7.
EDIT: Here is a minimal example. I'll keep the text above in case it is of any help.
# This code implements the __getitem__ function for a child class of datasets.MNIST in pytorch
# https://pytorch.org/docs/stable/_modules/torchvision/datasets/mnist.html#MNIST
img, label = self.data[index], self.targets[index]
assert img.shape == (3, 28, 28), \
(f'[Before PIL] Incorrect image shape: expecting (3, 28, 28), '
f'received {img.shape}')
print('Before reshape:', img.shape) # torch.Size([3, 28, 28])
img = img.numpy().reshape(3, 28, 28)
img = np.stack([img[0,:,:], img[1,:,:], img[2,:,:]], axis=2)
print('After reshape:', img.shape) # (28, 28, 3)
# doing this so that it is consistent with all other datasets
# to return a PIL Image
img = Image.fromarray(img.astype('uint8'), mode='RGB') # Returns 28 x 28 image
assert img.size == (3, 28, 28), \
(f'[Before Transform] Incorrect image shape: expecting (3, 28, 28), '
f'received {img.size}')
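For what it's worth, I believe the reshape/stack combination above is equivalent to a single transpose (a minimal sketch, assuming img is still the 3 x 28 x 28 tensor taken from self.data):

img = img.numpy().transpose(1, 2, 0)  # CHW -> HWC, i.e. (3, 28, 28) -> (28, 28, 3)
print('After transpose:', img.shape)  # (28, 28, 3)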
from PIL import Image
import numpy as np
img = np.random.randn(28, 28, 3)
img = Image.fromarray(img.astype('uint8'), mode='RGB') # Returns 28 x 28 image
assert img.size == (28, 28, 3), \
(f'[Before Transform] Incorrect image shape: expecting (28, 28, 3), '
f'received {img.size}')
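In case it helps with diagnosing this, here are the other ways I know of to inspect what Image.fromarray returns (a minimal sketch; img.mode, img.getbands() and np.asarray are standard PIL/NumPy calls):

print(img.size)                  # (28, 28) -- the 2-tuple that surprised me
print(img.mode, img.getbands())  # 'RGB', ('R', 'G', 'B')
print(np.asarray(img).shape)     # shape of the underlying pixel data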