Randomly shuffling the CIFAR-10 train and test sets

Date: 2018-12-11 19:33:58

Tags: python numpy random keras numpy-ndarray

I want to randomly shuffle the 60,000 observations of the CIFAR-10 dataset available in the keras.datasets library. I know this may not matter much for building a neural network, but I am new to Python and would like to learn how to handle data in this language.

So, to import the dataset, I run

from keras.datasets import cifar10
(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()

This automatically gives me the default train/test split; but I would like to mix them together. The steps I have in mind are:

  • merge the train and test sets into a dataset X of shape (60000, 32, 32, 3) and a dataset Y of shape (60000, 1)
  • generate some random indices to subset the X and Y datasets into, say, a train set of 50000 obs and a test set of 10000 obs
  • create new datasets (in ndarray format) X_train, X_test, Y_train, Y_test with the same shapes as the originals, so that I can start training a convolutional neural network

But perhaps there is a faster way.

I have tried different approaches for hours, but nothing has worked. Could someone help me? Many thanks in advance.

2 Answers:

Answer 0 (score: 1)

You can split the data using sklearn.model_selection.train_test_split. If you want the same random selection of indices every time you run the code, set the random_state value and you will get the identical train/test split on every run.

from keras.datasets import cifar10
(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()

# View first image
import matplotlib.pyplot as plt
plt.imshow(X_train[0])
plt.show()


import numpy as np
from sklearn.model_selection import train_test_split

# Concatenate train and test images
X = np.concatenate((X_train,X_test))
y = np.concatenate((Y_train,Y_test))

# Check shape
print(X.shape) # (60000, 32, 32, 3)

# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=10000, random_state=1234)

# Check shape
print(X_train.shape) # (50000, 32, 32, 3)

# View first image
plt.imshow(X_train[0])
plt.show()

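As a side note (not in the original answer), train_test_split also accepts a stratify argument, which would keep the 10 CIFAR-10 classes equally represented in both splits; a small sketch on synthetic data standing in for the real images:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for CIFAR-10: 600 "images", 10 classes, 60 each
X = np.random.rand(600, 32, 32, 3)
y = np.repeat(np.arange(10), 60).reshape(-1, 1)

# stratify keeps the per-class proportions identical in both splits
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=100, random_state=1234, stratify=y.ravel())

print(X_tr.shape)                 # (500, 32, 32, 3)
print(np.bincount(y_te.ravel()))  # exactly 10 samples of each class
```

With a plain random split the per-class counts only match in expectation; stratification makes them exact, which can matter for small test sets.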

Answer 1 (score: 0)

Here is a complete demo of what you asked for. First we download the data and shuffle it once, then use the first 50K observations for training and the remaining 10K for validation.

In [21]: import tensorflow  
In [22]: import tensorflow.keras.datasets as datasets    
In [23]: cifar10 = datasets.cifar10.load_data() 
In [24]: (X_train, Y_train), (X_test, Y_test) = datasets.cifar10.load_data() 

In [25]: X_train.shape, Y_train.shape 
Out[25]: ((50000, 32, 32, 3), (50000, 1))

In [26]: X_test.shape, Y_test.shape 
Out[26]: ((10000, 32, 32, 3), (10000, 1)) 

In [27]: import numpy as np
In [28]: X, Y = np.vstack((X_train, X_test)), np.vstack((Y_train, Y_test))  

In [29]: X.shape, Y.shape 
Out[29]: ((60000, 32, 32, 3), (60000, 1)) 

In [30]: # Shuffle only the training data along axis 0 
    ...: def shuffle_train_data(X_train, Y_train): 
    ...:     """shuffle X and Y with one shared permutation""" 
    ...:     perm = np.random.permutation(len(Y_train)) 
    ...:     Xtr_shuf = X_train[perm] 
    ...:     Ytr_shuf = Y_train[perm] 
    ...:      
    ...:     return Xtr_shuf, Ytr_shuf 


In [31]: X_shuffled, Y_shuffled = shuffle_train_data(X, Y) 

In [32]: (X_train_new, Y_train_new) = X_shuffled[:50000, ...], Y_shuffled[:50000, ...] 

In [33]: (X_test_new, Y_test_new) = X_shuffled[50000:, ...], Y_shuffled[50000:, ...] 

In [34]: X_train_new.shape, Y_train_new.shape 
Out[34]: ((50000, 32, 32, 3), (50000, 1))

In [35]: X_test_new.shape, Y_test_new.shape 
Out[35]: ((10000, 32, 32, 3), (10000, 1))

We have a function shuffle_train_data that shuffles the data consistently: because the same permutation indexes both arrays, every example and its label stay paired after the shuffle.
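That pairing is easy to verify on dummy data where each "image" literally equals its label; a minimal self-contained check:

```python
import numpy as np

def shuffle_train_data(X_train, Y_train):
    """Shuffle images and labels with a single shared permutation."""
    perm = np.random.permutation(len(Y_train))
    return X_train[perm], Y_train[perm]

# Dummy data where label i tags "image" i, so alignment is easy to check
X = np.arange(100).reshape(100, 1, 1, 1)  # 100 tiny fake images
Y = np.arange(100).reshape(100, 1)        # label equals image id

Xs, Ys = shuffle_train_data(X, Y)

print((Xs.ravel() == Ys.ravel()).all())  # True: each image kept its label
```

If you instead shuffled X and Y with two independent permutations, this check would fail almost surely, which is exactly the bug the shared perm avoids.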