I am currently working on the Cats vs. Dogs classification task on Kaggle by implementing a deep ConvNet. The following code is used for data preprocessing:
import os
from random import shuffle

import cv2
import numpy as np
from tqdm import tqdm

def label_img(img):
    # filenames look like 'cat.0.jpg' / 'dog.0.jpg'
    word_label = img.split('.')[-3]
    if word_label == 'cat': return [1, 0]
    elif word_label == 'dog': return [0, 1]

def create_train_data():
    training_data = []
    for img in tqdm(os.listdir(TRAIN_DIR)):
        label = label_img(img)
        path = os.path.join(TRAIN_DIR, img)
        # read as grayscale and resize to IMG_SIZE x IMG_SIZE
        img = cv2.resize(cv2.imread(path, cv2.IMREAD_GRAYSCALE), (IMG_SIZE, IMG_SIZE))
        training_data.append([np.array(img), np.array(label)])
    shuffle(training_data)
    return training_data
train_data = create_train_data()
X_train = np.array([i[0] for i in train_data]).reshape(-1, IMG_SIZE,IMG_SIZE,1)
Y_train = np.asarray([i[1] for i in train_data])
I want to implement a function that replicates the following function provided in the TensorFlow deep MNIST tutorial:
batch = mnist.train.next_batch(100)
Answer 0 (score: 2)
In addition to generating the batches, you will probably also want to randomly reshuffle the data in each epoch.
EPOCH = 100
BATCH_SIZE = 128
TRAIN_DATASIZE, _, _, _ = X_train.shape
PERIOD = TRAIN_DATASIZE // BATCH_SIZE  # number of iterations per epoch

for e in range(EPOCH):
    idxs = np.random.permutation(TRAIN_DATASIZE)  # shuffled ordering of the training set
    X_random = X_train[idxs]
    Y_random = Y_train[idxs]
    for i in range(PERIOD):
        batch_X = X_random[i * BATCH_SIZE:(i + 1) * BATCH_SIZE]
        batch_Y = Y_random[i * BATCH_SIZE:(i + 1) * BATCH_SIZE]
        sess.run(train, feed_dict={X: batch_X, Y: batch_Y})
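If you want an interface closer to mnist.train.next_batch itself, the same shuffle-and-slice idea can be wrapped in a small helper. The following is only a minimal sketch; the DataSet class and its attribute names are illustrative assumptions, not part of the answer above:

class DataSet:
    """Illustrative next_batch-style wrapper around in-memory arrays."""
    def __init__(self, images, labels):
        self._images = images
        self._labels = labels
        self._num_examples = images.shape[0]
        self._index = 0

    def next_batch(self, batch_size):
        # reshuffle once the current pass over the data would run past the end
        if self._index + batch_size > self._num_examples:
            perm = np.random.permutation(self._num_examples)
            self._images = self._images[perm]
            self._labels = self._labels[perm]
            self._index = 0
        start = self._index
        self._index += batch_size
        return self._images[start:self._index], self._labels[start:self._index]

train_set = DataSet(X_train, Y_train)
batch_X, batch_Y = train_set.next_batch(100)  # analogous to mnist.train.next_batch(100)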
Answer 1 (score: 0)
This code is a good example of how to implement a batch-generation function.
Briefly, you just need to create two arrays for x_train and y_train:
batch_inputs = np.ndarray(shape=(batch_size), dtype=np.int32)
batch_labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
then fill them with the training data, for example:
batch_inputs[i] = ...
batch_labels[i, 0] = ...
and finally pass the batch to the session:
_, loss_val = session.run([optimizer, loss], feed_dict={train_inputs: batch_inputs, train_labels: batch_labels})
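Adapted to the image data from the question (grayscale images of shape (IMG_SIZE, IMG_SIZE, 1) and one-hot labels of length 2), a batch generator along those lines might look like the sketch below. The names generate_batch and data_index, as well as the placeholder names train_inputs and train_labels, are assumed for illustration and are not from the original answer:

data_index = 0  # global cursor into the training set (illustrative)

def generate_batch(batch_size):
    """Sketch: fill pre-allocated arrays with the next batch_size examples."""
    global data_index
    batch_inputs = np.ndarray(shape=(batch_size, IMG_SIZE, IMG_SIZE, 1), dtype=np.float32)
    batch_labels = np.ndarray(shape=(batch_size, 2), dtype=np.float32)
    for i in range(batch_size):
        batch_inputs[i] = X_train[data_index]
        batch_labels[i] = Y_train[data_index]
        data_index = (data_index + 1) % X_train.shape[0]  # wrap around at the end
    return batch_inputs, batch_labels

batch_X, batch_Y = generate_batch(128)
_, loss_val = session.run([optimizer, loss],
                          feed_dict={train_inputs: batch_X, train_labels: batch_Y})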