Incremental training of a Keras image classification model

Posted: 2018-10-23 03:00:29

Tags: python machine-learning scikit-learn keras deep-learning

I used the smaller VGG model and modified the training script from the tutorial below so that it continues training a previously trained model. Original source of the model and script: https://www.pyimagesearch.com/2018/04/16/keras-and-convolutional-neural-networks-cnns/

Here is what I did:

First training session:

  • Used the original training script from the tutorial to train the model on an image dataset with two classes, A and B

Second training session:

  • Loaded the trained Keras model and continued training it with the modified script below on an image dataset containing only class C, with no new data for classes A and B (the load-and-save approach comes from this Stack Overflow thread: Loading a trained Keras model and continue training; a minimal sketch of that pattern follows this list)
  • Loaded the pickled label array from the first session, merged it with the new label array from the second session, and saved the combined array in lb.pickle
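
The load-and-continue pattern from that thread boils down to roughly the sketch below; the file names and arrays are placeholders, not the actual ones from my project:

from keras.models import load_model

# load_model restores the architecture, weights, and compiled optimizer
# state, so training resumes without recompiling
model = load_model("session1_model.h5")

# continuing training updates the weights, but the output layer keeps
# however many neurons it was originally built with
model.fit(new_trainX, new_trainY, batch_size=10, epochs=10)
model.save("session2_model.h5")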

Result:

After the second session, the trained model recognizes only the new class from that session; the classes trained in the first session appear to have been lost. It simply does not work.

My question: how do I fix the script below so that incremental training works? Alternatively, are there other incremental-training approaches or references that fit my situation?
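
For reference, one direction I have seen suggested (but have not verified myself) is "rehearsal": keep some images of classes A and B, mix them with the class-C images in the second session, and rebuild the classification head so it has one output per known class. A rough sketch follows; it assumes SmallerVGGNet's tail is Dropout -> Dense -> Activation("softmax"), and mixedX/mixedY are hypothetical arrays of the combined old and new data:

from keras.models import load_model, Model
from keras.layers import Dense

old_model = load_model("session1_model.h5")

# swap the 2-way head for a 3-way head (A, B, C); every other layer
# keeps its trained weights; the -3 index skips the old Dense layer
# and its softmax Activation
x = old_model.layers[-3].output
new_head = Dense(3, activation="softmax")(x)
model = Model(inputs=old_model.input, outputs=new_head)
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# mixedX/mixedY (hypothetical) combine retained A/B images with the new
# C images so the old classes are rehearsed while the new one is learned
model.fit(mixedX, mixedY, batch_size=10, epochs=50)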

My modified training script:

from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from keras.preprocessing.image import img_to_array
from keras.models import load_model

from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from smallervggnet import SmallerVGGNet
from imutils import paths
import numpy as np
import argparse, os, sys
import random
import pickle
import cv2

ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
help="path to input dataset (i.e., directory of images)")
ap.add_argument("-im", "--loadmodel", required=True,
help="path to model to be loaded")
ap.add_argument("-m", "--model", required=True,
help="path to output model")
ap.add_argument("-l", "--labelbin", required=True,
help="path to output label binarizer")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
help="path to output accuracy/loss plot")
args = vars(ap.parse_args())


EPOCHS = 100
INIT_LR = 1e-3
BS = 10
IMAGE_DIMS = (256, 256, 3)
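# NOTE: INIT_LR is defined but never used below; when the saved model is
# reloaded, the learning rate travels with its stored optimizer state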

data = []
labels = []


print("[INFO] loading images...")
imagePaths = sorted(list(paths.list_images(args["dataset"])))
random.seed(42)
random.shuffle(imagePaths)

for imagePath in imagePaths:
    image = cv2.imread(imagePath)
    image = cv2.resize(image, (IMAGE_DIMS[1], IMAGE_DIMS[0]))
    image = img_to_array(image)
    data.append(image)


    label = imagePath.split(os.path.sep)[-2]
    labels.append(label)


data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
print("[INFO] data matrix: {:.2f}MB".format(
data.nbytes / (1024 * 1000.0)))


lb = LabelBinarizer()
bLabels = lb.fit_transform(labels)
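# NOTE: with only two distinct label values, LabelBinarizer yields a single
# 0/1 column rather than one column per class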


(trainX, testX, trainY, testY) = train_test_split(data,
bLabels, test_size=0.2, random_state=42)


# added these two lines to avoid an error
trainY = np_utils.to_categorical(trainY, 2)
testY = np_utils.to_categorical(testY, 2)
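# NOTE: num_classes is hard-coded to 2 here, so the one-hot targets can
# never represent a third class such as C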

aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
horizontal_flip=True, fill_mode="nearest")


print("[INFO] load previously trained model")
modelPath = args["loadmodel"]
model = load_model(modelPath)
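# NOTE: load_model restores the architecture, weights, and compiled
# optimizer state saved by model.save(), so fit resumes from session one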


print("[INFO] training network...")
H = model.fit_generator(
aug.flow(trainX, trainY, batch_size=BS),
validation_data=(testX, testY),
steps_per_epoch=len(trainX) // BS,
epochs=EPOCHS, verbose=1)
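# NOTE: the generator draws only from the new dataset, so nothing in this
# session rehearses the previously learned classes A and B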


print("[INFO] serializing network...")
model.save(args["model"])


# my attempt to keep the labels from all training sessions in the label binarizer
prevArray = './train_output/previous_data_array.pickle'

arrPickle = labels

if os.path.exists(prevArray) and os.path.getsize(prevArray) > 0:
    prev = pickle.loads(open(prevArray, 'rb').read())
    arrPickle = np.concatenate((prev,labels), axis=0)

lb = LabelBinarizer()
lb.fit_transform(arrPickle)
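# NOTE: re-fitting the binarizer records every class name seen so far, but
# this does not change the network's output layer or its weights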

print("[INFO] serializing combined label array...")
f = open(prevArray, "wb")
f.write(pickle.dumps(arrPickle))
f.close()

print("[INFO] serializing label binarizer...")
f = open(args["labelbin"], "wb")
f.write(pickle.dumps(lb))
f.close()
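
Assuming the script is saved as train_incremental.py (the file name is mine), I run it like this, with the paths as examples only:

python train_incremental.py --dataset dataset_C \
    --loadmodel ./train_output/session1.model \
    --model ./train_output/session2.model \
    --labelbin ./train_output/lb.pickle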

0 Answers

No answers yet.