NumPy index out of bounds

Asked: 2019-08-19 06:47:20

Tags: python python-3.x deep-learning vgg-net

I am trying to fine-tune VGG16 on an HDF5 file (the flowers17 dataset). This error message is thrown:

  `index 3 is out of bounds for axis 1 with size 2`

Is it failing because I feed it the dataset's RGB mean values? Should I use 0/1 scaling instead, and if so, how should I rewrite the code? Please advise... This is part of my train file (a minimal reproduction attempt follows the code):

means = json.loads(open(config.DATASET_MEAN).read())

# construct the image generator for data augmentation
aug = ImageDataGenerator(rotation_range=30, width_shift_range=0.1,
    height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
    horizontal_flip=True, fill_mode="nearest")

# initialize the image preprocessors
aap = AspectAwarePreprocessor(224, 224)
mp = MeanPreprocessor(means["R"], means["G"], means["B"])
iap = ImageToArrayPreprocessor()
sp = SimplePreprocessor(224, 224)
pp = PatchPreprocessor(224, 224)
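# (AspectAwarePreprocessor and SimplePreprocessor resize to 224x224,
# PatchPreprocessor extracts random 224x224 crops for augmentation,
# MeanPreprocessor subtracts the per-channel training-set means, and
# ImageToArrayPreprocessor converts images to Keras-ordered arrays;
# note that aap is initialized here but not used by the generators below)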

# initialize the training and validation dataset generators
trainGen = HDF5DatasetGenerator(config.TRAIN_HDF5, 16, aug=aug,
    preprocessors=[pp, mp, iap], classes=2)
testGen = HDF5DatasetGenerator(config.TEST_HDF5, 16,
    preprocessors=[sp, mp, iap], classes=2)
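# (classes sets the width of the one-hot label matrix that the
# generator builds from the integer labels stored in the HDF5 file)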


# load the VGG16 network, ensuring the head FC layer sets are left
# off
baseModel = VGG16(weights="imagenet", include_top=False,
    input_tensor=Input(shape=(224, 224, 3)))

# initialize the new head of the network, a set of FC layers
# followed by a softmax classifier
headModel = FCHeadNet.build(baseModel, 17, 256)
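# (the head ends in a softmax over 17 outputs, one per flowers17 category)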

# place the head FC model on top of the base model -- this will
# become the actual model we will train
model = Model(inputs=baseModel.input, outputs=headModel)

# loop over all layers in the base model and freeze them so they
# will *not* be updated during the training process
for layer in baseModel.layers:
    layer.trainable = False

# compile our model (this needs to be done after setting our
# layers to non-trainable)
print("[INFO] compiling model...")
opt = RMSprop(lr=0.001)
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the head of the network for a few epochs (all other
# layers are frozen) -- this will allow the new FC layers to
# start to become initialized with actual "learned" values
# versus pure random
print("[INFO] training head...")
model.fit_generator(trainGen.generator(),
    validation_data=testGen.generator(), epochs=2,
    steps_per_epoch=trainGen.numImages // 16, verbose=1,
    validation_steps=testGen.numImages // 16)

print("[INFO] evaluating after initialization...")
predictions = model.predict(testGen.generator(), batch_size=16)
print(classification_report(testGen.generator().argmax(axis=1),
    predictions.argmax(axis=1), target_names=classNames))
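
Update: I tried to reproduce the message outside the training script. If the generator one-hot encodes its integer labels into a matrix that is only `classes` columns wide, then any label greater than 1 raises exactly this error. This is my own minimal NumPy experiment, not the library code:

import numpy as np

# flowers17 labels run from 0 to 16; with classes=2 the one-hot
# matrix has only 2 columns, so label 3 is out of bounds on axis 1
labels = np.array([0, 1, 3])
oneHot = np.zeros((labels.size, 2))
oneHot[np.arange(labels.size), labels] = 1
# IndexError: index 3 is out of bounds for axis 1 with size 2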

0 Answers