Low accuracy with a VGG16 image classifier

Asked: 2019-12-04 10:56:28

Tags: python

How can I build an image classifier with the VGG16 model? I only have about 500 images, grayscale MRI scans that I am trying to classify into two groups. My problem is that the best validation accuracy I have been able to reach is 65%, at 50 epochs.

Here is my code:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from PIL import ImageFile, Image
print(Image.__file__)
import numpy
import matplotlib.pyplot as plt

# dimensions of our images.
img_width, img_height = 256, 256

train_data_dir = r'C:\Users\Acer\imagerec\Brain\TRAIN'
validation_data_dir = r'C:\Users\Acer\imagerec\Brain\VAL'
nb_train_samples = 140
nb_validation_samples = 40
epochs = 20
batch_size = 5

# VGG16's ImageNet weights expect 3-channel input; flow_from_directory loads
# the grayscale MRI slices as RGB by default, so use 3 channels here, not 1.
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense

# include_top=False drops VGG16's fully connected head; pooling='avg' turns
# the final feature maps into a single 512-dim vector. input_shape=() would
# raise an error, so pass the shape computed above.
vgg = VGG16(include_top=False, weights='imagenet', input_shape=input_shape, pooling='avg')
x = vgg.output
x = Dense(1, activation='sigmoid')(x)
model = Model(vgg.input, x)
model.summary()
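# Note: nothing is frozen here, so every VGG16 layer is retrained on this
# small dataset; the pretrained ImageNet features can be lost quickly.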

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False)  # keep file order fixed so predictions align with .classes

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
import seaborn as sns

test_steps_per_epoch = int(numpy.ceil(validation_generator.samples / validation_generator.batch_size))

predictions = model.predict_generator(validation_generator, steps=test_steps_per_epoch)
# With a single sigmoid unit the output has shape (n, 1), so argmax over
# axis 1 would always return 0; threshold the probability at 0.5 instead.
predicted_classes = (predictions > 0.5).astype(int).ravel()
true_classes = validation_generator.classes
class_labels = list(validation_generator.class_indices.keys())
report = classification_report(true_classes, predicted_classes, target_names=class_labels)
print(report)

cm=confusion_matrix(true_classes,predicted_classes)

sns.heatmap(cm, annot=True)

print(cm)

plt.show()

Should I add more data to improve the validation accuracy, or should I change the code or fine-tune the model instead? If so, how?
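One common recipe for a dataset this small is two-stage transfer learning: first freeze the pretrained convolutional base and train only the new sigmoid head, then optionally unfreeze just the last convolutional block and continue at a much lower learning rate. A minimal sketch of that idea, reusing the variables above; the block name, dropout rate, and learning rate are illustrative choices, not values from the original post, and nothing here has been verified on this data:

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense, Dropout
from keras.optimizers import SGD

# Stage 1: keep the ImageNet features intact, train only the new head.
base = VGG16(include_top=False, weights='imagenet',
             input_shape=(img_width, img_height, 3), pooling='avg')
for layer in base.layers:
    layer.trainable = False

x = Dropout(0.5)(base.output)  # extra regularization for a small dataset
out = Dense(1, activation='sigmoid')(x)
model = Model(base.input, out)
model.compile(loss='binary_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
# model.fit_generator(...) as above, until the head stops improving.

# Stage 2 (optional fine-tuning): unfreeze only the last conv block and
# recompile with a small learning rate so the pretrained weights move slowly.
for layer in base.layers:
    layer.trainable = layer.name.startswith('block5')
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])
# model.fit_generator(...) again for a few more epochs.

Whether this beats simply collecting more scans is an empirical question; with roughly 500 images, freezing the base and adding data both tend to matter more than tweaking the optimizer.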

0 Answers

No answers yet.