I have a problem with my image classification model in Keras. It is a binary classification task, and I tried to make the number of images in each class equal. Here is the code for the Keras model:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from keras import backend as K

# dimensions of our images
img_width, img_height = 150, 150

train_data_dir = 'path'
validation_data_dir = 'path'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 10
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.load_weights('second_try.h5')
The model seemed to save fine, so I ran this test code:
from keras.models import load_model
from keras.preprocessing import image
import numpy as np

# dimensions of our images
img_width, img_height = 150, 150

# load the model we saved
model = load_model('modelpath')
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# predicting multiple images at once
img = image.load_img('imgpath', target_size=(img_width, img_height))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)

y = image.img_to_array(img)
y = np.expand_dims(y, axis=0)

images = np.vstack([x, y])
classes = model.predict_classes(images, batch_size=10)
print(classes)
Both images, which are from different classes, are printed as 1. Why is this happening?
Answer 0 (score: 0)
You should save the model or the model weights after training, rather than calling load_weights after fit(). There are two ways to do this:

1. After model.fit(), call model.save_weights('second_try.hdf5'), which saves only the weights. To load them later, first build and compile your model, and then call model.load_weights('second_try.hdf5') on it.
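For example, a minimal sketch of the weights-only route (build_model() is just a hypothetical stand-in for the code that constructs the same Sequential architecture shown above):

# in the training script
model = build_model()                      # hypothetical helper: builds the Sequential model above
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
model.save_weights('second_try.hdf5')      # save the weights after fitting

# in the test script: rebuild the identical architecture, compile, then load the weights
model = build_model()
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.load_weights('second_try.hdf5')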
2. After model.fit(), call model.save('model.hdf5'), which saves both the weights and the model structure to a single HDF5 file. You can then use that HDF5 file with load_model() to reconstruct the whole model, including the weights:

model = load_model('model.hdf5')
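A minimal sketch of the whole-model route (the file name 'model.hdf5' is just a placeholder, and images is the array from your test snippet):

from keras.models import load_model

# in the training script, right after model.fit_generator(...)
model.save('model.hdf5')                   # stores the architecture, the weights and the optimizer state

# in the test script: no need to rebuild or recompile the model
model = load_model('model.hdf5')
probs = model.predict(images, batch_size=10)     # sigmoid outputs in [0, 1]
classes = (probs > 0.5).astype('int32')          # threshold for the binary case
print(classes)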
Also, check that your test data is prepared the same way as your training data. Your training generator rescales pixel values with rescale=1. / 255, but your test code feeds raw pixel values (0-255) straight from img_to_array into the model, so apply the same 1/255 rescaling (or use an equivalent ImageDataGenerator) before calling predict.
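For example, a sketch of the test code with the same 1/255 rescaling applied before prediction (the paths are placeholders, as in your snippet, and the model is assumed to have been saved with model.save()):

from keras.models import load_model
from keras.preprocessing import image
import numpy as np

img_width, img_height = 150, 150

model = load_model('model.hdf5')

img = image.load_img('imgpath', target_size=(img_width, img_height))
x = image.img_to_array(img) / 255.0        # match the training generator's rescale=1. / 255
x = np.expand_dims(x, axis=0)

prob = model.predict(x)[0][0]              # single sigmoid output in [0, 1]
print(1 if prob > 0.5 else 0)              # predicted class for this image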