I want to implement an autoencoder for the Faces dataset using Keras.
I am using train_on_batch
because the dataset is too large, but I ran into this problem:
for i in range(10):
    batch_index = 0
    while batch_index <= train_data.batch_index:
        data = train_data.next()
        result = train_result.next()
        model.train_on_batch(data[0], result[0])
        batch_index = batch_index + 1
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-54-d7d64e954a89> in <module>
4 data = train_data.next()
5 result = train_result.next()
----> 6 model.train_on_batch(data[0],result[0])
7 batch_index = batch_index + 1
~/.local/lib/python3.5/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
1209 x, y,
1210 sample_weight=sample_weight,
-> 1211 class_weight=class_weight)
1212 if self._uses_dynamic_learning_phase():
1213 ins = x + y + sample_weights + [1.]
~/.local/lib/python3.5/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
787 feed_output_shapes,
788 check_batch_axis=False, # Don't enforce the batch size.
--> 789 exception_prefix='target')
790
791 # Generate sample-wise weight values given the `sample_weight` and
~/.local/lib/python3.5/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
136 ': expected ' + names[i] + ' to have shape ' +
137 str(shape) + ' but got array with shape ' +
--> 138 str(data_shape))
139 return data
140
ValueError: Error when checking target: expected conv2d_transpose_21 to have shape (250, 250, 1) but got array with shape (250, 250, 3)
My model is the following:
Input_Layer = keras.Input((250,250,3))
x = keras.layers.Conv2D(20,5,activation='relu')(Input_Layer)
x = keras.layers.MaxPooling2D(2)(x)
x = keras.layers.Conv2D(20,2,activation = 'relu')(x)
x = keras.layers.MaxPooling2D(2)(x)
encoded = x
x = keras.layers.UpSampling2D(2)(x)
x = keras.layers.Conv2DTranspose(20,2,activation='relu')(x)
x = keras.layers.UpSampling2D(2)(x)
x = keras.layers.Conv2DTranspose(20,5,activation= 'relu')(x)
model = keras.Model(input = Input_Layer ,output = x)
I am using the Keras ImageDataGenerator
to load the images, which reports:
train_data = trainGenerator.flow_from_directory('lfw',batch_size=67,target_size=(250, 250))
Found 13199 images belonging to 1 classes.
Here is the full code:
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import matplotlib.pyplot as plt
import numpy as np
import keras

def cutHalf(img):
    for j in range(125):
        for i in range(250):
            img[i][j][0]=1
            img[i][j][1]=1
            img[i][j][2]=1
    return img

img_width = 250
img_height = 250

train_datagen = ImageDataGenerator(rescale=1./255)
train_datagen2 = ImageDataGenerator(rescale=1./255,preprocessing_function=cutHalf)

train_generator = train_datagen.flow_from_directory(
    'lfw',target_size=(img_width, img_height),
    class_mode=None)

train_generator2 = train_datagen2.flow_from_directory(
    'lfw',target_size=(img_width, img_height),
    class_mode=None)

def fixed_generator(generator,generator2):
    batch_index = 0
    while batch_index <= generator.batch_index:
        yield (generator.next(), generator2.next())

Input_Layer = keras.Input(shape=(img_width, img_height,3))
x = keras.layers.Conv2D(20,5,activation='relu')(Input_Layer)
x = keras.layers.MaxPooling2D(2)(x)
x = keras.layers.Conv2D(20,2,activation = 'relu')(x)
x = keras.layers.MaxPooling2D(2)(x)
encoded = x
x = keras.layers.UpSampling2D(2)(x)
x = keras.layers.Conv2DTranspose(20,2,activation='relu')(x)
x = keras.layers.UpSampling2D(2)(x)
x = keras.layers.Conv2DTranspose(20,5,activation= 'relu')(x)
model = keras.Model(input = Input_Layer ,output = x)

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit_generator(
    fixed_generator(train_generator,train_generator2),
    nb_epoch=20,
    steps_per_epoch=50
)
Answer 0 (score: 0)
I assume that train_data.next()
and train_result.next()
both return an array of shape (1, 250, 250, 3).
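A quick way to confirm what the generators actually yield (a minimal sanity-check sketch, reusing the train_generator and train_generator2 objects from the full code above; with class_mode=None each call returns only the image batch):
# Hypothetical check, not part of the original code
batch_in = next(train_generator)       # input images
batch_target = next(train_generator2)  # half-blanked target images
print(batch_in.shape, batch_target.shape)  # (batch_size, 250, 250, 3) each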
When I try to run your code, I get the following error:
Traceback (most recent call last):

  File "", line 1, in <module>
    runfile('/Users/lorenzo/Documents/stackoverflow/auto_encoder.py', wdir='/Users/lorenzo/Documents/stackoverflow')

  File "/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)

  File "/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "/Users/lorenzo/Documents/stackoverflow/auto_encoder.py", line 43, in <module>
    model.train_on_batch(onedata, oneresult)

  File "/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1211, in train_on_batch
    class_weight=class_weight)

  File "/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
    exception_prefix='target')

  File "/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 138, in standardize_input_data
    str(data_shape))

ValueError: Error when checking target: expected conv2d_transpose_6 to have shape (250, 250, 20) but got array with shape (250, 250, 3)
This says that the expected target shape for the last Conv2DTranspose layer is (250, 250, 20)
, but you are feeding the model arrays of shape (250, 250, 3)
.
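You can also read the expected target shape straight off the model (a minimal sketch, using the model defined in the question):
model.summary()            # prints the output shape of every layer
print(model.output_shape)  # (None, 250, 250, 20) while the last Conv2DTranspose has 20 filters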
Solution:
you should change x = keras.layers.Conv2DTranspose(20,5,activation= 'relu')(x)
to x = keras.layers.Conv2DTranspose(3,5,activation= 'relu')(x)
so that the model's output matches your target shape.
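Spelled out, the decoder tail with that change would read:
x = keras.layers.UpSampling2D(2)(x)
x = keras.layers.Conv2DTranspose(20,2,activation='relu')(x)
x = keras.layers.UpSampling2D(2)(x)
# 3 filters so the reconstruction has the same channel count as the RGB target
x = keras.layers.Conv2DTranspose(3,5,activation='relu')(x)  # output: (None, 250, 250, 3)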
Edit: as @DanielMöller said, the loss should be 'categorical_crossentropy', and the filter count of the last layer should also be 3.
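For completeness, a sketch of the matching compile call (nothing else in the training code needs to change for this step):
model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # instead of 'sparse_categorical_crossentropy'
              metrics=['accuracy'])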
Here is a sample of the output:
Found 530 images belonging to 1 classes.
Found 530 images belonging to 1 classes.
D:/D_Document/Github/keras_autoencoder.py:65: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=Tensor("in..., outputs=Tensor("co...)`
model = keras.Model(input = Input_Layer ,output = x)
D:/D_Document/Github/keras_autoencoder.py:78: UserWarning: The semantics of the Keras 2 argument `steps_per_epoch` is not the same as the Keras 1 argument `samples_per_epoch`. `steps_per_epoch` is the number of batches to draw from the generator at each epoch. Basically steps_per_epoch = samples_per_epoch/batch_size. Similarly `nb_val_samples`->`validation_steps` and `val_samples`->`steps` arguments have changed. Update your method calls accordingly.
steps_per_epoch=50
D:/D_Document/Github/keras_autoencoder.py:78: UserWarning: Update your `fit_generator` call to the Keras 2 API: `fit_generator(<generator..., epochs=20, steps_per_epoch=50)`
steps_per_epoch=50
Epoch 1/20
50/50 [==============================] - 102s 2s/step - loss: 0.6981 - acc: 0.6931
Epoch 2/20
50/50 [==============================] - 95s 2s/step - loss: 0.6406 - acc: 0.7584
Epoch 3/20
50/50 [==============================] - 92s 2s/step - loss: 0.6396 - acc: 0.7588
Epoch 4/20
50/50 [==============================] - 93s 2s/step - loss: 0.6381 - acc: 0.7543
Epoch 5/20
50/50 [==============================] - 93s 2s/step - loss: 0.6377 - acc: 0.7618
Epoch 6/20
50/50 [==============================] - 89s 2s/step - loss: 0.6357 - acc: 0.7569
Epoch 7/20
50/50 [==============================] - 91s 2s/step - loss: 0.6394 - acc: 0.7651
Epoch 8/20
50/50 [==============================] - 93s 2s/step - loss: 0.6380 - acc: 0.7660
Epoch 9/20
50/50 [==============================] - 93s 2s/step - loss: 0.6380 - acc: 0.7643
Epoch 10/20
50/50 [==============================] - 89s 2s/step - loss: 0.6399 - acc: 0.7669