Keras error: got an unexpected keyword argument 'epochs'

Date: 2018-03-15 13:48:25

Tags: python tensorflow keras

I'm trying to train a network in Keras to classify images, and after debugging my last problem I'm now getting an unexpected keyword argument 'epochs' error.

At this point I have removed epochs, but I still get the same error:

muiruri_samuel@training-2:~/google-landmark-recognition-challenge$ python train.py
Using TensorFlow backend.
Found 981214 images belonging to 14951 classes.
Found 237925 images belonging to 14951 classes.
Epoch 1/1
2018-03-15 13:35:19.822304: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
  File "train.py", line 74, in <module>
    validation_data=validation_generator)
  File "/home/muiruri_samuel/.local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/muiruri_samuel/.local/lib/python2.7/site-packages/keras/models.py", line 1276, in fit_generator
    initial_epoch=initial_epoch)
  File "/home/muiruri_samuel/.local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/muiruri_samuel/.local/lib/python2.7/site-packages/keras/engine/training.py", line 2224, in fit_generator
    class_weight=class_weight)
  File "/home/muiruri_samuel/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1883, in train_on_batch
    outputs = self.train_function(ins)
  File "/home/muiruri_samuel/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2478, in __call__
    **self.session_kwargs)
TypeError: run() got an unexpected keyword argument 'epochs'

I was also using epochs and batches, but for now I just need it to work first. Here is the model:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint

# dimensions of our images.
img_width, img_height = 150, 150

train_data_dir = 'training_images'
validation_data_dir = 'validation_images'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(14951, activation="softmax"))

monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
                        verbose=0, mode='auto')
checkpointer = ModelCheckpoint(filepath="best_weights.hdf5", verbose=0,
                               save_best_only=True)  # save best model

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              callbacks=[monitor, checkpointer],
              epochs=1000,
              metrics=['accuracy'])

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

model.fit_generator(
    train_generator,
    validation_data=validation_generator)

model.load_weights('best_weights.hdf5')  # load weights from best model
model.save('last_model.h5')

The logic is that training_images is a folder with subfolders, each subfolder being one class and containing that class's images; the validation images are then a random sample of 20% of the training images.
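For reference, flow_from_directory expects exactly that layout: one subdirectory per class under each data directory. A hypothetical example, with made-up class and file names:

training_images/
    class_0001/
        img_0001.jpg
        img_0002.jpg
    class_0002/
        img_0003.jpg
validation_images/
    class_0001/
        img_0004.jpg
    class_0002/
        img_0005.jpg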

1 Answer:

Answer 0 (score: 0)

model.compile does not take an epochs argument. Only fit and fit_generator do.
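A minimal sketch of one way to rearrange the calls from the question, assuming Keras 2 and the variables already defined there (model, train_generator, validation_generator, monitor, checkpointer, epochs, batch_size, nb_train_samples, nb_validation_samples): the training-loop options move from compile to fit_generator.

# compile only configures the loss, optimizer and metrics
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# epochs, callbacks and the step counts belong to the training call
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    callbacks=[monitor, checkpointer])

callbacks is in the same situation as epochs: compile does not accept it, and unrecognised keyword arguments passed to compile end up being forwarded to the backend session call, which is why the traceback above ends in TypeError: run() got an unexpected keyword argument 'epochs'.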