Keras - Exception: Output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

Date: 2017-04-11 18:25:44

Tags: python tensorflow neural-network deep-learning keras

I'm getting this issue (using Keras with the TensorFlow backend), and any help would be greatly appreciated.

My directories are set up as folder/folder/images, for both the training and test data.

I made a loop to test different depths/nb_layers for the ResNet, along with some hyperparameters such as learning rate, batch size, etc. The test runs at 4, 6, 8, 10 layers and so on up to 20, and then gives me:

"output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None"

I don't understand why it works for a few iterations and then fails.

I read here that I should update Keras to 2.0, but I've been told not to change my boss's Keras version. My version is 1.2.0.

I read here that I should convert all the labels to numpy arrays, but the Keras documentation says this already happens when using the 'categorical' class_mode with flow_from_directory.
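As a quick check on that (just a sketch, using the same train_generator that is built in the code further down), pulling one batch by hand shows the labels already come out as one-hot numpy arrays:

# sanity check: with class_mode='categorical', flow_from_directory
# already yields (x, y) tuples where y is a one-hot numpy array
x_batch, y_batch = next(train_generator)
print(type(x_batch), x_batch.shape)   # numpy.ndarray, e.g. (batch_size, 224, 224, 3) or (batch_size, 3, 224, 224) depending on dim_ordering
print(type(y_batch), y_batch.shape)   # numpy.ndarray, (batch_size, 7)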

Then I read here that I should put my train_generator inside a function, create an infinite loop there, and yield the result, but that makes the data load over and over at the start of the program ("Found 350 images belonging to 7 classes" is printed 10 times), and then it leads to the error

"output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: <keras.preprocessing.image.DirectoryIterator object at 0x0000000063494BE0>"

Here is the stack trace for the original error:

Traceback (most recent call last):

  File "", line 1, in <module>
    runfile('K:/Manufacturing Operations/Yield/Tools_Yield/PythonScripts/AI/ISL_DI/Resnet/resISL_Depth.py', wdir='K:/Manufacturing Operations/Yield/Tools_Yield/PythonScripts/AI/ISL_DI/Resnet')

  File "C:\Users\Paul\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
    execfile(filename, namespace)

  File "C:\Users\Paul\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "K:/Manufacturing Operations/Yield/Tools_Yield/PythonScripts/AI/ISL_DI/Resnet/resISL_Depth.py", line 233, in <module>
    callbacks=callbacks_list)

  File "C:\Users\Paul\AppData\Roaming\Python\Python35\site-packages\keras\engine\training.py", line 1481, in fit_generator
    str(generator_output))

ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

Here is the code, other than the variable definitions:

rep=0
for i in range(retrainings + 1):
    #lr_init = [5, 1, .1, .01]
    while rep != len(layers) - 1:
        lr_init = [5, 1]
        for lr_val in lr_init:


            decay_init = .1
            epochs_drop = 20
            patience=60
            # learning rate schedule
            def step_decay(epoch):
                initial_lrate = lr_val
                drop = 0.1
                epochs_drop = 60
                lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
                #print('\nLR: {:.6f}\n'.format(lrate))
                return lrate

            momentum_init=0.9
            sgd = SGD(lr=lr_val, decay=decay_init, momentum=momentum_init, nesterov=False)


            ##reduce learning rate when loss has stopped improving
            #lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1), cooldown=0, patience=5, min_lr=0.5e-6)
            ##stop training when accuracy has stopped improving
            early_stopper = EarlyStopping(monitor='val_acc', min_delta=0.001, patience=50)
            #csv_logger = CSVLogger('resnet18_cifar10.csv')

            repititions = 3
            #epochs=[105]
            epochs=[200]
            drop_out=[0]
            #batchsize=[2, 4, 8, 10]
            batchsize=[2, 5, 10]
            zoom=[0]
            shear=[0] 
            channelshift=[0]
            featurewise=[False]
            samplewise=[False]
            rotation=[0]

            nb_train_samples = 350
            nb_validation_samples = 140

            colormode='rgb'

             # input image dimensions
            img_width, img_height = 224, 224
            nb_classes=7                                    
            img_channels = 3

            for epoch_val in epochs:
                for dropout_val in drop_out:
                    for batchsize_val in batchsize:
                        for zoom_val in zoom:
                            for shear_val in shear:
                                for channelshift_val in channelshift:
                                    for featurewise_val in featurewise:
                                        for samplewise_val in samplewise:
                                            for rotation_val in rotation:
                                                for r in range(repititions):

                #                                    np.random.seed(7)
                #                                    tf.set_random_seed(7)    

                                                    train_data_dir = basepath + pathlist[0] 
                                                    validation_data_dir = basepath + pathlist[1] 

                                                    #############################################
                                                    #############################################

                                                    params={}    
                                                    params['epochs']=epoch_val
                                                    params['drop_out']=dropout_val
                                                    params['batchsize']=batchsize_val
                                                    params['zoom']=zoom_val
                                                    params['shear']=shear_val
                                                    params['channelshift']=channelshift_val
                                                    params['featurewise']=featurewise_val
                                                    params['samplewise']=samplewise_val
                                                    params['rotation']=rotation_val
                                                    params['lr_init']=lr_val
                                                    params['momentum_init']=momentum_init
                                                    params['decay_init']=decay_init
                                                    params['epochs_drop']=epochs_drop
                                                    params['img_size']=list([img_width,img_height])
                                                    params['patience']=patience                                         

                                                    total = 0
                                                    currentlayer = [i * 2 for i in layers[rep]]
                                                    total = sum(currentlayer) + 2
                                                    savefilename='resnet_' + str(total) + '_BKM_lr_' + str(lr_val) + '_batchSize_' + str(batchsize_val) + '_repition' + str((r+1)) + '_Study' 
                                                    total = 0
                                                    with tf.device('/gpu:0'):

                                                        model = resnet_iter.ResnetBuilder.build_resnet_34((img_channels, img_width, img_height), nb_classes, layers[rep])
                                                        model.compile(loss='categorical_crossentropy',
                                                                      optimizer=sgd,
                                                                      metrics=['accuracy'])

                                                        train_datagen = ImageDataGenerator(
                                                            featurewise_center=False,  # set input mean to 0 over the dataset
                                                            samplewise_center=False,  # set each sample mean to 0
                                                            featurewise_std_normalization=featurewise_val,  # divide inputs by std of the dataset
                                                            samplewise_std_normalization=samplewise_val,  # divide each input by its std
                                                            zca_whitening=False,  # apply ZCA whitening
                                                            channel_shift_range=channelshift_val, #VGG set to 0
                                                            fill_mode="reflect", #VGG set to reflect
                                                            rotation_range=rotation_val,  # randomly rotate images in the range (degrees, 0 to 180)
                                                            rescale=1./255, #VGG set to 1./255
                                                            width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width) - VGG set to 0
                                                            height_shift_range=0.1,  # randomly shift images vertically (fraction of total height) - VGG set to 0
                                                            shear_range=shear_val, #VGG set to 0
                                                            zoom_range=zoom_val, #VGG set to 0.1
                                                            horizontal_flip=True,  # randomly flip images
                                                            vertical_flip=True)  # randomly flip images VGG set to True

                                                        test_datagen = ImageDataGenerator(rescale=1./255)

                                                        train_generator = train_datagen.flow_from_directory(
                                                            train_data_dir,
                                                            target_size=(img_width, img_height),
                                                            batch_size=batchsize_val,
                                                            shuffle=True,
                                                            color_mode=colormode,
                                                            class_mode='categorical')

                                                        validation_generator = test_datagen.flow_from_directory(
                                                            validation_data_dir,
                                                            target_size=(img_width, img_height),
                                                            batch_size=batchsize_val,
                                                            shuffle=True,
                                                            color_mode=colormode,
                                                            class_mode='categorical')

                                                        lrate = LearningRateScheduler(step_decay)
                                                        callbacks_list = [lrate, early_stopper]

                                                        try:
                                                            A=model.fit_generator(
                                                                train_generator,
                                                                samples_per_epoch=nb_train_samples,
                                                                nb_epoch=epoch_val,
                                                                validation_data=validation_generator,
                                                                nb_val_samples=nb_validation_samples,
                                                                callbacks=callbacks_list)
                                                        except:
                                                            print("train_generator: " + train_generator)
                                                            print("train_data_dir: " + train_data_dir)
                                                            files=os.listdir(train_data_dir)
                                                            print(len(files))
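For what it's worth, the check I plan to add right before the fit_generator call (a rough sketch, reusing the train_generator defined above) is to pull a few batches by hand and confirm none of them comes back as None:

# pull a few batches manually to confirm the generator always returns
# a proper (x, y) tuple and never None before it goes into fit_generator
for _ in range(5):
    batch = next(train_generator)
    assert batch is not None, "generator returned None"
    x_check, y_check = batch
    print(x_check.shape, y_check.shape)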

0 Answers:

No answers