Using validation_split changes my shapes, why?

时间:2019-07-17 00:30:25

标签: tensorflow keras tf.keras

I have a working model, and I want to start validating on the fly with validation_split (≈ 0.1). When I pass any validation_split other than 0.0, I get an error.

I have been fiddling with the batch_size value I pass to fit(), as well as the one I pass to tf.keras.layers.Conv2D(), effectively keeping them in proportion. No joy.

This is how I build the model:


    import tensorflow as tf

    def make_convnet_model(flags, shape):
        model = tf.keras.models.Sequential(
            [
                tf.keras.layers.Conv2D(32,(8,8), strides=2, activation='relu',input_shape=shape,batch_size=flags.batch_size,name='conv2d_1'),
                tf.keras.layers.Conv2D(24, (4,4), strides=1, activation='relu',name='conv2d_2'),
                tf.keras.layers.MaxPool2D(),
                tf.keras.layers.Conv2D(16, (3, 3), strides=2, activation='sigmoid', input_shape=shape,batch_size=flags.batch_size, name='conv2d_3'),
                tf.keras.layers.Conv2D(8, (3, 3), strides=1, activation='sigmoid', name='conv2d_4'),
                tf.keras.layers.MaxPool2D(),
                tf.keras.layers.Flatten(),
                tf.keras.layers.Dense(64, activation='sigmoid', name='d3'),
                tf.keras.layers.Dense(5, activation='softmax', name='softmax_d4')
            ])

        return model

This is how I call fit():

    history = model.fit(x=X, y=Y, batch_size=flags.batch_size, epochs=flags.epochs, callbacks=[tensorboard,logger], verbose=flags.verbosity, validation_split=flags.validation_split)
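
As I understand it (an assumption on my part, not something I have verified in the Keras source), validation_split slices the trailing fraction off x and y before training starts, so with my 20 samples and validation_split=0.1 that works out to 18 training rows and 2 validation rows, which matches the log below:

    # Rough sketch of the assumed split arithmetic, not Keras's actual code
    num_samples = 20          # rows in X / Y for this run
    validation_split = 0.1    # the flags.validation_split value

    split_at = int(num_samples * (1.0 - validation_split))
    train_rows, val_rows = split_at, num_samples - split_at
    print(train_rows, val_rows)   # -> 18 2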

Here is my reward. I have taken out some of the spooge:

    Namespace(***batch_size=20***, columns=320, csv_path='../csv/', data_path='f:/downloads/aptos2019-blindness-detection/', epochs=2,
    grey=False, learning_rate=0.001, loss='mean_squared_error', metric=['accuracy'], model='conv2d', rows=320,
    test_path_fragment='test_images/', train_path_fragment='train_images/', validation_split=0.1, verbosity=2)

    Tensorflow version: 1.14.0

    Processed data path: f:/downloads/aptos2019-blindness-detection/train_images/color_320x320/
    ***Train on 18 samples, validate on 2 samples***
    Epoch 1/2
    Traceback (most recent call last):
      File "F:/projects/retinas/retina.py", line 212, in <module>
        main(sys.argv)
      File "F:/projects/retinas/retina.py", line 122, in main
        history = model.fit(x=X, y=Y, batch_size=flags.batch_size, epochs=flags.epochs, callbacks=[tensorboard,logger], verbose=flags.verbosity, validation_split=flags.validation_split)
      File "C:\Users\WascallyWabbit\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\training.py", line 780, in fit
        steps_name='steps_per_epoch')
      File "C:\Users\WascallyWabbit\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 363, in model_iteration
        batch_outs = f(ins_batch)
      File "C:\Users\WascallyWabbit\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\backend.py", line 3292, in __call__
        run_metadata=self.run_metadata)
      File "C:\Users\WascallyWabbit\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1458, in __call__
        run_metadata_ptr)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [20,5] vs. [18,5]
        [[{{node Adam/gradients/loss/softmax_d4_loss/SquaredDifference_grad/BroadcastGradientArgs}}]]

1 Answer:

Answer 0: (score: 0)

The problem stemmed from my unnecessarily specifying batch_size in the calls to Conv2D(). Now I accept the default value for that parameter, and it works.

Don't know why. Don't care :-|
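
(For anyone who does care: the likely explanation, judging from the log above, is that baking batch_size=20 into the Conv2D layers makes the graph expect exactly 20 rows per batch, while validation_split=0.1 leaves only 18 training samples, so the single training batch has 18 rows and the mean_squared_error loss ends up comparing a [20,5] tensor with an [18,5] one.)

Below is a sketch of what the fixed make_convnet_model presumably looks like: the same layers as in the question, with only the batch_size= keyword removed.

    import tensorflow as tf

    def make_convnet_model(flags, shape):
        # flags is kept in the signature for compatibility; the batch size now
        # comes only from fit(). With no batch_size= on conv2d_1 / conv2d_3 the
        # batch dimension stays dynamic, so an 18-sample batch is accepted.
        model = tf.keras.models.Sequential([
            tf.keras.layers.Conv2D(32, (8, 8), strides=2, activation='relu',
                                   input_shape=shape, name='conv2d_1'),
            tf.keras.layers.Conv2D(24, (4, 4), strides=1, activation='relu', name='conv2d_2'),
            tf.keras.layers.MaxPool2D(),
            # input_shape on a non-first layer is kept as in the question;
            # presumably it is simply ignored here.
            tf.keras.layers.Conv2D(16, (3, 3), strides=2, activation='sigmoid',
                                   input_shape=shape, name='conv2d_3'),
            tf.keras.layers.Conv2D(8, (3, 3), strides=1, activation='sigmoid', name='conv2d_4'),
            tf.keras.layers.MaxPool2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation='sigmoid', name='d3'),
            tf.keras.layers.Dense(5, activation='softmax', name='softmax_d4')
        ])
        return model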