tf.keras.Model.fit(): loss does not decrease and accuracy is 0.0

Asked: 2019-07-01 08:20:29

Tags: python-3.x tensorflow2.0 tf.keras

I am getting started with the TensorFlow 2.0 API, using VGG16 from tf.keras.applications and an ImageNet dataset read from .tfrecord files. To sanity-check my pipeline, I want to train on the validation dataset, where the loss should decrease and the accuracy should start near the value reported in the literature. Instead, my loss converges straight to a fixed value (1.16) and the accuracy stays at 0.000x.
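For reference, a minimal sketch of the pretrained baseline I compare against (assuming the stock tf.keras.applications weights):

import tensorflow as tf

# Stock VGG16 with the published ImageNet weights; its reported top-1
# accuracy (~70%) is the reference value mentioned above.
baseline = tf.keras.applications.vgg16.VGG16(include_top=True, weights='imagenet')
baseline.summary()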

I also tried training without loading the weights, and the loss started at the same value with the same static behavior.

When I swap model.fit() for model.evaluate() on the same dataset, I get the correct value from the literature (about 70% accuracy).

I have tried both Adam and SGD. Due to hardware limitations, the current batch size is 1. My code:

import tensorflow as tf

def create_dataset(files):

    def _parse_function(proto):

        # Description of the stored examples
        feature_description = {'image': tf.io.FixedLenFeature([], tf.string),
                               'label': tf.io.FixedLenFeature([], tf.int64)}

        # Load one example
        parsed_features = tf.io.parse_single_example(proto, feature_description)

        # Decode the JPEG and apply network-specific pre-processing
        parsed_features['image'] = tf.cast(tf.io.decode_jpeg(parsed_features['image']), tf.float32)

        if preprocessor is not None:
            parsed_features['image'] = preprocessor.preprocess_input(parsed_features['image'])

        if mode == 'training':
            parsed_features['label'] = tf.one_hot(parsed_features['label'], num_classes)

        return parsed_features['image'], parsed_features['label']

    dataset = tf.data.TFRecordDataset(files)

    # Map the parser over every file in the list; num_parallel_calls sets the number of parallel loaders
    dataset = dataset.map(_parse_function, num_parallel_calls=8)

    # Set the batch size
    dataset = dataset.batch(batch_size)

    # Repeat indefinitely for training
    if mode == 'training':
        dataset = dataset.repeat()

    return dataset
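The question does not show how create_dataset is configured, so the sketch below fills in the free variables (files, batch_size, mode, num_classes, preprocessor) with assumed values and does a one-batch shape check:

import glob
import tensorflow as tf
from tensorflow.keras.applications import vgg16 as preprocessor  # supplies preprocess_input

files = glob.glob('/data/imagenet/val-*.tfrecord')  # hypothetical record paths
batch_size = 1        # as stated in the question
num_classes = 1000    # ImageNet classes
mode = 'training'     # labels are one-hot encoded only in this mode

dataset = create_dataset(files)

# One-batch sanity check; batching only works here if the stored JPEGs
# already have a fixed size, since the parser above does not resize.
for images, labels in dataset.take(1):
    print(images.shape, images.dtype)   # e.g. (1, 224, 224, 3) float32
    print(labels.shape, labels.dtype)   # e.g. (1, 1000) float32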

def main():

    model = tf.keras.applications.vgg16.VGG16(include_top=True,
                                              weights=None,
                                              classes=num_classes)

    model.load_weights(weights_filepath)

    dataset = create_dataset(files)

    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit(dataset)
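The fit-vs-evaluate comparison described above can be reproduced along these lines; the step counts are placeholders, needed only because the training-mode dataset repeats indefinitely:

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

dataset = create_dataset(files)  # same data for both calls

# evaluate() reproduces the literature value (~70% top-1) ...
model.evaluate(dataset, steps=1000)

# ... while fit() on the identical data reports loss ~1.16 and accuracy ~0.000x
model.fit(dataset, steps_per_epoch=1000, epochs=1)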

0 Answers