Using TFRecords with Keras

Date: 2019-01-30 12:03:55

Tags: python tensorflow keras tensorflow-datasets

I have converted an image database into two TFRecords, one for training and the other for validation. I want to use these two files to train a simple model with Keras, but I am running into an error related to the shape of the data that I cannot make sense of.

Here is the code (the all-caps variables are defined elsewhere):

def _parse_function(proto):
    f = {
        "x": tf.FixedLenSequenceFeature([IMG_SIZE[0] * IMG_SIZE[1]], tf.float32, default_value=0., allow_missing=True),
        "label": tf.FixedLenSequenceFeature([1], tf.int64, default_value=0, allow_missing=True)
    }
    parsed_features = tf.parse_single_example(proto, f)

    x = tf.reshape(parsed_features['x'] / 255, (IMG_SIZE[0], IMG_SIZE[1], 1))
    y = tf.cast(parsed_features['label'], tf.float32)
    return x, y

def load_dataset(input_path, batch_size, shuffle_buffer):
    dataset = tf.data.TFRecordDataset(input_path)
    dataset = dataset.shuffle(shuffle_buffer).repeat()  # shuffle and repeat
    dataset = dataset.map(_parse_function, num_parallel_calls=16)
    dataset = dataset.batch(batch_size).prefetch(1)  # batch and prefetch

    return dataset.make_one_shot_iterator()

train_iterator = load_dataset(TRAIN_TFRECORDS, BATCH_SIZE, SHUFFLE_BUFFER)
val_iterator = load_dataset(VALIDATION_TFRECORDS, BATCH_SIZE, SHUFFLE_BUFFER)

model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(IMG_SIZE[0], IMG_SIZE[1], 1)))
model.add(tf.keras.layers.Dense(1, 'sigmoid'))

model.compile(
    optimizer=tf.train.AdamOptimizer(),
    loss='binary_crossentropy',
    metrics=['accuracy']
)

model.fit(
    train_iterator,
    epochs=N_EPOCHS,
    steps_per_epoch=N_TRAIN // BATCH_SIZE,
    validation_data=val_iterator,
    validation_steps=N_VALIDATION // BATCH_SIZE
)

This is the error I get:

tensorflow.python.framework.errors_impl.InvalidArgumentError: data[0].shape = [3] does not start with indices[0].shape = [2]
     [[Node: training/TFOptimizer/gradients/loss/dense_loss/Mean_grad/DynamicStitch = DynamicStitch[N=2, T=DT_INT32, _class=["loc:@training/TFOptimizer/gradients/loss/dense_loss/Mean_grad/floordiv"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/TFOptimizer/gradients/loss/dense_loss/Mean_grad/range, training/TFOptimizer/gradients/loss/dense_loss/Mean_3_grad/Maximum, training/TFOptimizer/gradients/loss/dense_loss/Mean_grad/Shape/_35, training/TFOptimizer/gradients/loss/dense_loss/Mean_3_grad/Maximum/_41)]]

(I know the model defined here is not a good model for image analysis; I just used the simplest architecture that reproduces the error.)

2 answers:

Answer 0 (score: 3):

Change:

"label": tf.FixedLenSequenceFeature([1]...

to:

"label": tf.FixedLenSequenceFeature([]...

Unfortunately this is not explained in the documentation on the site, but some explanation can be found in the docstring of FixedLenSequenceFeature on GitHub. Basically, if your data consists of a single dimension (plus a batch dimension), you don't need to specify it.
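For reference, this is the asker's parse function with that one change applied (a minimal sketch; everything else, including the IMG_SIZE variable, is unchanged from the question):

def _parse_function(proto):
    f = {
        "x": tf.FixedLenSequenceFeature([IMG_SIZE[0] * IMG_SIZE[1]], tf.float32, default_value=0., allow_missing=True),
        # [] instead of [1]: the label is a single value per example,
        # so that dimension does not need to be spelled out
        "label": tf.FixedLenSequenceFeature([], tf.int64, default_value=0, allow_missing=True)
    }
    parsed_features = tf.parse_single_example(proto, f)

    x = tf.reshape(parsed_features['x'] / 255, (IMG_SIZE[0], IMG_SIZE[1], 1))
    y = tf.cast(parsed_features['label'], tf.float32)
    return x, y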

Answer 1 (score: 2):

You forgot this line from the example:

parsed_features = tf.parse_single_example(proto, f)

Add it to _parse_function.

Also, you can just return the dataset object. Keras supports iterators as well as instances of tf.data.Dataset. And shuffling and repeating first, then parsing the tf.Examples, looks a bit odd. Here is example code that works for me:

def dataset(filenames, batch_size, img_height, img_width, is_training=False):
    decoder = TfExampleDecoder()

    def preprocess(image, boxes, classes):
        image = preprocess_image(image, resize_height=img_height, resize_width=img_width)
        return image, (boxes, classes)  # the image plus its groundtruth

    ds = tf.data.TFRecordDataset(filenames)
    ds = ds.map(decoder.decode, num_parallel_calls=8)
    if is_training:
        ds = ds.shuffle(1000 + 3 * batch_size)
    ds = ds.apply(tf.contrib.data.map_and_batch(map_func=preprocess, batch_size=batch_size, num_parallel_calls=8))
    ds = ds.repeat()
    ds = ds.prefetch(buffer_size=batch_size)
    return ds


train_dataset = dataset(args.train_data, args.batch_size,
                        args.img_height, args.img_width,
                        is_training=True)


model.fit(train_dataset,
          steps_per_epoch=args.steps_per_epoch,
          epochs=args.max_epochs,
          callbacks=callbacks,
          initial_epoch=0)
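Applied to the question's pipeline, that advice would look something like the following sketch (same all-caps variables as in the question; parsing now happens before shuffle/repeat, and the tf.data.Dataset is handed to model.fit directly instead of a one-shot iterator):

def load_dataset(input_path, batch_size, shuffle_buffer):
    dataset = tf.data.TFRecordDataset(input_path)
    dataset = dataset.map(_parse_function, num_parallel_calls=16)  # parse first
    dataset = dataset.shuffle(shuffle_buffer).repeat()             # then shuffle and repeat
    dataset = dataset.batch(batch_size).prefetch(1)
    return dataset  # Keras accepts the Dataset itself

train_dataset = load_dataset(TRAIN_TFRECORDS, BATCH_SIZE, SHUFFLE_BUFFER)
val_dataset = load_dataset(VALIDATION_TFRECORDS, BATCH_SIZE, SHUFFLE_BUFFER)

model.fit(
    train_dataset,
    epochs=N_EPOCHS,
    steps_per_epoch=N_TRAIN // BATCH_SIZE,
    validation_data=val_dataset,
    validation_steps=N_VALIDATION // BATCH_SIZE
)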

The problem seems to be in your data or preprocessing pipeline, not in Keras. Try inspecting what you are getting out of the dataset with debugging code like this:

ds = dataset(args.data, args.batch_size, args.img_height, args.img_width, is_training=True)

image_t, classes_t = ds.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    while True:
        image, classes = sess.run([image_t, classes_t])
        print(image.shape, classes.shape)  # check that batch shapes match expectations
        # Do something else with the data: display, log, etc.
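If the shapes printed here do not match what the model's input layer expects (for the question's setup, images of shape (batch_size, IMG_SIZE[0], IMG_SIZE[1], 1) plus a matching batch of labels), the bug is in the parsing or preprocessing step rather than in the model.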