Can't use a TensorFlow Dataset with a Keras model for image segmentation

Asked: 2018-10-20 21:34:10

Tags: python tensorflow keras image-segmentation tensorflow-datasets

I am trying to do image segmentation in Keras using a TensorFlow Dataset, but when I call model.fit() I get the error AttributeError: 'MapDataset' object has no attribute 'ndim'. I am using a Jupyter notebook in Google Colab. The last line of the code below produces the error.

import math
from json import load

import numpy as np
import tensorflow as tf
from tensorflow import keras

export_file = 'export.json'
IMG_WIDTH = 256
IMG_HEIGHT = 256
with open(export_file) as f:
  export_json = load(f)
legend = export_json['legend']
tfrecord_paths = export_json['tfrecord_paths']

# Fairly certain nothing wrong here
# -----------------------------------------------
def _parse_tfrecord(serialized_example):
    example = tf.parse_single_example(
        serialized_example,
        features={
            'image/encoded': tf.FixedLenFeature([], tf.string),
            'image/filename': tf.FixedLenFeature([], tf.string),
            'image/ID': tf.FixedLenFeature([], tf.string),
            'image/format': tf.FixedLenFeature([], tf.string),
            'image/height': tf.FixedLenFeature([], tf.int64),
            'image/width': tf.FixedLenFeature([], tf.int64),
            'image/channels': tf.FixedLenFeature([], tf.int64),
            'image/colorspace': tf.FixedLenFeature([], tf.string),
            'image/segmentation/class/encoded': tf.FixedLenFeature([], tf.string),
            'image/segmentation/class/format': tf.FixedLenFeature([], tf.string),
            })
    image = tf.image.decode_image(example['image/encoded'])
    image.set_shape([IMG_WIDTH, IMG_HEIGHT, 3])
    label = tf.image.decode_image(example['image/segmentation/class/encoded'])
    label.set_shape([IMG_WIDTH, IMG_HEIGHT, 1])
    image_float = tf.to_float(image)
    label_float = tf.to_float(label)
    return (image_float, label_float)
# -----------------------------------------------

# Create training and testing datasets
test_set_size = math.floor(0.20 * len(tfrecord_paths))
training_dataset = tf.data.TFRecordDataset(tfrecord_paths)
training_dataset = training_dataset.skip(test_set_size)
training_dataset = training_dataset.map(_parse_tfrecord)

test_dataset = tf.data.TFRecordDataset(tfrecord_paths)
test_dataset = test_dataset.take(test_set_size)
test_dataset = test_dataset.map(_parse_tfrecord)
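The skip/take split above can be checked with plain Python lists: slicing stands in for the dataset ops here purely for illustration (the file names below are made up, not from my export.json):

```python
import math

# Hypothetical stand-in for tfrecord_paths: 100 dummy file names.
tfrecord_paths = [f"record_{i}.tfrecord" for i in range(100)]

test_set_size = math.floor(0.20 * len(tfrecord_paths))  # 20% held out

# dataset.take(n) keeps the first n elements; dataset.skip(n) drops them,
# so the two datasets partition the records without overlap.
test_paths = tfrecord_paths[:test_set_size]      # analogous to .take(n)
training_paths = tfrecord_paths[test_set_size:]  # analogous to .skip(n)

assert len(test_paths) == 20
assert len(training_paths) == 80
assert not set(test_paths) & set(training_paths)  # disjoint sets
```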

# Printing training_dataset yields: '<MapDataset shapes: ((256, 256, 3), (256, 256, 1)), types: (tf.float32, tf.float32)>'

# Load Inception v3
from keras.applications.inception_v3 import InceptionV3
model = InceptionV3(input_shape = (IMG_HEIGHT, IMG_WIDTH, 3), include_top=False, weights='imagenet')
model.compile(optimizer='RMSProp', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(training_dataset, epochs=10, steps_per_epoch=30) # -=CAUSES ERROR=-

Starting from How to Properly Combine TensorFlow's Dataset API and Keras?, I tried the second solution there, but I hit the same error even after changing model.fit() to take a one_shot_iterator instead of the tf.data.Dataset.

As I found in https://github.com/tensorflow/tensorflow/issues/20698, this seems to be a fairly common problem. However, every solution posted there led to the same error I was getting before.
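For reference, the pattern that issue thread converges on has two parts: batch (and repeat) the dataset so each element is a 4-D batch, and use tf.keras rather than the standalone keras package, since only tf.keras's Model.fit accepts tf.data.Dataset objects. A self-contained sketch on a synthetic dataset (the array sizes and batch size below are illustrative assumptions, not from my data):

```python
import numpy as np
import tensorflow as tf

IMG_HEIGHT = IMG_WIDTH = 256
BATCH_SIZE = 2  # arbitrary illustrative batch size

# Synthetic stand-in for the parsed (image, mask) pairs -- an assumption
# used only to make this sketch runnable without the TFRecords.
images = np.zeros((4, IMG_HEIGHT, IMG_WIDTH, 3), dtype=np.float32)
masks = np.zeros((4, IMG_HEIGHT, IMG_WIDTH, 1), dtype=np.float32)
dataset = tf.data.Dataset.from_tensor_slices((images, masks))

# Batching turns each element from a single (H, W, C) image into a
# (batch, H, W, C) tensor, which is what Model.fit expects; repeat()
# lets fit draw steps_per_epoch batches every epoch without exhausting
# the dataset.
batched = dataset.batch(BATCH_SIZE).repeat()
```

With a dataset shaped like this, the fit call would become `model.fit(batched, epochs=10, steps_per_epoch=30)` on a model built from `tensorflow.keras.applications` instead of `keras.applications` (whether that alone resolves the ndim error on this TF version is exactly what I'm unsure about).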

0 Answers

No answers yet.