Autoencoder with a custom dataset in TensorFlow 2.3: ValueError: `y` argument is not supported when using dataset as input

Asked: 2020-10-05 19:16:49

Tags: tensorflow tensorflow2.0 tensorflow-datasets tensorflow2

I am trying to implement an autoencoder in TensorFlow 2.3, using my own image dataset stored on disk as input. Can someone explain the correct way to do this?

I tried loading the data with tf.keras.preprocessing.image_dataset_from_directory(), but when I start training on the data returned by that method I get the following error:

"ValueError: `y` argument is not supported when using dataset as input."

Please find below the code I am running:

'''

import tensorflow as tf
from convautoencoder import ConvAutoencoder
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt
import numpy as np

EPOCHS = 25
batch_size = 1
img_height = 180
img_width = 180
data_dir = "/media/aniruddha/FE47-91B8/Laptop_Backup/Auto-Encoders/Basic/data"

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
  data_dir,
  validation_split=0.2,
  subset="training",
  seed=123,
  image_size=(img_height, img_width),
  batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
  data_dir,
  validation_split=0.2,
  subset="validation",
  seed=123,
  image_size=(img_height, img_width),
  batch_size=batch_size)

# note: the autoencoder is built for 224x224x3 inputs, while the datasets above load 180x180 images
(encoder, decoder, autoencoder) = ConvAutoencoder.build(224, 224, 3)
opt = Adam(lr=1e-3)
autoencoder.compile(loss="mse", optimizer=opt)

H = autoencoder.fit(train_ds, train_ds, validation_data=(val_ds, val_ds), epochs=EPOCHS, batch_size=batch_size)

'''

2 Answers:

Answer 0 (score: 1)

I solved the problem. I was not feeding the input data to the model as (input, target) tuples. Once I corrected that, training started fine.
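
For anyone hitting the same error with the question's tf.data pipeline, here is a minimal sketch of what that fix might look like (my reconstruction, not the poster's code, reusing train_ds, val_ds, autoencoder, and EPOCHS from the question): image_dataset_from_directory yields (image, label) batches, so remap each batch to (image, image) pairs and drop the separate target argument from fit().

'''

# remap (image, label) batches to (image, image) pairs, scaling pixels to [0, 1]
train_ds = train_ds.map(lambda x, y: (x / 255.0, x / 255.0))
val_ds = val_ds.map(lambda x, y: (x / 255.0, x / 255.0))

# the dataset now supplies both inputs and targets, so no second argument is passed
H = autoencoder.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)

'''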

Answer 1 (score: 0)

I used generators to feed the input data to the autoencoder as (input, input) tuples. Please find my code below.

# imports needed to make this snippet self-contained; config and ConvAutoencoder
# are project-specific modules (adjust these imports to your project layout)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
from convautoencoder import ConvAutoencoder
import config

EPOCHS = 25  # example values; EPOCHS and BS are not defined in the original snippet
BS = 32

# initialize the training and validation data augmentation objects
trainAug = ImageDataGenerator(rescale=1. / 255)
valAug = ImageDataGenerator(rescale=1. / 255)

# initialize the training generator
trainGen = trainAug.flow_from_directory(
    config.TRAIN_PATH,
    class_mode="input",
    classes=None,
    target_size=(64, 64),
    color_mode="grayscale",
    shuffle=True,
    batch_size=BS)
# initialize the validation generator (note: this and the test generator below
# also read from config.TRAIN_PATH in the original snippet; use separate directories if available)
valGen = valAug.flow_from_directory(
    config.TRAIN_PATH,
    class_mode="input",
    classes=None,
    target_size=(64, 64),
    color_mode="grayscale",
    shuffle=False,
    batch_size=BS)
# initialize the testing generator
testGen = valAug.flow_from_directory(
    config.TRAIN_PATH,
    class_mode="input",
    classes=None,
    target_size=(64, 64),
    color_mode="grayscale",
    shuffle=False,
    batch_size=BS)

early_stop = EarlyStopping(monitor='val_loss', patience=20)
mc = ModelCheckpoint('best_model_1.h5', monitor='val_loss', mode='min', save_best_only=True)

# construct our convolutional autoencoder
print("[INFO] building autoencoder...")
(encoder, decoder, autoencoder) = ConvAutoencoder.build(64, 64, 1)
opt = Adam(learning_rate= 0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-04, amsgrad=False)
autoencoder.compile(loss="mse", optimizer=opt)

# train the convolutional autoencoder
H = autoencoder.fit(trainGen, validation_data=valGen, epochs=EPOCHS, batch_size=BS, callbacks=[mc, early_stop])
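
The key detail here is class_mode="input": with that setting, flow_from_directory yields each image as both the input and the target, which is exactly what the autoencoder's MSE loss expects. As a quick sanity check after training, here is a minimal sketch (not part of the original answer, reusing autoencoder and testGen from above) that pulls one batch from the test generator and plots inputs against their reconstructions:

'''

import matplotlib.pyplot as plt

# grab one batch; with class_mode="input" the generator yields (images, images)
images, _ = next(testGen)
reconstructions = autoencoder.predict(images)

# top row: originals, bottom row: reconstructions
n = min(4, images.shape[0])
fig, axes = plt.subplots(2, n, figsize=(2 * n, 4), squeeze=False)
for i in range(n):
    axes[0, i].imshow(images[i].squeeze(), cmap="gray")
    axes[1, i].imshow(reconstructions[i].squeeze(), cmap="gray")
    axes[0, i].axis("off")
    axes[1, i].axis("off")
plt.show()

'''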