TensorFlow image augmentation stops my Keras model from working

Asked: 2019-04-26 15:06:54

Tags: python tensorflow keras neural-network

I am implementing a CNN for image classification; I picked the architecture more or less arbitrarily and built it with Keras:

import keras
from keras.models import Sequential, Input, Model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import LeakyReLU

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation="relu", input_shape=(n,n,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])



train = model.fit(train_X, train_label, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(valid_X, valid_label))

I am trying to do image augmentation with code that uses TensorFlow directly. Compared with Keras's ImageDataGenerator, I prefer this approach because it gives me more flexibility.



import numpy as np
import tensorflow as tf


def rotate_images(X_imgs):
    X_rotate = []
    tf.reset_default_graph()
    X = tf.placeholder(tf.float32, shape = (n, n, 1))
    k = tf.placeholder(tf.int32)
    tf_img = tf.image.rot90(X, k = k)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for img in X_imgs:
            for i in range(3):  # Rotation at 90, 180 and 270 degrees
                rotated_img = sess.run(tf_img, feed_dict = {X: img, k: i + 1})
                X_rotate.append(rotated_img)

    X_rotate = np.array(X_rotate, dtype = np.float32)
    return X_rotate




When I try to fit the model, I get the following error message:

InvalidArgumentError: Tensor dense_7_target:0, specified in either feed_devices or fetch_devices was not found in the Graph

It looks like the graph is something TensorFlow uses internally, and I suspect Keras and TensorFlow are interacting badly in my code. Surprisingly, I was able to run the model once, but now it is broken again.

Let me know if you need more information; thanks for your help.

2 Answers:

Answer 0 (score: 2):

Instead of calling tf.reset_default_graph(), you can create a new temporary graph just for the function:

import numpy as np
import tensorflow as tf

def rotate_images(X_imgs):
    X_rotate = []
    with tf.Graph().as_default():
        X = tf.placeholder(tf.float32, shape = (n, n, 1))
        k = tf.placeholder(tf.int32)
        tf_img = tf.image.rot90(X, k = k)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for img in X_imgs:
                for i in range(3):  # Rotation at 90, 180 and 270 degrees
                    rotated_img = sess.run(tf_img, feed_dict = {X: img, k: i + 1})
                    X_rotate.append(rotated_img)
        X_rotate = np.array(X_rotate, dtype = np.float32)
        return X_rotate
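
If it helps, here is a rough sketch of how the augmented images could be fed back into training (assuming train_X has shape (num_samples, n, n, 1) and train_label is one-hot encoded, as in your question). Since rotate_images returns three rotated copies per input image, the labels are repeated three times in the same order:

import numpy as np

# Rough usage sketch (assumes train_X has shape (num_samples, n, n, 1) and
# train_label is one-hot encoded). rotate_images appends three rotations per
# input image, so each label is repeated three times to keep them aligned.
rotated_X = rotate_images(train_X)
rotated_label = np.repeat(train_label, 3, axis=0)

augmented_X = np.concatenate([train_X, rotated_X], axis=0)
augmented_label = np.concatenate([train_label, rotated_label], axis=0)

train = model.fit(augmented_X, augmented_label,
                  batch_size=batch_size, epochs=epochs, verbose=1,
                  validation_data=(valid_X, valid_label))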

Answer 1 (score: 1):

This can be done with TF 2.0. Below, I converted your CNN model from standalone Keras to TF 2.0 (tf.keras) and tested it on the cifar10 dataset.

from tensorflow.keras import datasets, layers, models
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from tensorflow.keras.losses import sparse_categorical_crossentropy
from tensorflow.keras.optimizers import Adam

(train_X, train_label), (valid_X, valid_label) = datasets.cifar10.load_data()
train_X, valid_X = train_X / 255.0, valid_X / 255.0

n = 32
num_classes = 10
batch_size = 32
epochs = 10

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation="relu", input_shape=(n, n, 3)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(5, 5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=sparse_categorical_crossentropy, optimizer=Adam(), metrics=['accuracy'])
train = model.fit(train_X, train_label, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(valid_X, valid_label))

TF 2.0 also makes image augmentation easy. Below is an example that augments the images in your dataset by flipping them, setting the horizontal_flip and vertical_flip arguments to True.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True)
model.fit_generator(datagen.flow(train_X, train_label, batch_size=batch_size), steps_per_epoch=len(train_X) / 32, epochs=epochs)
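
If you still want the explicit 90-degree rotations from your original function, TF 2.0 executes eagerly, so tf.image.rot90 can be applied directly inside a tf.data pipeline with no Session or placeholder handling. Here is a minimal sketch, reusing train_X, train_label, batch_size and epochs from above; randomly picking the rotation count per image is just one possible policy:

import tensorflow as tf

# Minimal sketch: rotate each image by a random multiple of 90 degrees
# (0, 90, 180 or 270) on the fly. Runs eagerly in TF 2.x, so no Session,
# placeholder or manual graph management is needed.
def random_rot90(image, label):
    k = tf.random.uniform([], minval=0, maxval=4, dtype=tf.int32)
    return tf.image.rot90(image, k=k), label

dataset = (tf.data.Dataset.from_tensor_slices((train_X, train_label))
           .shuffle(1024)
           .map(random_rot90, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(batch_size)
           .prefetch(tf.data.experimental.AUTOTUNE))

model.fit(dataset, epochs=epochs)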