How to expand the dimensions of each batch in a TensorFlow Dataset

Time: 2020-10-28 08:05:35

Tags: python tensorflow keras tensorflow-datasets cnn

I created a tf.data dataset, but I keep running into the error below when trying to fit it to a sequential CNN model.

 ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 28, 28]

At the moment my train dataset has the form ('x_train', 'y_train'), where each batch in 'x_train' has shape [32, 28, 28] and each batch in 'y_train' has shape (32,). How can I change the shape of each 'x_train' batch to [32, 28, 28, 1] without changing the shape of the batches in 'y_train'?

Here is my full code:

#imports
import tarfile
import numpy as np
import pandas as pd
import matplotlib
import tensorflow as tf

# Get Data



def get_images():

    """Get the fashion-mnist images.
    Returns
    -------
    (x_train, x_test) : tuple of uint8 arrays
        Flattened grayscale image data with shape (num_samples, 784)
    (y_train, y_test) : tuple of uint8 arrays
        Labels (integers in range 0-9) with shape (num_samples,)
    Examples
    --------
    >>> from reader import get_images
    >>> (x_train, y_train), (x_test, y_test) = get_images() 
    Notes
    -----
    The data is split into train and test sets as described in the original paper [1].
    References
    ----------
    1. Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: a Novel Image Dataset for 
    Benchmarking Machine Learning Algorithms. CoRR [Internet]. 2017;abs/1708.07747.
    Available from: http://arxiv.org/abs/1708.07747
    """

    with tarfile.open('data.tar.gz', 'r') as f:
        f.extractall()

    df_train = pd.read_csv('fashion_mnist_train.csv')
    df_test = pd.read_csv('fashion_mnist_test.csv')

    x_train = df_train.drop('label', axis=1).to_numpy(np.uint8)
    y_train = df_train['label'].to_numpy(np.uint8)
    x_test = df_test.drop('label', axis=1).to_numpy(np.uint8)
    y_test = df_test['label'].to_numpy(np.uint8)

    return (x_train, y_train), (x_test, y_test)

(x_train,y_train),(x_test,y_test)=get_images()

clothing=['top','trouser','pullover','dress','coat','sandal','shirt','sneaker','bag','ankle boot']

BUFFER_SIZE=1000

BATCH_SIZE=32

#Reshape x_train and x_test, and scale them to the range [0,1]

new_x_train=[]
new_x_test=[]
for i,train in enumerate(x_train):
    #print(np.shape(train))
    arr=np.reshape(x_train[i],(28,28))
    arr=arr/255.0
    new_x_train.append(arr)

for i,test in enumerate(x_test):
    arr=np.reshape(x_test[i],(28,28))
    arr=arr/255.0
    new_x_test.append(arr)

train_dataset = tf.data.Dataset.from_tensor_slices((new_x_train,y_train)).shuffle(BUFFER_SIZE).batch(BATCH_SIZE,drop_remainder=True)
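# Note: each element of train_dataset is a full batch here: images have
# shape [32, 28, 28] (no channel axis) and labels have shape (32,).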

from tensorflow.keras.layers import LeakyReLU

CNN_model= tf.keras.Sequential()

#CNN_model.add(tf.keras.layers.Lambda(tf.py_function(expand_dims)))

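# Conv2D expects 4-D input (batch, height, width, channels); the 3-D
# batches above ([32, 28, 28]) are what trigger the ValueError.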
CNN_model.add(tf.keras.layers.Conv2D(
    20, (5,5), strides=(1, 1), padding='valid',
    kernel_initializer='glorot_uniform'))

#CNN_model.add(tf.keras.layers(tf.keras.layers.Lambda(
#    function)

CNN_model.add(LeakyReLU(alpha=0.05))

CNN_model.add(tf.keras.layers.MaxPool2D(
    pool_size=(2, 2), strides=None, padding='valid'))

CNN_model.add(tf.keras.layers.Conv2D(
    50, (3,3), strides=(1, 1), padding='valid',
    kernel_initializer='glorot_uniform'))

CNN_model.add(LeakyReLU(alpha=0.05))

CNN_model.add(tf.keras.layers.MaxPool2D(
    pool_size=(2, 2), strides=None, padding='valid'))  
              
CNN_model.add(tf.keras.layers.Conv2D(
    10, (1,1), strides=(1, 1), padding='valid',
    kernel_initializer='glorot_uniform'))

CNN_model.add(LeakyReLU(alpha=0.05))

CNN_model.add(tf.keras.layers.GlobalAveragePooling2D())

CNN_model.add(tf.keras.layers.Softmax(axis=-1))



CNN_model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])

CNN_history = CNN_model.fit(train_dataset, epochs=10)

2 Answers:

Answer 0 (Score: 1)

You could try using this

arr=np.reshape(x_test[i],(1, 28,28))

instead of this

arr=np.reshape(x_test[i],(28,28))

If you are using channels-last (the Keras default), put the 1 as the third dimension instead, i.e. reshape to (28, 28, 1).
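Applied to the question's preprocessing loop, a minimal sketch of that channels-last version looks like this (only the target shape changes; the rest is the question's own code):

for i, train in enumerate(x_train):
    # (784,) -> (28, 28, 1): the trailing 1 is the channel axis Conv2D expects
    arr = np.reshape(x_train[i], (28, 28, 1))
    arr = arr / 255.0
    new_x_train.append(arr)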

Answer 1 (Score: 0)

Well, you can just do a simple expand_dims:

import numpy as np
x_train = np.expand_dims(x_train, axis=-1)

That said, it seems odd. May I ask how you are loading the data? With a generator function?
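Since the data in the question is already batched into a tf.data.Dataset, another option (a sketch, assuming the train_dataset built above) is to map over the dataset and append a channel axis, which leaves the labels untouched:

import tensorflow as tf

# Each element is an (images, labels) batch; expand_dims turns images
# [32, 28, 28] into [32, 28, 28, 1], while labels keep their (32,) shape.
train_dataset = train_dataset.map(
    lambda x, y: (tf.expand_dims(x, axis=-1), y))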