LSTM input data preparation and definition (Python)

Time: 2019-10-14 08:47:34

Tags: python tensorflow keras lstm spyder

There is one float value per time step, and each recording is an array of 1000 floats. The time is not a recorded timestamp; rather, the interval between consecutive values in the array is 0.1 seconds.

I want to prepare the data, store it in a suitable container, and then pass it to an LSTM model.

Once without specifying the exact 0.1-second duration, and then a second time taking the duration into account.

What I did:

I read the data from .mat files and append each 1000-float array to a list:

import time
import tensorflow as tf
from scipy.io import loadmat
from tensorflow.keras.callbacks import TensorBoard

max_Value = []
min_Value = []
scale = 2.6
X_Data = []
Y_Data = []
epochs_Num = 10
batch_Num = 64
name_Model = f'LSTM_{int(time.time())}'

# Read each .mat file and append its two columns to the data lists.
for i_Path in origin_Data_Path:
    mat_Data = loadmat(i_Path)
    data     = mat_Data['data']
    X_Data.append(data[:, 0])
    Y_Data.append(data[:, 1])
# ...

opt = tf.keras.optimizers.Adam(lr=0.001,decay=1e-6)
ls  = tf.keras.losses.sparse_categorical_crossentropy

tensorboard = TensorBoard(log_dir=f'logs/{name_Model}')

myModel.compile(loss=ls,optimizer=opt,metrics=['accuracy'])

history = myModel.fit(X_Data,
                      labels_Categorical,
                      batch_size=batch_Num,
                      epochs=epochs_Num,
                      validation_split=0.2,
                      callbacks= [tensorboard]  )

myModel.save("myMod.h5")
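
For context, myModel and labels_Categorical are never defined in the code shown above. A minimal sketch of a model whose input matches the (1000, 1) sequences described in the question, assuming a single LSTM layer and a hypothetical num_Classes, could look like this (an illustration only, not the asker's actual model):

import tensorflow as tf

num_Classes = 5  # hypothetical number of classes; not given in the question

# Classifier over sequences of 1000 time steps with 1 feature per step,
# i.e. an input shape of (1000, 1) per sample.
myModel = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(1000, 1)),
    tf.keras.layers.Dense(num_Classes, activation='softmax'),
])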

Error:

  File "C:\Python\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)

  File "C:\Python\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "C:/Users/Light/Desktop/Gesture/Test01.py", line 139, in <module>
    callbacks= [tensorboard]  )

  File "C:\Python\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)

  File "C:\Python\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 224, in fit
    distribution_strategy=strategy)

  File "C:\Python\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 516, in _process_training_inputs
    steps=steps_per_epoch)

  File "C:\Python\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2472, in _standardize_user_data
    exception_prefix='input')

  File "C:\Python\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_utils.py", line 565, in standardize_input_data
    'with shape ' + str(data_shape))

  ValueError: Error when checking input: expected lstm_input to have 3 dimensions, but got array with shape (1000, 1)

[Screenshots: X_Data structure / data struct]

2 Answers:

Answer 0 (score: 0)

This is because you need to give it input that can be batched. You have given it (1000, 1), which appears to correspond to a single sample of your data. Keras expects the shape (batch/samples, time, channels), which in your case would be (number of samples, 1000, 1). Those are the 3 expected dimensions.

Check how you construct your input data. You do not need to do the batching yourself; fit() will split the input into batches of that size for you.
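
In practice (assuming every recording really has exactly 1000 samples), the list of per-recording arrays can be stacked into a single 3-D NumPy array before calling fit(). A minimal sketch with toy data in place of the question's X_Data:

import numpy as np

# Toy stand-in for X_Data: a list of 1-D float arrays of length 1000.
X_Data = [np.random.rand(1000) for _ in range(8)]

# Stack the per-recording arrays into shape (samples, time) ...
X_Array = np.stack(X_Data, axis=0)
# ... then add a trailing channel axis to get (samples, time, channels).
X_Array = X_Array[..., np.newaxis]

print(X_Array.shape)  # (8, 1000, 1) -- the 3 dimensions the LSTM input expects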

Answer 1 (score: 0)

I realized that the container should be an N-dimensional array. That solved the model's input shape problem, but now there is a problem with low accuracy.