How to fix the network input shape

Time: 2020-09-21 09:15:17

Tags: python tensorflow keras

There are 1275 images, each of size (128, 19, 1). The images are divided into groups of five, so there are 255 (1275/5) samples of 5 images each, and the final shape of the data is (255, 5, 128, 19, 1). This data must be fed to the ConvLSTM2D network whose code is below. Training completes without any problem, but as soon as evaluation starts, the error below is raised. Thanks to everyone for the help.
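(For reference, the grouping described here amounts to a single reshape. This is a minimal sketch, assuming the 1275 images are already stacked in order along the first axis; the `stack` array is a hypothetical placeholder:)

import numpy as np

# Hypothetical stack of all 1275 grayscale images: shape (1275, 128, 19, 1).
stack = np.zeros((1275, 128, 19, 1), dtype=np.uint8)

# Group consecutive images five at a time: shape (255, 5, 128, 19, 1).
samples = stack.reshape(255, 5, 128, 19, 1)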

Error:

IndexError: list index out of range

  File "", line 1, in <module>
    runfile('D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature/Training_set/P300/Afrah_convlstm2d.py', wdir='D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature')

  File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)

  File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature/Training_set/P300/Afrah_convlstm2d.py", line 111, in <module>
    test_loss, test_acc = seq.evaluate(test_data)

  File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\keras\engine\training.py", line 1361, in evaluate
    callbacks=callbacks)

  File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\keras\engine\training_arrays.py", line 403, in test_loop
    if issparse(ins[i]) and not K.is_sparse(feed[i]):

IndexError: list index out of range

#Importing libraries
#-------------------------------------------------
from PIL import Image
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import os
from matplotlib import pyplot as plt


#Data Preprocessing
#-----------------------------------------------------------------
Data = np.zeros((255,5,128,19,1),dtype=np.uint8)

image_folder = 'D:\\thesis\\Paper 3\\Feature Extraction\\two_dimension_Feature_extraction\\stft_feature\\Training_set\\P300'
images = [img for img in os.listdir(image_folder) if img.endswith(".png")]

for image in images:
    img = Image.open(image).convert('L')
    array = np.array(img)
    array = np.expand_dims(np.array(img), axis=2)
    for i in range(0, len(Data)):
        for j in range(0, 4):
            Data[i,j] = array

           

labels = np.zeros((2,len(Data)), dtype=np.uint8)
labels = np.transpose(labels)
for i in range(0, len(Data) ):
    if i <= 127:
        labels[i][0] = 1
    elif i > 127 :
        labels[i][1] = 1            
            
#Network Configuration
#--------------------------------------------------------------------------------------------------------------------------
seq = Sequential()
seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   input_shape=(5, 128, 19, 1),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(Flatten())
seq.add(Dense(output_dim = 128, activation = 'relu'))
seq.add(Dense(output_dim = 2, activation = 'relu'))
seq.compile(loss='binary_crossentropy', optimizer='adadelta', metrics=['acc'])

#Fit the Data on Model
#--------------------------------------------------------------------------------------
train_data_1 = Data[0:84]
train_data_2 = Data[127:212]
train_data = np.concatenate([train_data_1, train_data_2])
label_train_1 = labels[0:84]
label_train_2 = labels[127:212]
label_train = np.concatenate([label_train_1, label_train_2])

val_data_1 = Data[84:104]
val_data_2 = Data[212:232]
val_data = np.concatenate([val_data_1, val_data_2])
label_val_1 = labels[84:104]
label_val_2 = labels[212:232]
label_val = np.concatenate([label_val_1, label_val_2])


test_data_1 = Data[104:127]
test_data_2 = Data[232:]
test_data = np.concatenate([test_data_1, test_data_2])
label_test_1 = labels[104:127]
label_test_2 = labels[232:]
label_test = np.concatenate([label_test_1, label_test_2])


history = seq.fit(train_data,label_train, validation_data=( val_data, label_val), epochs = 2 , batch_size = 10)

#Visualize the Result
#---------------------------------------------------------------------------------------
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.plot()
plt.legend()
plt.show()
#Evaluate Model on test Data
#----------------------------------------------------------------------------------------------
test_loss, test_acc = seq.evaluate(test_data)
print('test_acc:', test_acc)

1 Answer:

Answer 0 (score: 0)

The problem is at the end, when you evaluate the model: you simply forgot to pass the y argument (the test labels). Without y, the list of arrays Keras feeds to its test loop is shorter than the list of tensors the model expects, so indexing it (ins[i]) fails with the IndexError you see. This modification solves the problem:

test_loss, test_acc = seq.evaluate(test_data, label_test)
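With that change, the evaluation block at the end of the script would read as follows (a minimal sketch; the batch_size here is an assumption, mirroring the one used in fit):

test_loss, test_acc = seq.evaluate(test_data, label_test, batch_size=10)
print('test_acc:', test_acc)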