Tensorflow tflearn error: index 466 is out of bounds for axis 0 with size 129

Date: 2018-10-31 20:03:33

Tags: tensorflow tflearn

I have an application that trains on 128x128 spectrogram image slices. When I try to train the model with model.fit(train_x, train_y, ..) I get this error:

    line 187, in slice_array
        return X[start]

    IndexError: index 234 is out of bounds for axis 0 with size 127

I searched online for an answer and tried adding reset_default_graph(), wrapping the data in np.array, and even reinstalling tflearn, but nothing worked. Thanks in advance.

Main code:

Setup

slicesPath = "Spectogram/Style"
filesPerGenre = 2000
sliceSize = 128
validationRatio = 0.3
testRatio = 0.1
batchSize = 128
learningRate = 0.001
nbEpoch = 20
pixelPerSecond = 50

Create the model

Model = model.create_model(nbClasses, sliceSize)

Create or load a new dataset

train_X, train_Y, validation_X, validation_y = dataSet.getDataset(
    filesPerGenre, genres, sliceSize, validationRatio, testRatio, mode="train")

#Define run id for graphs
run_id = "MusicGenres - "+str(batchSize)+" 
"+''.join(random.SystemRandom().choice(string.ascii_uppercase) for _ in 
range(10))

#Train the model
print("[+] Training the model...")
Model.fit(train_X, train_Y, n_epoch=nbEpoch, batch_size=batchSize,
          shuffle=True, validation_set=(validation_X, validation_y),
          snapshot_step=100, show_metric=True, run_id=run_id)
print("Model trained! ✅")

#Save trained model
print("[+] Saving the weights...")
Model.save('musicDNN.tflearn')
print("[+] Weights saved! ✅")

Model function

# Imports used by this snippet
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

def create_model(nbClasses, imageSize):
    print("[+] Creating model...")
    convnet = input_data(shape=[None, imageSize, imageSize, 1], name='input')

    convnet = conv_2d(convnet, 64, 2, activation='elu', weights_init="Xavier")
    convnet = max_pool_2d(convnet, 2)

    convnet = conv_2d(convnet, 128, 2, activation='elu', weights_init="Xavier")
    convnet = max_pool_2d(convnet, 2)

    convnet = conv_2d(convnet, 256, 2, activation='elu', weights_init="Xavier")
    convnet = max_pool_2d(convnet, 2)

    convnet = conv_2d(convnet, 512, 2, activation='elu', weights_init="Xavier")
    convnet = max_pool_2d(convnet, 2)

    convnet = fully_connected(convnet, 1024, activation='elu')
    convnet = dropout(convnet, 0.5)

    convnet = fully_connected(convnet, nbClasses, activation='softmax')
    convnet = regression(convnet, optimizer='rmsprop',
                         loss='categorical_crossentropy')

    model = tflearn.DNN(convnet)
    print("    Model created! ✅")
    return model
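
Because the last layer is a softmax over nbClasses trained with categorical_crossentropy, the label arrays passed to fit() need to be one-hot vectors of width nbClasses. If the dataset stores integer class ids instead, a conversion along these lines would be needed (a sketch; skip it if the labels are already one-hot):

# Sketch: turn integer class ids into one-hot rows of width nbClasses.
# Only needed if train_Y / validation_y are not one-hot already.
from tflearn.data_utils import to_categorical
train_Y = to_categorical(train_Y, nbClasses)
validation_y = to_categorical(validation_y, nbClasses)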

Dataset code:

shuffle(data)

# Extract X and y
X, y = zip(*data)

# Split data
validationNb = int(len(X) * validationRatio)
testNb = int(len(X) * testRatio)
trainNb = len(X) - (validationNb + testNb)
print(trainNb)

# Prepare for Tflearn at the same time
train_X = np.array(X[:trainNb]).reshape([-1, sliceSize, sliceSize, 1])
train_y = np.array(y[:trainNb])
validation_X = np.array(X[trainNb:trainNb + validationNb]).reshape(
    [-1, sliceSize, sliceSize, 1])
validation_y = np.array(y[trainNb:trainNb + validationNb])
test_X = np.array(X[-testNb:]).reshape([-1, sliceSize, sliceSize, 1])
test_y = np.array(y[-testNb:])
print("    Dataset created! ✅")

# Save
saveDataset(train_X, train_y, validation_X, validation_y, test_X, test_y,
            nbPerGenre, genres, sliceSize)

return train_X, train_y, validation_X, validation_y, test_X, test_y
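
One detail of this split worth double-checking (a guess, not a confirmed diagnosis): testNb = int(len(X) * testRatio) rounds down, and when it comes out as 0 the slice X[-testNb:] is X[-0:], which silently returns the whole list, so test_X ends up as large as the full dataset. Slicing by an explicit start index avoids that edge case, for example:

# Sketch: take the test set by explicit start index instead of a negative
# offset, so testNb == 0 yields an empty test set rather than the whole list.
testStart = trainNb + validationNb
test_X = np.array(X[testStart:]).reshape([-1, sliceSize, sliceSize, 1])
test_y = np.array(y[testStart:])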

0 Answers:

No answers yet.