IndexError: list index out of range in TensorFlow

Asked: 2018-11-21 13:46:55

Tags: python tensorflow tflearn

I am getting an error, IndexError: list index out of range. The traceback says

Run id: P0W5X0
Log directory: /tmp/tflearn_logs/
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/local/Cellar/python@2/2.7.15/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/local/Cellar/python@2/2.7.15/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/Users/xxx/anaconda/xxx/lib/python2.7/site-packages/tflearn/data_flow.py", line 201, in fill_batch_ids_queue
    ids = self.next_batch_ids()
  File "/Users/xxx/anaconda/xxx/lib/python2.7/site-packages/tflearn/data_flow.py", line 215, in next_batch_ids
    batch_start, batch_end = self.batches[self.batch_index]
IndexError: list index out of range

The code I wrote is:

# coding: utf-8
import numpy as np  # needed below for np.array and reshape
import tensorflow as tf
import tflearn

from tflearn.layers.core import input_data,dropout,fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression

tf.reset_default_graph()
net = input_data(shape=[None,20000, 4, 42])
net = conv_2d(net, 4, 16, activation='relu')
net = max_pool_2d(net, 1)
net = tflearn.activations.relu(net)
net = dropout(net, 0.5)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.5, loss='categorical_crossentropy')

model = tflearn.DNN(net)

model.fit(np.array(trainDataSet).reshape(1,20000, 4, 42), np.array(trainLabel), n_epoch=400, batch_size=32, validation_set=0.1, show_metric=True)


pred = np.array(model.predict(np.array(testDataSet).reshape(1,20000, 4, 42)).argmax(axis=1))

label = np.array(testLabel).argmax(axis=0)
accuracy = np.mean(pred == label, axis=0)

print(accuracy)

I really don't understand why this error occurs. I tried rewriting it as

model.fit(np.array(trainDataSet).reshape(1,20000, 4, 42), np.array(trainLabel), n_epoch=400, batch_size=1, validation_set=0.1, show_metric=True) 

because I thought the batch size was causing this error, but the same error occurred. I also tried other values for this argument, and the same error happened again. What is wrong with my code? How should it be fixed?

2 Answers:

Answer 0 (score: 0)

I ran into the same problem. My solution was to make n_epoch equal to the number of rows in the dataset. For example, my array had shape 461*5, so I set n_epoch to 461. You can also make the value a bit larger or smaller than the number of rows; in my code, 500 or 400 worked as well.
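
A minimal sketch of what this workaround looks like, assuming a hypothetical dataset of shape (461, 5) with two-class one-hot labels (all variable names here are illustrative, not from the original post):

import numpy as np
import tflearn

# Hypothetical stand-in data: 461 rows of 5 features, one-hot labels
data = np.random.rand(461, 5)
labels = (np.random.rand(461, 2) > .5) * 1.0

net = tflearn.input_data(shape=[None, 5])
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net)
model = tflearn.DNN(net)

# The suggestion: set n_epoch close to the number of rows (461 here);
# 400 or 500 reportedly worked as well
model.fit(data, labels, n_epoch=461, batch_size=32, show_metric=True)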

Answer 1 (score: 0)

Problem

How to fix the list index out of range error?

Answer

From your code, it looks like the training and test sets you pass to the neural network contain only 1 element of shape 20000x4x42, because of reshape(1, 20000, 4, 42). I believe you meant to have 20000 elements of shape 4x42.
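
You can see the difference directly from the NumPy shapes; here random data stands in for trainDataSet (a sketch, not your original data):

import numpy as np

x = np.random.rand(20000, 4, 42)

# One single sample of shape 20000x4x42:
print(x.reshape(1, 20000, 4, 42).shape)   # (1, 20000, 4, 42)

# 20000 samples of shape 4x42, each with one channel:
print(x.reshape(20000, 4, 42, 1).shape)   # (20000, 4, 42, 1)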

Let's use reshape(20000, 4, 42, 1) instead of reshape(1, 20000, 4, 42). We also have to change input_data(shape=[None, 20000, 4, 42]) to input_data(shape=[None, 4, 42, 1]).

With those changes, your code works fine.

Working code

# coding: utf-8
import numpy as np  # needed below for np.array and reshape
import tensorflow as tf
import tflearn

from tflearn.layers.core import input_data,dropout,fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression

tf.reset_default_graph()
net = input_data(shape=[None, 4, 42, 1])
net = conv_2d(net, 4, 16, activation='relu')
net = max_pool_2d(net, 1)
net = tflearn.activations.relu(net)
net = dropout(net, 0.5)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.5, loss='categorical_crossentropy')

model = tflearn.DNN(net)

model.fit(np.array(trainDataSet).reshape(20000, 4, 42, 1), np.array(trainLabel), n_epoch=400, batch_size=32, validation_set=0.1, show_metric=True)


pred = np.array(model.predict(np.array(testDataSet).reshape(20000, 4, 42, 1)).argmax(axis=1))

label = np.array(testLabel).argmax(axis=1)  # argmax over the class axis, one label per sample
accuracy = np.mean(pred == label, axis=0)

print(accuracy)

Output

To make the above code run, we have to include some training and test data. Use NumPy random data like this:

import numpy as np

trainDataSet = np.random.rand(20000, 4, 42)
trainLabel = ( np.random.rand(20000,2) > .5 ) *1.0

testDataSet = np.random.rand(20000, 4, 42)
testLabel = ( np.random.rand(20000,2) > .5 ) *1.0

Here is the output:

Run id: JDSG88
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 18000
Validation samples: 2000
--
Training Step: 563  | total loss: 12.13387 | time: 5.312s
| Adam | epoch: 001 | loss: 12.13387 - acc: 0.7138 | val_loss: 11.90437 - val_acc: 0.7400 -- iter: 18000/18000
--
Training Step: 1126  | total loss: 11.58909 | time: 5.184s
| Adam | epoch: 002 | loss: 11.58909 - acc: 0.7496 | val_loss: 11.90437 - val_acc: 0.7400 -- iter: 18000/18000
--
Training Step: 1689  | total loss: 11.93482 | time: 5.174s
| Adam | epoch: 003 | loss: 11.93482 - acc: 0.7357 | val_loss: 11.90437 - val_acc: 0.7400 -- iter: 18000/18000
--
...