Efficient list operations

Time: 2015-11-11 19:40:02

Tags: python list function append

I have a large matrix (1,017,209 rows) from which I need to read elements, operate on them, and collect the results into lists. With 10,000 or even 100,000 rows it finishes in a reasonable time, but with 1,000,000 it does not. Here is my code:

import pandas as pd

data = pd.read_csv('scaled_train.csv', index_col=False, header=0)
new = data.as_matrix()

def vectorized_id(j):
    """Return a 1115-dimensional unit vector with a 1.0 in the j-1'th position
    and zeroes elsewhere.  This is used to convert the store ids (1...1115)
    into a corresponding desired input for the neural network.
    """
    j = j - 1    
    e = [0] * 1115
    e[j] = 1.0
    return e

def vectorized_day(j):
    """Return a 7-dimensional unit vector with a 1.0 in the j-1'th position
    and zeroes elsewhere.  This is used to convert the days (1...7)
    into a corresponding desired input for the neural network.
    """
    j = j - 1
    e = [0] * 7
    e[j] = 1.0
    return e

list_b = []
list_a = []

for x in xrange(0,1017209):
    a1 = vectorized_id(new[x][0])
    a2 = vectorized_day(new[x][1])
    a3 = [new[x][5]]
    a = a1 + a2 + a3
    b = new[x][3]
    list_a.append(a)
    list_b.append(b)

What makes it slow at that scale (what is the bottleneck)? Is there a way to optimize it?

1 answer:

Answer 0 (score: 1)

A few things:

  1. Don't read the whole file in at once; nothing you do seems to require more than one row at a time.
  2. Load the data with csv.reader instead.
  3. Stop indexing into the giant new array entirely; process each row as you read it (see the sketch after this list).
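
Putting those points together, here is a minimal sketch of a streaming version using csv.reader, reusing the vectorized_id and vectorized_day helpers from the question. The column layout is an assumption mirrored from the question's new[x][...] indexing (column 0 = store id, column 1 = day, column 3 = the target value, column 5 = the extra feature); adjust the indices if the actual file differs.

import csv

list_a = []
list_b = []

with open('scaled_train.csv', 'rb') as f:   # 'rb' for Python 2's csv module
    reader = csv.reader(f)
    next(reader)                            # skip the header row
    for row in reader:
        # Assumed layout, mirroring new[x][0], new[x][1], new[x][3], new[x][5]
        a = vectorized_id(int(row[0])) + vectorized_day(int(row[1])) + [float(row[5])]
        list_a.append(a)
        list_b.append(float(row[3]))

Each row is converted as it is read, so the file never has to be loaded and turned into one large in-memory matrix first.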