I must be doing something wrong, because my Python script keeps getting slower. In this script I have a query-ID column and a bitscore column, and for each query I only want to keep the rows with the highest scores. I use pandas' groupby function for that, then keep only the rows whose score is >= 90% of the maximum score for that query.
startTime = datetime.now()
data = pd.read_csv(inputfile, names=['queryid', 'subjectid', 'bitscore'], sep='\t')
print "INPUT INFORMATION"
print "Blast inputfile has:", "{:,}".format(data.shape[0]), "records"
print data.dtypes
print "Time test 1 :", str(datetime.now()-startTime)
data['max'] = data.groupby('queryid')['bitscore'].transform(lambda x: x.max())
print "Time test 2", str(datetime.now()-startTime)
data = data[data['bitscore']>=0.9*data['max']]
print "Time test 3", str(datetime.now()-startTime)
Here is the output:
INPUT INFORMATION
Blast inputfile has: 1,367,808 records
queryid object
subjectid object
bitscore float64
dtype: object
Time test 1 : 0:00:05.075944
Time test 2 0:30:40.750674
Time test 3 0:30:41.317064
That is a lot of records, but still... the machine has more than 100 GB of RAM. I ran it yesterday and it took 26 minutes to reach "test 2". Now it has been running for over 30 minutes. Do you think I should wipe Python and reinstall it? Has this happened to anyone else?
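The intended filter can be sketched on toy data (the values below are made up; only the `queryid`/`bitscore` column names come from the script above). Note that passing the string 'max' to transform, instead of a lambda, lets pandas use its optimized aggregation path:

```python
import pandas as pd

# Toy stand-in for the BLAST table.
data = pd.DataFrame({
    'queryid':  ['q1', 'q1', 'q1', 'q2', 'q2'],
    'bitscore': [100.0, 95.0, 50.0, 200.0, 170.0],
})

# Per-query maximum bitscore, broadcast back onto every row.
data['max'] = data.groupby('queryid')['bitscore'].transform('max')

# Keep rows scoring at least 90% of their query's best hit.
data = data[data['bitscore'] >= 0.9 * data['max']]

# q1 keeps 100.0 and 95.0 (>= 90.0); q2 keeps only 200.0 (170.0 < 180.0).
```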
Answer (score: 2)
For completeness, using pandas 0.14.1:
In [13]: pd.set_option('max_rows',10)
In [9]: N = 1400000
In [10]: ngroups = 1000
In [11]: groups = [ "A%04d" % i for i in xrange(ngroups) ]
In [12]: df = DataFrame(dict(A = np.random.choice(groups,size=N,replace=True), B = np.random.randn(N)))
In [14]: df
Out[14]:
A B
0 A0722 0.621374
1 A0390 -0.843030
2 A0897 -1.633165
3 A0546 0.483448
4 A0366 1.866380
... ... ...
1399995 A0515 -1.051668
1399996 A0591 -1.216455
1399997 A0766 -0.914020
1399998 A0635 0.258893
1399999 A0577 1.874328
[1400000 rows x 2 columns]
In [15]: df.groupby('A')['B'].transform('max')
Out[15]:
0 3.688245
1 3.829529
2 3.717359
...
1399997 4.213080
1399998 3.121092
1399999 2.990630
Name: B, Length: 1400000, dtype: float64
In [16]: %timeit df.groupby('A')['B'].transform('max')
1 loops, best of 3: 437 ms per loop
In [17]: ngroups = 10000
In [18]: groups = [ "A%04d" % i for i in xrange(ngroups) ]
In [19]: df = DataFrame(dict(A = np.random.choice(groups,size=N,replace=True), B = np.random.randn(N)))
In [20]: %timeit df.groupby('A')['B'].transform('max')
1 loops, best of 3: 1.43 s per loop
In [23]: ngroups = 100000
In [24]: groups = [ "A%05d" % i for i in xrange(ngroups) ]
In [25]: df = DataFrame(dict(A = np.random.choice(groups,size=N,replace=True), B = np.random.randn(N)))
In [27]: %timeit df.groupby('A')['B'].transform('max')
1 loops, best of 3: 10.3 s per loop
So the transform is roughly O(number_of_groups).
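The timings above pass the string 'max', whereas the question used `transform(lambda x: x.max())`. A quick check (a sketch using modern pandas/NumPy APIs, with `xrange` replaced by `range` and a smaller frame) confirms the two forms produce identical results, so the string form is a drop-in replacement that avoids calling a Python function once per group:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
groups = ["A%04d" % i for i in range(1000)]
df = pd.DataFrame({
    'A': rng.choice(groups, size=50_000),
    'B': rng.standard_normal(50_000),
})

# Cythonized path: pandas recognizes the 'max' string.
fast = df.groupby('A')['B'].transform('max')

# Python-level path: the lambda is invoked once per group.
slow = df.groupby('A')['B'].transform(lambda x: x.max())

assert fast.equals(slow)
```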