I have a three-dimensional array (np.ndarray), most of which is zero. I now want to sum it over the first dimension, but this is slow. I have looked into csr_matrix, but csr does not support 3-dimensional arrays. Is there a faster way to sum an almost-sparse nd array? Below is an excerpt of my current code.
Related question: sparse 3d matrix/array in Python? (is a home-made sparse ndarray class overkill?)
r = np.array([[[1, 0, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 1, 0]],
              [[0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 2, 0]],
              [[0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]],
              [[0, 0, 0, 1],
               [0, 0, 0, 0],
               [0, 0, 0, 0]]], dtype=int)
np.sum(r,axis=0)
Out[35]:
array([[1, 2, 0, 1],
       [1, 0, 0, 1],
       [0, 0, 3, 0]])
Edit
Following hpaulj's answer below, I did some more timing tests, see below. It seems that reshaping gives no great benefit to the sum, while converting to csr_matrix and back to numpy kills performance. I am still thinking about using the indices directly (called rand_persons, rand_articles and rand_days below), since in my original problem I build the big ndarray from those indices.
from timeit import timeit
from scipy.sparse import csr_matrix
import numpy as np
def create_test_data():
    '''
    dtype = int64
    1% nonzero, 1000x1000x100: 1.3 s,
    1% nonzero, 10000x1000x100: 13.3 s
    0.1% nonzero, 10000x1000x100: 2.7 s
    1ppm nonzero, 10000x1000x100: 0.007 s
    '''
    global purchases
    N_persons = 10000
    N_articles = 1000
    N_days = 100
    purchases = np.zeros(shape=(N_days, N_persons, N_articles), dtype=int)
    N_elements = N_persons * N_articles * N_days
    # N_elements / 1e6 is a float; np.random.choice needs an integer size.
    n_nonzero = N_elements // 10**6
    rand_persons = np.random.choice(a=range(N_persons), size=n_nonzero, replace=True)
    rand_articles = np.random.choice(a=range(N_articles), size=n_nonzero, replace=True)
    rand_days = np.random.choice(a=range(N_days), size=n_nonzero, replace=True)
    for (i, j, k) in zip(rand_persons, rand_articles, rand_days):
        purchases[k, i, j] += 1
def sum_over_first_dim_A():
    '''
    0.1% nonzero, 10000x1000x99: 1.57s (average over 10)
    1ppm nonzero, 10000x1000x99: 1.70s (average over 10)
    '''
    global purchases
    d = purchases[:99, :, :]
    np.sum(d, axis=0)
def sum_over_first_dim_B():
    '''
    0.1% nonzero, 10000x1000x99: 1.55s (average over 10)
    1ppm nonzero, 10000x1000x99: 1.37s (average over 10)
    '''
    global purchases
    d = purchases[:99, :, :]
    (N_days, N_persons, N_articles) = d.shape
    d.reshape(N_days, -1).sum(0).reshape(N_persons, N_articles)
def sum_over_first_dim_C():
    '''
    0.1% nonzero, 10000x1000x99: 7.54s (average over 10)
    1ppm nonzero, 10000x1000x99: 7.44s (average over 10)
    '''
    global purchases
    d = purchases[:99, :, :]
    (N_days, N_persons, N_articles) = d.shape
    r = csr_matrix(d.reshape(N_days, -1))
    t = r.sum(axis=0)
    np.reshape(t, newshape=(N_persons, N_articles))
if __name__ == '__main__':
    print(timeit(create_test_data, number=10))
    print(timeit(sum_over_first_dim_A, number=10))
    print(timeit(sum_over_first_dim_B, number=10))
    print(timeit(sum_over_first_dim_C, number=10))
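As a variant of the index-based idea (my sketch with made-up small sizes, not part of the original benchmark): the triples can be packed into a single 2-D `csr_matrix` via COO construction, which sums duplicate entries automatically, so the sum over days becomes a cheap sparse column sum with no dense 3-D array and no `+=` loop.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical sizes; the original uses 10000x1000x100.
N_persons, N_articles, N_days = 100, 50, 10
n = 300
rng = np.random.default_rng(1)
rand_persons = rng.integers(0, N_persons, size=n)
rand_articles = rng.integers(0, N_articles, size=n)
rand_days = rng.integers(0, N_days, size=n)

# Flatten (person, article) into one column axis; COO construction sums
# duplicate (row, col) entries automatically.
cols = rand_persons * N_articles + rand_articles
m = coo_matrix((np.ones(n, dtype=int), (rand_days, cols)),
               shape=(N_days, N_persons * N_articles)).tocsr()

# Summing over days is a sparse column sum; reshape back to 2-D.
totals = np.asarray(m.sum(axis=0)).reshape(N_persons, N_articles)

# Dense reference for comparison.
ref = np.zeros((N_persons, N_articles), dtype=int)
np.add.at(ref, (rand_persons, rand_articles), 1)
assert np.array_equal(totals, ref)
```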
Edit 2
I have now found a faster way to do the sum: I build a numpy array of sparse matrices. However, there is still some time spent initially creating these matrices, which I currently do with a loop. Is there a way to speed that up?
def create_test_data():
    [ ... ]
    '''
    0.1% nonzero, 10000x1000x100: 2.1 s
    1ppm nonzero, 10000x1000x100: 0.45 s
    '''
    global sp_purchases
    # requires: from scipy.sparse import lil_matrix
    sp_purchases = np.empty(N_days, dtype=lil_matrix)
    for i in range(N_days):
        sp_purchases[i] = lil_matrix((N_persons, N_articles))
    for (i, j, k) in zip(rand_persons, rand_articles, rand_days):
        sp_purchases[k][i, j] += 1
def sum_over_first_dim_D():
    '''
    0.1% nonzero, 10000x1000x99: 0.47s (average over 10)
    1ppm nonzero, 10000x1000x99: 0.41s (average over 10)
    '''
    global sp_purchases
    d = sp_purchases[:99]
    np.sum(d)
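One way the lil_matrix loop might be sped up (a sketch under assumed small sizes, not the original code): build each day's matrix in one shot with `coo_matrix`, which sums duplicate (person, article) pairs during construction, instead of incrementing lil entries one by one.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical small sizes for illustration.
N_persons, N_articles, N_days = 100, 50, 10
n = 300
rng = np.random.default_rng(2)
rand_persons = rng.integers(0, N_persons, size=n)
rand_articles = rng.integers(0, N_articles, size=n)
rand_days = rng.integers(0, N_days, size=n)

# One csr_matrix per day; COO construction sums duplicate
# (person, article) entries, so no += loop is needed.
sp_purchases = np.empty(N_days, dtype=object)
for day in range(N_days):
    mask = rand_days == day
    sp_purchases[day] = coo_matrix(
        (np.ones(mask.sum(), dtype=int),
         (rand_persons[mask], rand_articles[mask])),
        shape=(N_persons, N_articles)).tocsr()

# Summing over days stays sparse.
total = sum(sp_purchases)
```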
Answer 0 (score: 1)
You can reshape the array so that it is 2d, do the sum, and then reshape back:
r.reshape(4,-1).sum(0).reshape(3,4) # == r.sum(0)
Reshaping does not add much processing time. You could convert the 2d version to sparse and see whether that saves any time. My guess is that your array would have to be very large, and very sparse, to beat a straight numpy sum. If you have other reasons to use a sparse format it may be worthwhile, but just for this sum it is not. Test it yourself, though.
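A minimal self-check of that reshape identity on a random mostly-zero array (my sketch, not the answerer's code):

```python
import numpy as np

# Mostly-zero 3-D array with the same shape as the question's example.
rng = np.random.default_rng(3)
r = (rng.random((4, 3, 4)) < 0.2).astype(int)

direct = r.sum(0)
via_reshape = r.reshape(4, -1).sum(0).reshape(3, 4)
assert np.array_equal(direct, via_reshape)
```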
Answer 1 (score: 0)
Since your data is already in sparse format (indices and values), you can do the summation yourself. Just create an array the same size as the final summed array, loop over the indices, and accumulate each value into the right slot. The sum2d function below shows how to do this, given that you are summing over the first dimension:
import timeit
import numpy as np
n = 1000
s = 1000
inds = np.random.randint(0, n, size=(s, 3))
vals = np.random.normal(size=s)
def sum3d():
    a = np.zeros((n, n, n))
    for [i, j, k], v in zip(inds, vals):
        a[i, j, k] = v
    return a.sum(axis=0)

def sum2d():
    b = np.zeros((n, n))
    for [i, j, k], v in zip(inds, vals):
        b[j, k] += v
    return b
kwargs = dict(repeat=3, number=1)
print(min(timeit.repeat('sum3d()', 'from __main__ import sum3d', **kwargs)))
print(min(timeit.repeat('sum2d()', 'from __main__ import sum2d', **kwargs)))
assert np.allclose(sum3d(), sum2d())
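The Python-level loop in sum2d can itself be replaced by `np.add.at`, which performs unbuffered accumulation so repeated (j, k) pairs still sum correctly; a sketch under the same setup (a seeded rng is added here for reproducibility, and `sum2d_vectorized` is my invented name):

```python
import numpy as np

n, s = 1000, 1000
rng = np.random.default_rng(4)
inds = rng.integers(0, n, size=(s, 3))
vals = rng.normal(size=s)

def sum2d_loop():
    # The answer's approach: accumulate values in a Python loop.
    b = np.zeros((n, n))
    for (i, j, k), v in zip(inds, vals):
        b[j, k] += v
    return b

def sum2d_vectorized():
    # Unbuffered add: duplicate (j, k) index pairs accumulate correctly.
    b = np.zeros((n, n))
    np.add.at(b, (inds[:, 1], inds[:, 2]), vals)
    return b

assert np.allclose(sum2d_loop(), sum2d_vectorized())
```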