Evaluating the performance of a mathematical expression on big data

Date: 2017-03-16 13:35:08

Tags: python performance numpy

Given 100,000 sequences, each of length 1000, I am trying to compute, for each m in [1, 1000], the percentage of sequences for which the following expression holds -

|(mean of the first m numbers in the sequence) - 0.25| >= 0.1
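
In symbols (my restatement of the condition above, with N = 100,000 sequences, entries x_{j,i}, and epsilon = 0.1), the quantity computed for each m is

    p(m) = \frac{1}{N} \, \#\left\{\, j \;:\; \left| \frac{1}{m} \sum_{i=1}^{m} x_{j,i} - 0.25 \right| \ge \varepsilon \right\}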

How the data is created:

data = np.random.binomial(1, 0.25, (100000, 1000))

What I tried:

In the main function:
    # sequence_length = 1000; c() is the helper function defined below
    bad_sequence_percentage = []
    for l in range(0, sequence_length):
        bad_sequence_percentage.append(c(l + 1, 0.1))  # (number of examples, epsilon)

The helper function:
def c(number_of_examples, curr_epsilon):
    print("number of examples: " + str(number_of_examples))
    num_of_bad_sequences = 0

    # data and num_of_sequences (= 100,000) are globals defined in the main script
    for i in range(0, num_of_sequences):
        if abs(np.mean(data[i][0:number_of_examples]) - 0.25) >= curr_epsilon:
            num_of_bad_sequences += 1

    print(str(number_of_examples) + " : " + str(num_of_bad_sequences))

    return num_of_bad_sequences / 100000

The problem is that it takes a very long time - roughly one value of m per second.

Is there a way to change the implementation so that it takes less time?

2 answers:

Answer 0 (score: 1)

Here is a vectorized approach -

avg = data.cumsum(1)/np.arange(1,data.shape[1]+1).astype(float)
curr_epsilon = 0.1
out = np.count_nonzero(np.abs(avg - 0.25) >= curr_epsilon,axis=0)/100000.0

Steps involved:

  • Use cumsum to emulate the growing-window mean computation. For the mean part, we simply divide the cumulative sums by np.arange(1, length_of_array + 1). This forms the basis of the vectorization (see the small sketch after this list).
  • The rest is a straightforward port, with np.abs replacing abs so that the comparison is vectorized by NumPy. We then use np.count_nonzero on the comparison to get the counts.
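
As a quick sanity check of the running-mean idea, here is a minimal sketch on a tiny made-up sequence (not part of the answer's code):

    import numpy as np

    x = np.array([1, 0, 0, 1, 1], dtype=float)

    # cumsum gives the running sums; dividing by 1..n turns them into running means
    running_mean = x.cumsum() / np.arange(1, len(x) + 1)

    # slower loop formulation, for comparison
    loop_mean = np.array([x[:m].mean() for m in range(1, len(x) + 1)])

    print(np.allclose(running_mean, loop_mean))  # True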

Runtime test and verification

Approaches -

def c(number_of_examples, curr_epsilon):
    num_of_sequences = data.shape[0]
    num_of_bad_sequences = 0
    for i in range(0, num_of_sequences):
        if abs(np.mean(data[i][0:number_of_examples]) - 0.25) >= curr_epsilon:
            num_of_bad_sequences += 1
    return num_of_bad_sequences / 100000.0

def original_approach(data):
    sequence_length = data.shape[1]
    bad_sequence_percentage = []
    for l in range(0, sequence_length):
        bad_sequence_percentage.append(c(l+1, 0.1))
    return bad_sequence_percentage

def vectorized_approach(data):
    avg = data.cumsum(1)/np.arange(1,data.shape[1]+1).astype(float)
    curr_epsilon = 0.1
    out = np.count_nonzero(np.abs(avg - 0.25) >= curr_epsilon,axis=0)/100000.0
    return out

Timings

In [5]: data = np.random.binomial(1, 0.25, (1000, 1000))

In [6]: np.allclose(original_approach(data), vectorized_approach(data))
Out[6]: True

In [7]: %timeit original_approach(data)
1 loops, best of 3: 7.35 s per loop

In [8]: %timeit vectorized_approach(data)
100 loops, best of 3: 10.9 ms per loop

In [9]: 7350.0/10.9
Out[9]: 674.3119266055046

670x+ speedup!

With a bigger dataset:

In [4]: data = np.random.binomial(1, 0.25, (10000, 1000))

In [5]: %timeit original_approach(data)
1 loops, best of 3: 1min 15s per loop

In [6]: %timeit vectorized_approach(data)
10 loops, best of 3: 98.7 ms per loop

In [7]: 75000.0/98.7
Out[7]: 759.8784194528876

The speedup jumps to 750x+.

I would expect the speedup to be even better with the originally asked-for dataset np.random.binomial(1, 0.25, (100000, 1000)).

Answer 1 (score: 1)

Alternatively, one could replace the following for loop (a possible vectorized replacement is sketched after the loop)


num_of_bad_sequences = 0
for i in range(0, num_of_sequences):
    if abs(np.mean(data[i][0:number_of_examples]) - 0.25) >= curr_epsilon:
        num_of_bad_sequences += 1
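
A vectorized equivalent of this loop (a sketch, assuming data, number_of_examples and curr_epsilon are defined as in the question) could look like:

    import numpy as np

    # mean of the first `number_of_examples` entries of every sequence at once,
    # then count how many sequences violate the epsilon bound
    means = data[:, :number_of_examples].mean(axis=1)
    num_of_bad_sequences = np.count_nonzero(np.abs(means - 0.25) >= curr_epsilon)

This removes the Python-level loop over the 100,000 sequences for a single value of m; the cumsum-based approach in the first answer goes further and also removes the outer loop over m.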