Why does numpy.median perform so well?

Date: 2018-06-17 18:44:37

Tags: python numpy time-complexity

A question I was recently asked in a job interview was:

Write a data structure that supports two operations.
1. Adding a number to the structure.
2. Calculating the median.
The operations to add a number and calculate the median must have a minimum time complexity.

My implementation is fairly simple: it basically keeps the elements sorted, so adding an element costs O(log(n)) instead of O(1), while computing the median is O(1) instead of O(n * log(n)).

I also added a naive implementation, as well as one that keeps the elements in a numpy array:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from random import randint, random
import math
from time import time

class MedianList():
    # Keeps self.values sorted at all times: the insertion point is found by
    # binary search, and the median is read off directly by index.
    def __init__(self, initial_values = []):
        self.values = sorted(initial_values)
        self.size = len(initial_values)

    def add_element(self, element):
        index = self.find_pos(self.values, element)
        self.values = self.values[:index] + [element] + self.values[index:]
        self.size += 1

    def find_pos(self, values, element):
        if len(values) == 0: return 0
        index = int(len(values)/2)
        if element > values[index]: 
            return self.find_pos(values[index+1:], element) + index +  1
        if element < values[index]:
            return self.find_pos(values[:index], element)
        if element == values[index]: return index

    def median(self):
        if self.size == 0: return np.nan
        split = math.floor(self.size/2)
        if self.size % 2 == 1:
            return self.values[split]
        try:
            return (self.values[split] + self.values[split-1])/2
        except:
            print(self.values, self.size, split)

class NaiveMedianList():
    # Appends in O(1) and re-sorts the whole list every time the median is requested.
    def __init__(self, initial_values = []):
        self.values = sorted(initial_values)

    def add_element(self, element):
        self.values.append(element)

    def median(self):
        split = math.floor(len(self.values)/2)
        sorted_values = sorted(self.values)
        if len(self.values) % 2 == 1:
            return sorted_values[split]
        return (sorted_values[split] + sorted_values[split-1])/2

class NumpyMedianList():
    # Keeps the elements in an unsorted numpy array and delegates the median to np.median.
    def __init__(self, initial_values = []):
        self.values = np.array(initial_values)

    def add_element(self, element):
        self.values = np.append(self.values, element)

    def median(self):
        return np.median(self.values)

def time_performance(median_list, total_elements = 10**5):
    # Adds random elements one by one, querying the median after every insertion,
    # and records the cumulative elapsed time.
    elements = [randint(0, 100) for _ in range(total_elements)]
    times = []
    start = time()
    for element in elements:
        median_list.add_element(element)
        median_list.median()
        times.append(time() - start)
    return times

ml_times = time_performance(MedianList())
nl_times = time_performance(NaiveMedianList())
npl_times = time_performance(NumpyMedianList())
times = pd.DataFrame()
times['MedianList'] = ml_times
times['NaiveMedianList'] = nl_times
times['NumpyMedianList'] = npl_times
times.plot()
plt.show()

Here is how the implementations perform for 10^4 elements:

[plot: cumulative running times for 10^4 elements]

For 10^5 elements, the naive numpy implementation is actually faster:

[plot: cumulative running times for 10^5 elements]

My question is: how come? Even if numpy is faster by some constant factor, how can its median function scale so well if it does not keep a sorted version of the array?

1 Answer:

Answer 0 (score: 4)

We can inspect the implementation of median in the Numpy source code (source):

def median(a, axis=None, out=None, overwrite_input=False, keepdims=False):
    ...

    if overwrite_input:
        if axis is None:
            part = a.ravel()
            part.partition(kth)
        else:
            a.partition(kth, axis=axis)
            part = a
    else:
        part = partition(a, kth, axis=axis)

...

The key function is partition, which according to the docs uses introselect. As @zython commented, this is a variant of Quickselect, and it is what delivers the big performance gain: selecting the k-th smallest element does not require sorting the whole array.
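
For illustration, here is a minimal sketch of the same idea using np.partition directly (this is not numpy's actual implementation; partition_median is just a hypothetical helper): introselect places the k-th smallest element at index k in O(n) average time, so no sorted copy of the array ever has to be maintained.

import numpy as np

def partition_median(values):
    # Hypothetical helper: compute the median with np.partition instead of a full sort.
    # np.partition puts the k-th smallest element at index k in O(n) average time.
    a = np.asarray(values)
    n = a.size
    half = n // 2
    if n % 2 == 1:
        return np.partition(a, half)[half]
    # For an even count we need both middle elements, so partition on both indices at once.
    part = np.partition(a, [half - 1, half])
    return (part[half - 1] + part[half]) / 2

data = np.random.randint(0, 100, size=1001)
assert partition_median(data) == np.median(data)

This mirrors the shape of the snippet above: a single partition call per median query, rather than keeping the data sorted or sorting it from scratch each time.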