Weighted standard deviation in NumPy

Asked: 2010-03-09 23:53:36

Tags: python numpy statsmodels standard-deviation weighted

numpy.average() has a weights option, but numpy.std() does not. Does anyone have a suggestion for a workaround?

5 Answers:

Answer 0 (score: 100):

How about the following short "manual calculation"?

import math
import numpy

def weighted_avg_and_std(values, weights):
    """
    Return the weighted average and standard deviation.

    values, weights -- NumPy ndarrays with the same shape.
    """
    average = numpy.average(values, weights=weights)
    # Fast and numerically precise:
    variance = numpy.average((values - average)**2, weights=weights)
    return (average, math.sqrt(variance))
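
For example (a quick sketch with made-up numbers), it can be used like this:

values = numpy.array([1.0, 2.0, 3.0, 4.0])
weights = numpy.array([1.0, 1.0, 1.0, 5.0])
avg, std = weighted_avg_and_std(values, weights)
print(avg, std)  # the average is pulled towards 4.0 by its larger weight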

Answer 1 (score: 25):

There is a class in statsmodels that makes it easy to calculate weighted statistics: statsmodels.stats.weightstats.DescrStatsW

Assume this dataset and these weights:

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

array = np.array([1,2,1,2,1,2,1,3])
weights = np.ones_like(array)
weights[3] = 100

You initialize the class (note that at this point you have to pass the correction factor, the delta degrees of freedom):

weighted_stats = DescrStatsW(array, weights=weights, ddof=0)

Then you can calculate:

  • .mean the weighted mean:

    >>> weighted_stats.mean      
    1.97196261682243
    
  • .std the weighted standard deviation:

    >>> weighted_stats.std       
    0.21434289609681711
    
  • .var the weighted variance:

    >>> weighted_stats.var       
    0.045942877107170932
    
  • .std_mean the standard error of the weighted mean:

    >>> weighted_stats.std_mean  
    0.020818822467555047
    

    In case you are interested in the relation between the standard error and the standard deviation: the standard error is (for ddof == 0) calculated as the weighted standard deviation divided by the square root of the sum of the weights minus 1 (corresponding source for statsmodels version 0.9 on GitHub):

    standard_error = standard_deviation / sqrt(sum(weights) - 1)
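
    As a quick sanity check (a sketch reusing weighted_stats and weights from above), this relation can be reproduced directly:

    >>> import numpy as np
    >>> np.isclose(weighted_stats.std / np.sqrt(weights.sum() - 1),
    ...            weighted_stats.std_mean)
    True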
    

Answer 2 (score: 6):

There does not seem to be such a function in numpy/scipy yet, but there is a ticket proposing this added functionality. Included there is Statistics.py, which implements weighted standard deviations.

Answer 3 (score: 2):

Here is one more option:

np.sqrt(np.cov(values, aweights=weights))
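
Note that by default np.cov applies an unbiased-style normalization based on the weights, so the result differs slightly from the ddof=0 calculation above; passing ddof=0 (or bias=True) makes them agree. A minimal sketch, reusing the array and weights from the statsmodels answer:

import numpy as np

values = np.array([1, 2, 1, 2, 1, 2, 1, 3], dtype=float)
weights = np.ones_like(values)
weights[3] = 100

# With ddof=0, np.cov normalizes by the sum of the weights, matching the
# biased weighted variance computed "manually" in the first answer (~0.2143).
print(np.sqrt(np.cov(values, aweights=weights, ddof=0)))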

Answer 4 (score: 1):

gaborous proposed a very good example:

import pandas as pd
import numpy as np

# X is the dataset, as a Pandas DataFrame; weights is a 1-D array of weights
# Compute the weighted sample mean (fast, efficient and precise)
mean = np.ma.average(X, axis=0, weights=weights)

# Convert to a Pandas Series (purely aesthetic and more ergonomic;
# no difference in the computed values)
mean = pd.Series(mean, index=list(X.keys()))
xm = X - mean       # xm = X's deviation from the mean
xm = xm.fillna(0)   # fill NaN with 0 (a variance of 0 is just void, but it
                    # keeps the other covariance values computed correctly)
# Compute the unbiased weighted sample covariance
sigma2 = 1. / (weights.sum() - 1) * xm.mul(weights, axis=0).T.dot(xm)

Correct equation for weighted unbiased sample covariance, URL (version: 2016-06-28)
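
To get per-column weighted standard deviations out of this covariance matrix, one can take the square root of its diagonal (a small sketch building on the sigma2 computed above):

weighted_std = pd.Series(np.sqrt(np.diag(sigma2)), index=sigma2.columns)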