In Python

Time: 2017-09-17 02:55:18

Tags: python algorithm scipy statistics correlation

My data is a set of n observed pairs together with their frequencies, i.e. each pair (x_i, y_i) comes with a count k_i, the number of times (x_i, y_i) was observed. Ideally, I would like to compute Kendall's tau and Spearman's rho for the set containing all copies of these pairs, which consists of k_1 + k_2 + ... + k_n pairs. The problem is that k_1 + k_2 + ... + k_n, the total number of observations, is huge, and such a data structure does not fit in memory.

Naturally, I thought of assigning the i-th pair its frequency, k_i / (k_1 + k_2 + ... + k_n), as a weight and computing the rank correlation of the weighted set, but I could not find any tools for that. In the weighted variants of rank correlation I have come across (for example, scipy.stats.weightedtau), the weights represent the importance of ranks rather than of pairs, which is irrelevant to my cause. Pearson's r seems to have exactly the weighting option I need, but it does not serve my purpose, since x and y are nowhere near linearly related. I wonder whether I have missed some generalized notion of correlation for weighted data points.
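As an aside, the weighting option that exists for Pearson's r is NumPy's frequency-weight support in `np.cov`. A minimal check, with made-up numbers, that the frequency-weighted covariance reproduces Pearson's r computed on the fully expanded data:

```python
import numpy as np

# Made-up pairs and their frequencies.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 4.0, 3.0])
k = np.array([3, 1, 2, 5])                 # frequency of each (x, y) pair

# Frequency-weighted covariance matrix, then Pearson's r from it.
cov = np.cov(np.vstack([x, y]), fweights=k)
r_weighted = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

# Pearson's r on the data set with every pair actually repeated k_i times.
r_expanded = np.corrcoef(np.repeat(x, k), np.repeat(y, k))[0, 1]

print(r_weighted, r_expanded)              # the two values coincide
```

The normalization of the covariance cancels in the ratio, so the weighted and expanded computations agree exactly.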

The only idea I have had so far is to scale k_1, k_2, ..., k_n down by some common factor c, so that the scaled number of copies of the i-th pair is [k_i / c] (here [.] is the rounding operator, as we need an integer number of copies of each pair). By choosing c such that [k_1 / c] + [k_2 / c] + ... + [k_n / c] pairs fit into memory, we could then compute the correlation coefficients tau and rho on the resulting set. However, k_i and k_j may differ by many orders of magnitude, so c may be significantly large for some of the k_i, and therefore rounding k_i / c may cause a loss of information.
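To make the information-loss concern concrete, here is a small illustration (with made-up counts) of what the rounding does when the k_i span many orders of magnitude:

```python
import numpy as np

# Counts spanning nine orders of magnitude (made-up numbers).
k = np.array([1_000_000_000, 5, 3, 1])
c = k.sum() / 1000                       # aim at roughly 1000 copies in memory
scaled = np.rint(k / c).astype(int)      # [k_i / c]

print(scaled)                            # the three small counts collapse to 0
```

The three rare pairs vanish entirely from the scaled data set, so any correlation structure they carried is lost.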

UPD: Spearman's rho and the corresponding p-values can be computed on a data set with specified frequency weights as follows:

import numpy as np
from scipy import stats


def frequency_pearsonr(data, frequencies):
    """
    Calculates Pearson's r between columns (variables), given the
    frequencies of the rows (observations).

    :param data: 2-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 2-D array with pairwise correlations,
        2-D array with pairwise p-values
    """
    df = frequencies.sum() - 2
    Sigma = np.cov(data.T, fweights=frequencies)
    sigma_diag = Sigma.diagonal()
    Sigma_diag_pairwise_products = np.multiply.outer(sigma_diag, sigma_diag)
    # Calculate matrix with pairwise correlations.
    R = Sigma / np.sqrt(Sigma_diag_pairwise_products)
    # Calculate matrix with pairwise t-statistics. Main diagonal should
    # get 1 / 0 = inf.
    with np.errstate(divide='ignore'):
        T = R / np.sqrt((1 - R * R) / df)
    # Calculate matrix with pairwise p-values.
    P = 2 * stats.t.sf(np.abs(T), df)

    return R, P


def frequency_rank(data, frequencies):
    """
    Ranks 1-D data array, given the frequency of each value. Same
    values get same "averaged" ranks. Array with ranks is shaped to
    match the input data array.

    :param data: 1-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 1-D array with ranks
    """
    s = 0
    ranks = np.empty_like(data)
    # Compute rank for each unique value.
    for value in sorted(set(data)):
        index_grid = np.ix_(data == value)
        # Find total frequency of the value.
        frequency = frequencies[index_grid].sum()
        ranks[index_grid] = s + 0.5 * (frequency + 1)
        s += frequency

    return ranks


def frequency_spearmanrho(data, frequencies):
    """
    Calculates Spearman's rho between columns (variables), given the
    frequencies of the rows (observations).

    :param data: 2-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 2-D array with pairwise correlations,
        2-D array with pairwise p-values
    """
    # Rank the columns.
    ranks = np.empty_like(data)
    for i, data_column in enumerate(data.T):
        ranks[:, i] = frequency_rank(data_column, frequencies)
    # Compute Pearson's r correlation and p-values on the ranks.
    return frequency_pearsonr(ranks, frequencies)


# Columns are variables and rows are observations, whose frequencies
# are specified.
data_col1 = np.array([1, 0, 1, 0, 1])
data_col2 = np.array([.67, .25, .75, .2, .6])
data_col3 = np.array([.1, .3, .8, .3, .2])
data = np.array([data_col1, data_col2, data_col3]).T
frequencies = np.array([2, 4, 1, 3, 2])

# Same data, but with observations (rows) actually repeated instead of
# their frequencies being specified.
expanded_data_col1 = np.array([1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1])
expanded_data_col2 = np.array([.67, .67, .25, .25, .25, .25, .75, .2, .2, .2, .6, .6])
expanded_data_col3 = np.array([.1, .1, .3, .3, .3, .3, .8, .3, .3, .3, .2, .2])
expanded_data = np.array([expanded_data_col1, expanded_data_col2, expanded_data_col3]).T

# Compute Spearman's rho for data in both formats, and compare.
frequency_Rho, frequency_P = frequency_spearmanrho(data, frequencies)
Rho, P = stats.spearmanr(expanded_data)
print(frequency_Rho - Rho)
print(frequency_P - P)

The particular example above shows that both approaches produce the same correlations and the same p-values:

[[  0.00000000e+00   0.00000000e+00   0.00000000e+00]
 [  1.11022302e-16   0.00000000e+00  -5.55111512e-17]
 [  0.00000000e+00  -5.55111512e-17   0.00000000e+00]]
[[  0.00000000e+00  -1.35525272e-19   4.16333634e-17]
 [ -9.21571847e-19   0.00000000e+00  -5.55111512e-17]
 [  4.16333634e-17  -5.55111512e-17   0.00000000e+00]]
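As an extra sanity check (not in the original post), the "averaged rank" formula that frequency_rank relies on, s + 0.5 * (frequency + 1), agrees with scipy.stats.rankdata(method='average') applied to the expanded data:

```python
import numpy as np
from scipy import stats

data = np.array([.67, .25, .75, .2, .6])
frequencies = np.array([2, 4, 1, 3, 2])

# Frequency-aware ranking: each unique value gets the averaged rank
# s + (f + 1) / 2, where s is the total frequency of all smaller values
# and f is the total frequency of the value itself.
s = 0
freq_ranks = np.empty_like(data)
for value in np.sort(np.unique(data)):
    mask = data == value
    f = frequencies[mask].sum()
    freq_ranks[mask] = s + 0.5 * (f + 1)
    s += f

# The same ranks, read off from the expanded data set.
expanded_ranks = stats.rankdata(np.repeat(data, frequencies), method='average')

print(freq_ranks)  # 10.5, 5.5, 12, 2, 8.5
```

Repeating each frequency-based rank k_i times reproduces the ranks of the expanded array exactly.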

1 Answer:

Answer 0: (score: 0)

The approach for calculating Kendall's tau suggested by Paul works. You do not have to assign the indices of a sorted array as ranks; the unsorted indices work equally well (as the weighted tau example shows). The weights do not have to be normalized either.
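A quick demonstration of the normalization claim, using the answer's data (the division by 12 is an arbitrary normalization): with rank=False the weigher is applied to the element indices in the given order, and rescaling all weights by a constant leaves the correlation unchanged, since with additive=False each pair's weight is the product of two element weights and the constant cancels in tau's normalization:

```python
from scipy import stats

x = [1, 0, 1, 0, 1]
y = [.67, .25, .75, .2, .6]
k = [2, 4, 1, 3, 2]                    # occurrence counts used as weights

# Raw counts as weights.
t1 = stats.weightedtau(x, y, rank=False,
                       weigher=lambda r: k[r], additive=False)
# The same weights normalized to sum to 1.
t2 = stats.weightedtau(x, y, rank=False,
                       weigher=lambda r: k[r] / 12, additive=False)

print(t1.correlation, t2.correlation)  # identical
```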

Regular (unweighted) Kendall's tau (on the "expanded" data set):

stats.kendalltau([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
                 [.25, .25, .25, .25, .2, .2, .2, .667, .667, .75, .6, .6])
KendalltauResult(correlation=0.7977240352174656, pvalue=0.0034446936330652677)

Weighted Kendall's tau (on the data set with occurrence counts as weights):

stats.weightedtau([1, 0, 1, 0, 1],
                  [.667, .25, .75, .2, .6],
                  rank=False,
                  weigher=lambda r: [2, 4, 1, 3, 2][r],
                  additive=False)
WeightedTauResult(correlation=0.7977240352174656, pvalue=nan)

For now, though, due to a peculiarity of the weightedtau implementation, the p-value is never computed. We could approximate the p-value with the count-scaling trick proposed in the question, but I would greatly appreciate other approaches. Making the algorithm's behavior depend on the amount of available memory strikes me as painful.
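For completeness, the count-scaling approximation could be sketched as follows. approx_kendall_tau is a hypothetical helper, not part of any library, and max_copies is a made-up memory budget; the rounding step loses information, exactly as discussed in the question:

```python
import numpy as np
from scipy import stats

def approx_kendall_tau(x, y, k, max_copies=100_000):
    """Approximate Kendall's tau and its p-value by scaling the counts down."""
    k = np.asarray(k, dtype=float)
    c = max(1.0, k.sum() / max_copies)   # common scaling factor
    counts = np.rint(k / c).astype(int)  # [k_i / c]; rounding loses information
    keep = counts > 0                    # pairs rounded down to zero disappear
    x_exp = np.repeat(np.asarray(x)[keep], counts[keep])
    y_exp = np.repeat(np.asarray(y)[keep], counts[keep])
    return stats.kendalltau(x_exp, y_exp)

# With counts proportional to the question's frequencies and a tiny budget,
# the scaled set reproduces the expanded 12-pair data set exactly.
result = approx_kendall_tau([1, 0, 1, 0, 1],
                            [.67, .25, .75, .2, .6],
                            [2000, 4000, 1000, 3000, 2000],
                            max_copies=12)
print(result.correlation, result.pvalue)
```

Unlike weightedtau, the ordinary kendalltau on the scaled expansion returns a p-value, at the cost of the rounding error the question describes.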