PySpark ChiSqSelector p-values and test statistics

Date: 2018-06-21 15:11:54

Tags: python pyspark chi-squared

I'm using PySpark's pyspark.ml.feature.ChiSqSelector for feature selection. apps is a column of sparse vectors indicating whether a given name (machine) has a particular application installed. There are 21,615 possible applications in total.

After fitting the ChiSqSelector and transforming new data, I'm confused about what selected_apps now represents. The documentation isn't helpful here. I have a few questions:

1) How do I get the chi-squared test statistics and p-values associated with the 21,615 input applications? They don't appear to be directly accessible from dir(selector).

2) Why do different rows show different applications in selected_apps? My intuition is that the machine in the second row doesn't have apps 0, 1, 2, etc. installed, so that row only shows which of the top 50 apps (ranked by p-value) it does have. This API seems to work quite differently from scikit-learn's SelectKBest(chi2), which simply returns the k most important features regardless of whether a particular machine has a "1" for that feature.

3) How can I override the default numTopFeatures=50 setting? This mainly relates to question 1) and to using only p-values for feature selection. There doesn't seem to be a numTopFeatures=-1 style option to effectively "forget" that parameter.

>>> selector = ChiSqSelector(
...     featuresCol='apps',
...     outputCol='selected_apps',
...     labelCol='multiple_event',
...     fpr=0.05
... )
>>> result = selector.fit(df).transform(df)                                                                
>>> result.show()
+---------------+-----------+--------------+--------------------+--------------------+
|           name|total_event|multiple_event|                apps|       selected_apps|
+---------------+-----------+--------------+--------------------+--------------------+
|000000000000021|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000022|          0|             0|(21615,[3,6,7,8,9...|(50,[3,6,7,8,9,11...|
|000000000000023|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000024|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000025|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000026|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000027|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000028|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000029|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000030|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000031|          0|             0|(21615,[0,1,2,3,4...|(50,[0,1,2,3,4,6,...|
|000000000000032|          0|             0|(21615,[6,7,8,9,1...|(50,[6,7,8,9,13,1...|
|000000000000033|          0|             0|(21615,[0,1,2,3,4...|(50,[0,1,2,3,4,6,...|
|000000000000034|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000035|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000036|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000037|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000038|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000039|          0|             0|(21615,[0,1,2,3,6...|(50,[0,1,2,3,6,7,...|
|000000000000040|          0|             0|(21615,[0,1,2,3,4...|(50,[0,1,2,3,4,6,...|
+---------------+-----------+--------------+--------------------+--------------------+

1 Answer:

Answer 0 (score: 1)

I figured it out. The solution is as follows:

from pyspark.mllib.linalg import Vectors
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.stat import Statistics

# Convert each row to a LabeledPoint, the main input structure for
# most of mllib. Note that densifying 21,615-dimensional vectors is
# memory-heavy for large datasets.
def to_labeled_point(row):
    return LabeledPoint(row[0], Vectors.dense(row[1].toArray()))

obs = (
    df
    .select('multiple_event', 'apps')
    .rdd
    .map(to_labeled_point)
)

# The contingency table is constructed from an RDD of LabeledPoint and used to conduct
# the independence test. Returns an array containing the ChiSquaredTestResult for every feature
# against the label.
feature_test_results = Statistics.chiSqTest(obs)

data = []

for idx, result in enumerate(feature_test_results):
    row = {
        'feature_index': idx,
        'p_value': result.pValue,
        'statistic': result.statistic,
        'degrees_of_freedom': result.degreesOfFreedom
    }
    data.append(row)