Extracting features with CountVectorizer

Time: 2017-09-05 21:37:13

Tags: apache-spark pyspark spark-dataframe pyspark-sql

I have the following DataFrame:

+------------------------------------------------+
|filtered                                        |
+------------------------------------------------+
|[human, interface, computer]                    |
|[survey, user, computer, system, response, time]|
|[eps, user, interface, system]                  |
|[system, human, system, eps]                    |
|[user, response, time]                          |
|[trees]                                         |
|[graph, trees]                                  |
|[graph, minors, trees]                          |
|[graph, minors, survey]                         |
+------------------------------------------------+

After running CountVectorizer on the column above, I get the following output:

+------------------------------------------------+---------------------------------------------+
|filtered                                        |features                                     |
+------------------------------------------------+---------------------------------------------+
|[human, interface, computer]                    |(12,[4,7,9],[1.0,1.0,1.0])                   |
|[survey, user, computer, system, response, time]|(12,[0,2,6,7,8,11],[1.0,1.0,1.0,1.0,1.0,1.0])|
|[eps, user, interface, system]                  |(12,[0,2,4,10],[1.0,1.0,1.0,1.0])            |
|[system, human, system, eps]                    |(12,[0,9,10],[2.0,1.0,1.0])                  |
|[user, response, time]                          |(12,[2,8,11],[1.0,1.0,1.0])                  |
|[trees]                                         |(12,[1],[1.0])                               |
|[graph, trees]                                  |(12,[1,3],[1.0,1.0])                         |
|[graph, minors, trees]                          |(12,[1,3,5],[1.0,1.0,1.0])                   |
|[graph, minors, survey]                         |(12,[3,5,6],[1.0,1.0,1.0])                   |
+------------------------------------------------+---------------------------------------------+
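(For reference, a minimal sketch of how this output can be produced. The names spark, df, and df2 and the session setup are assumptions, not from the question; the exact vocabulary indices can differ between runs, since CountVectorizer orders terms by corpus frequency with arbitrary tie-breaking.)

from pyspark.sql import SparkSession
from pyspark.ml.feature import CountVectorizer

spark = SparkSession.builder.getOrCreate()

# Rebuild the input DataFrame from the question
df = spark.createDataFrame([
    (["human", "interface", "computer"],),
    (["survey", "user", "computer", "system", "response", "time"],),
    (["eps", "user", "interface", "system"],),
    (["system", "human", "system", "eps"],),
    (["user", "response", "time"],),
    (["trees"],),
    (["graph", "trees"],),
    (["graph", "minors", "trees"],),
    (["graph", "minors", "survey"],),
], ["filtered"])

# Learn the vocabulary and append the sparse count vectors
cv = CountVectorizer(inputCol="filtered", outputCol="features")
df2 = cv.fit(df).transform(df)
df2.show(truncate=False)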

Now I want to run a map function on the features column and transform it into something like this:
+------------------------------------------------+--------------------------------------------------------+
|features                                        |transformed                                             |
+------------------------------------------------+--------------------------------------------------------+
|(12,[4,7,9],[1.0,1.0,1.0])                      |["1 4 1", "1 7 1", "1 9 1"]                             |
|(12,[0,2,6,7,8,11],[1.0,1.0,1.0,1.0,1.0,1.0])   |["2 0 1", "2 2 1", "2 6 1", "2 7 1", "2 8 1", "2 11 1"] |
|(12,[0,2,4,10],[1.0,1.0,1.0,1.0])               |["3 0 1", "3 2 1", "3 4 1", "3 10 1"]                   |
[TRUNCATED]

The way to transform the features is to take the middle array out of each feature and create sub-arrays from it. For example, in the first row of the features column we have:

(12,[4,7,9],[1.0,1.0,1.0])

Now take its middle array [4,7,9], pair each element with the corresponding frequency from the third array, which is [1.0,1.0,1.0], and prepend "1" because it is row 1, giving the following output:

["1 4 1", "1 7 1", "1 9 1"]

In general it looks like this:

["RowNumber MiddleFeatEl CorrespondingFreq", ....]

By applying a map I am not able to extract the individual arrays (the indices or the frequency list) separately from the features column generated by CountVectorizer.

Here is the map code:

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def corpus_create(feats):
    return feats[1]  # Here I want to get [4, 7, 9] instead of a single feature score

corpus_udf = udf(lambda feats: corpus_create(feats), StringType())
df3 = df.withColumn("corpus", corpus_udf("features"))

1 Answer:

Answer 0 (score: 1):

Row numbers are basically meaningless in Spark SQL, but if you don't mind:

def f(x):
    row, i = x  # x is a (Row, index) pair from zipWithIndex; i is 0-based
    jvs = (
        # SparseVector: pair each active index with its value
        zip(row.features.indices, row.features.values)
        if hasattr(row.features, "indices")
        # DenseVector: enumerate every position
        else enumerate(row.features.toArray()))

    # Keep only non-zero entries; use i + 1 and int(v) here if you want
    # the 1-based row numbers and integer counts shown in the question
    s = ["{} {} {}".format(i, j, v) for j, v in jvs if v]
    return row + (s, )


df.rdd.zipWithIndex().map(f).toDF(df.columns + ["transformed"])