Computing tf-idf of first/last names in PySpark

Asked: 2016-02-09 09:25:41

Tags: apache-spark pyspark tf-idf

I have the following RDD (a sample):

names_rdd.take(3)
[u'Daryll Dickenson', u'Dat Naijaboi', u'Duc Dung Lam']

I am trying to compute tf-idf:

from pyspark.mllib.feature import HashingTF, IDF
hashingTF = HashingTF()
tf_names = hashingTF.transform(names_rdd)
tf_names.cache()
idf_names = IDF().fit(tf_names)
tfidf_names = idf_names.transform(tf_names)

I don't understand why tf_names.take(3) gives these results:

[SparseVector(1048576, {60275: 1.0, 134386: 1.0, 145380: 1.0, 274465: 1.0, 441832: 1.0, 579064: 1.0, 590058: 1.0, 664173: 2.0, 812399: 2.0, 845381: 2.0, 886510: 1.0, 897504: 1.0, 1045730: 1.0}),
 SparseVector(1048576, {208501: 1.0, 274465: 1.0, 441832: 2.0, 515947: 1.0, 537935: 1.0, 845381: 1.0, 886510: 1.0, 897504: 3.0, 971619: 1.0}),
 SparseVector(1048576, {274465: 2.0, 282612: 2.0, 293606: 1.0, 389709: 1.0, 738284: 1.0, 812399: 1.0, 845381: 2.0, 897504: 1.0, 1045730: 1.0})]

Shouldn't each row have only 2 values, for example:

[SparseVector(1048576, {60275: 1.0, 134386: 1.0}),
 SparseVector(1048576, {208501: 1.0, 274465: 1.0}),
 SparseVector(1048576, {274365: 2.0, 282612: 2.0})]
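What is happening here: `HashingTF.transform` treats each input document as an iterable of terms. A plain Python string is iterable over its *characters*, so every character of the name gets hashed into its own bucket, and repeated letters produce counts greater than 1. A minimal pure-Python sketch of the hashing trick (illustrative only; Spark uses its own hash function, so the bucket indices will differ) shows the difference between passing a raw string and a list of words:

```python
from collections import defaultdict

def hashing_tf(terms, num_features=1048576):
    """Toy hashing-trick term frequency: hash each term into a bucket
    and count occurrences (illustrative; not Spark's actual hash)."""
    counts = defaultdict(float)
    for term in terms:
        counts[hash(term) % num_features] += 1.0
    return dict(counts)

name = u'Daryll Dickenson'

# Passing the raw string iterates over characters: one bucket per
# distinct character, and repeated letters give counts > 1.
tf_chars = hashing_tf(name)

# Passing the split name hashes whole words: one bucket per word
# (2 here, barring an unlikely hash collision).
tf_words = hashing_tf(name.split(' '))
```

This matches the observed output: each SparseVector has roughly one entry per distinct character in the name, not one per word.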

1 Answer:

Answer 0 (score: 0)

My mistake was passing whole strings; each row needs to be split into a list of words first. Like this:

 def split_name(name):
     return [word.strip() for word in name.split(' ')]

 names = names_rdd.map(split_name)

 hashingTF = HashingTF()
 tf_names = hashingTF.transform(names)  # transform the tokenized RDD, not names_rdd
 ...
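For intuition on what the subsequent `IDF().fit` / `transform` steps compute: Spark MLlib's IDF weight is idf(t) = log((m + 1) / (df(t) + 1)), where m is the number of documents and df(t) is the number of documents containing term t. A small pure-Python sketch (not Spark code) over the three tokenized names from the question:

```python
import math
from collections import Counter

# The three sample names, tokenized as the answer describes
docs = [[u'Daryll', u'Dickenson'],
        [u'Dat', u'Naijaboi'],
        [u'Duc', u'Dung', u'Lam']]

m = len(docs)

# Document frequency: in how many documents does each term appear?
df = Counter(term for doc in docs for term in set(doc))

# Spark MLlib's IDF formula: log((m + 1) / (df(t) + 1))
idf = {t: math.log((m + 1.0) / (n + 1.0)) for t, n in df.items()}

# tf-idf for the first document: term count times idf weight
tf = Counter(docs[0])
tfidf = {t: c * idf[t] for t, c in tf.items()}
```

Since every word appears in exactly one of the three documents, each term here gets the same weight log(4/2); with a larger corpus, common first names would be down-weighted relative to rare ones.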