Apache Spark TF-IDF using Python

Asked: 2016-04-02 16:55:38

Tags: python apache-spark pyspark apache-spark-mllib

The Spark documentation describes the HashingTF feature, but I'm not sure what kind of input its transform function expects: http://spark.apache.org/docs/latest/mllib-feature-extraction.html#tf-idf

I tried running the tutorial code:

from pyspark import SparkContext
from pyspark.mllib.feature import HashingTF

sc = SparkContext()

# Load documents (one per line) and tokenize each line into a list of words.
documents = sc.textFile("...").map(lambda line: line.split(" "))

hashingTF = HashingTF()
tf = hashingTF.transform(documents)

But I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/salloumm/spark-1.6.0-bin-hadoop2.6/python/pyspark/ml/pipeline.py", line 114, in transform
    return self._transform(dataset)
  File "/Users/salloumm/spark-1.6.0-bin-hadoop2.6/python/pyspark/ml/wrapper.py", line 148, in _transform
    return DataFrame(self._java_obj.transform(dataset._jdf), dataset.sql_ctx)
AttributeError: 'list' object has no attribute '_jdf'

1 Answer:

Answer 0 (score: 3)

Judging from the error you show, it is clear that you are not actually following the tutorial or running the code included in your question.

This error is the result of using from pyspark.ml.feature import HashingTF instead of from pyspark.mllib.feature import HashingTF. The ml version of HashingTF operates on DataFrames (hence the failed dataset._jdf lookup in the traceback), while the mllib version accepts an RDD of token lists, which is exactly what the tutorial code produces. Just clean up your environment and make sure you use the correct import.
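For intuition about what mllib's HashingTF does with those token lists, here is a minimal pure-Python sketch of the hashing trick it is based on (this is an illustration, not Spark code, and the bucket indices will not match Spark's own hash function): each term is hashed to a fixed-size index space and term frequencies are accumulated per bucket, so no vocabulary dictionary needs to be built.

```python
from collections import defaultdict

def hashing_tf(tokens, num_features=1 << 20):
    """Sketch of the hashing trick: map each term to a bucket
    via hash(term) % num_features and count occurrences."""
    counts = defaultdict(int)
    for term in tokens:
        counts[hash(term) % num_features] += 1
    # Return a sparse {index: count} mapping, analogous to a SparseVector.
    return dict(counts)

# One "document" as a list of tokens, as produced by line.split(" "):
vec = hashing_tf(["spark", "is", "fast", "spark"], num_features=100)
```

Note that distinct terms can collide into the same bucket when num_features is small; Spark's default of 2^20 features makes collisions rare in practice.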