TypeError: object of type 'PipelinedRDD' has no len()

Date: 2016-03-21 07:02:52

Tags: pyspark

Hi everyone! I hit the error below when running Python code on Spark.

Code:

main.py

from pyspark import SparkContext
from wordSeg import wordSeg
from sparkClusterAlgorithm import sparkClusterAlgorithm

sc = SparkContext(appName="newsCluster")
# ship the helper modules to the worker nodes
sc.addPyFile("/home/warrior/gitDir/pysparkCode/clusterNews/wordSeg.py")
sc.addPyFile("/home/warrior/gitDir/pysparkCode/clusterNews/sparkClusterAlgorithm.py")
wordseg = wordSeg()
clustermanage = sparkClusterAlgorithm()
# (path, content) pairs for every file under the directory
files = sc.wholeTextFiles("hdfs://warrior:9000/testData/skynews")
file_list = files.map(lambda item: item[1])
file_wc_dict_list = file_list.map(lambda file_content: wordseg.GetWordCountTable(file_content))
file_wc_dict_list.persist()
all_word_dict = wordseg.updateAllWordList(file_wc_dict_list)

wordSeg.py

def updateAllWordList(self, newWordCountDictList):
    '''
        description: take the per-file word-count dicts and update the all-word dict
        input:
            newWordCountDictList: list of per-file word-count dicts
        output:
            all_word_dict
    '''
    n = len(newWordCountDictList)
    all_word_list = []
    all_word_dict = {}
    for i in range(0, n):
        # merge the keys of each per-file dict, deduplicating as we go
        all_word_list = list(set(all_word_list + list(newWordCountDictList[i].keys())))
    for i in range(0, len(all_word_list)):
        all_word_dict[all_word_list[i]] = 0
    return all_word_dict

....... .......

When I run spark-submit main.py, it prints this error log:

Traceback (most recent call last):
  File "/home/warrior/gitDir/pysparkCode/clusterNews/__main__.py", line 31, in <module>
    all_word_dict = wordseg.updateAllWordList(file_wc_dict_list)  # file_wc_dict_list.map(lambda file_wc_dict: wordseg.updateAllWordList(file_wc_dict))
  File "/home/warrior/gitDir/pysparkCode/clusterNews/wordSeg.py", line 54, in updateAllWordList
    n = len(newWordCountDictList)
TypeError: object of type 'PipelinedRDD' has no len()

How can I fix this? Thanks!

1 Answer:

Answer 0 (score: 4)

newWordCountDictList is an RDD, a distributed object whose partitions live on multiple worker nodes, not a local collection in the driver program. Python's len() only works on local objects that implement __len__, which an RDD does not; you either ask the RDD for its size with an action like count(), or pull the data back to the driver first.
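To see the difference, here is a minimal sketch you could run in a PySpark shell (the toy data is made up for illustration):

rdd = sc.parallelize([{"apple": 1}, {"spark": 2}])  # a distributed dataset
print(rdd.count())          # 2 -- count() is an RDD action evaluated on the cluster
local_list = rdd.collect()  # materializes all elements as a local Python list
print(len(local_list))      # 2 -- len() works on the local list
print(len(rdd))             # raises TypeError, same as in the question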

You can use

n = newWordCountDictList.count()

and

all_word_dict = wordseg.updateAllWordList(file_wc_dict_list.collect())

to get the correct result: count() returns the number of elements in the RDD, and collect() brings them back to the driver as a local Python list that len() can handle.
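Note that collect() materializes the entire RDD in driver memory, which is fine for small datasets but may not scale. As an alternative sketch (assuming, as in the question, that every element of file_wc_dict_list is a per-file word-count dict), the unique word set can be computed distributedly so that only the deduplicated words reach the driver:

all_words = (file_wc_dict_list
             .flatMap(lambda wc_dict: wc_dict.keys())  # emit every word of every file
             .distinct()                               # deduplicate across the cluster
             .collect())                               # only unique words reach the driver
all_word_dict = dict.fromkeys(all_words, 0)            # same result shape as updateAllWordList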