Count lines with fewer than 5 words

Asked: 2016-11-18 01:16:30

Tags: python apache-spark pyspark

Using pyspark, I want to count the lines that have fewer than 5 words.

I wrote this code, but I can't figure out what's wrong with it:

from pyspark import SparkConf
from pyspark.sql import SparkSession, Row


spark = SparkSession.builder.master("spark://master:7077").appName('test').config(conf=SparkConf()).getOrCreate()
df = spark.read.text('text.txt')
rdd = df.rdd
print(df.count())
rdd1=rdd.filter(lambda line: len((line.split(" "))<5)).collect()
print(rdd1.count())

This is a small part of the error:
-----------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
    <ipython-input-48-27233afa0b82> in <module>()
          9 rdd = df.rdd
         10 print(df.count())
    ---> 11 rdd1=rdd.filter(lambda line: len((line.split(" "))<5)).collect()
         12 print(rdd1.count())
         13 
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 144.0 failed 1 times, most recent failure: Lost task 0.0 in stage 144.0 (TID 144, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/Users/ff/spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 1497, in __getattr__
    idx = self.__fields__.index(item)
ValueError: 'split' is not in list

2 Answers:

Answer 0 (score: 0):

I think you have some parentheses in the wrong place in this expression:

rdd1=rdd.filter(lambda line: len((line.split(" "))<5)).collect()

The way you have it, you are doing:

len(... < 5)

instead of:

len(...) < 5
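
With the comparison moved outside the len() call, the line would read as below. Note that this alone still fails with the same 'split' is not in list error, because each element of df.rdd is a Row rather than a string (see the next answer):

rdd1 = rdd.filter(lambda line: len(line.split(" ")) < 5).collect()  # parentheses fixed, but line is still a Row here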

Answer 1 (score: 0):

I solved it. The problem was that I was trying to split a Row object rather than a string. Here is the new line:

rdd = rdd.filter(lambda line: len(line[0].split(" ")) < 5).collect()  # line[0] extracts the string from the Row
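
Note that .collect() returns a plain Python list, so the count of the result is len(rdd), not rdd.count(). For reference, a minimal sketch of the same count done with DataFrame functions instead of the RDD API, using the value column that spark.read.text creates:

from pyspark.sql import functions as F

# spark.read.text loads each line of the file into a string column named "value"
short_lines = df.filter(F.size(F.split(df["value"], " ")) < 5)
print(short_lines.count())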