Spark: 'Column' object is not callable

Asked: 2017-09-12 21:29:06

Tags: apache-spark pyspark

I tried installing Spark and running the commands given in the quick-start tutorial, but I get the following error:

https://spark.apache.org/docs/latest/quick-start.html

P-MBP:spark-2.0.2-bin-hadoop2.4 prem$ ./bin/pyspark 
Python 2.7.13 (default, Apr  4 2017, 08:44:49) 
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
17/09/12 17:26:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.0.2
      /_/

Using Python version 2.7.13 (default, Apr  4 2017 08:44:49)
SparkSession available as 'spark'.
>>> textFile = spark.read.text("README.md")
>>> textFile.count()
99
>>> textFile.first()
Row(value=u'# Apache Spark')
>>> linesWithSpark = textFile.filter(textFile.value.contains("Spark"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'Column' object is not callable


>>> dir(textFile.value)
['__add__', '__and__', '__bool__', '__class__', '__contains__', '__delattr__', '__dict__', '__div__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__init__', '__invert__', '__iter__', '__le__', '__lt__', '__mod__', '__module__', '__mul__', '__ne__', '__neg__', '__new__', '__nonzero__', '__or__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rsub__', '__rtruediv__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__weakref__', '_jc', 'alias', 'asc', 'astype', 'between', 'bitwiseAND', 'bitwiseOR', 'bitwiseXOR', 'cast', 'desc', 'endswith', 'getField', 'getItem', 'isNotNull', 'isNull', 'isin', 'like', 'name', 'otherwise', 'over', 'rlike', 'startswith', 'substr', 'when']

1 Answer:

Answer 0 (score: 3):

The Column.contains method was added in Spark 2.2 (SPARK-19706). You are using Spark 2.0.2, so the method does not exist there, and __getattr__ (the dot syntax) resolves contains as a nested Column instead.
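
To see why the call fails rather than the attribute lookup itself, here is a minimal illustration (a sketch of the pre-2.2 behavior using the same textFile DataFrame; not part of the original answer):

# On Spark 2.0.2, Column.__getattr__ treats any unknown attribute name as
# nested-field access, so the lookup succeeds and returns a Column:
col = textFile.value.contains      # equivalent to textFile.value["contains"]
print(type(col))                   # <class 'pyspark.sql.column.Column'>

# Calling that Column is what raises the error, since a Column is not a function:
col("Spark")                       # TypeError: 'Column' object is not callable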

You can use like instead:

textFile.filter(textFile.value.like("%Spark%"))
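
For completeness, a couple of alternatives that also exist in Spark 2.0.2 (a hedged sketch; the column name value comes from the DataFrame that spark.read.text produces):

from pyspark.sql import functions as F

# SQL LIKE pattern match (the fix suggested above):
linesWithSpark = textFile.filter(textFile.value.like("%Spark%"))

# Regular-expression match:
linesWithSpark = textFile.filter(textFile.value.rlike("Spark"))

# Substring position: instr returns 0 when the substring is absent:
linesWithSpark = textFile.filter(F.instr(textFile.value, "Spark") > 0)

On Spark 2.2 and later, the tutorial's textFile.value.contains("Spark") works as written.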