Error collecting data from a PySpark dataframe column

Asked: 2019-07-09 22:16:54

Tags: apache-spark pyspark apache-spark-sql pyspark-sql

I am using PySpark (Python 3.7 with Spark 2.4) and have a small piece of code that collects a date from one of the attributes in a DataFrame. I can run the same code from the pyspark command line, but it fails in my production code.

This is the line of code where I read the DataFrame "df" and collect the date from the "job_id" field:

>>> run_dt = map(lambda r: r[0], df.filter(df['delivery_date'] == '2017-12-31').select(max(substring(df['job_id'], 9, 10).cast("integer")).alias('last_run')).collect())[0]
>>> print(run_dt)
2017123101
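
For reference, below is a minimal, self-contained sketch of the same aggregation with explicit imports (the sample rows and the app name are hypothetical, chosen so that characters 9-18 of "job_id" hold the run date). In the shell this line presumably works because max and substring were brought in with a wildcard import; in a standalone script they must come from pyspark.sql.functions, or Python's builtin max() gets picked up instead. Note also that on Python 3 a map object is not subscriptable, so collect()[0][0] is a safer way to extract the scalar:

    # A minimal sketch, assuming the imports below and hypothetical sample
    # data in which a 10-digit run date starts at character 9 of job_id.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("collect-run-dt").getOrCreate()

    df = spark.createDataFrame(
        [("RUN000012017123101", "2017-12-31"),
         ("RUN000012017123001", "2017-12-30")],
        ["job_id", "delivery_date"],
    )

    # F.max / F.substring are the Spark SQL functions, not Python builtins.
    run_dt = (
        df.filter(df["delivery_date"] == "2017-12-31")
          .select(F.max(F.substring(df["job_id"], 9, 10).cast("integer")).alias("last_run"))
          .collect()[0][0]
    )
    print(run_dt)  # 2017123101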

The same line of code gives me an error in my production code when it is evaluated. The error message is -

  File "C:\Users\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\dataframe.py", line 533, in collect
  File "C:\Users\spark-2.4.2-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
  File "C:\Users\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py", line 63, in deco
  File "C:\Users\spark-2.4.2-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o68.collectToPython.
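
The Py4J summary above is truncated before the JVM root cause, so the actual failure is not visible from this trace alone. As a hedged debugging sketch (reusing F for pyspark.sql.functions from the example above), wrapping the collect() surfaces the full Java-side exception:

    # Debugging sketch: a Py4JJavaError carries the full JVM exception,
    # which the truncated Python-side summary omits.
    from py4j.protocol import Py4JJavaError

    try:
        rows = (
            df.filter(df["delivery_date"] == "2017-12-31")
              .select(F.max(F.substring(df["job_id"], 9, 10).cast("integer")).alias("last_run"))
              .collect()
        )
    except Py4JJavaError as e:
        # java_exception is the underlying JVM Throwable; printing it shows
        # the root cause behind "An error occurred while calling o68.collectToPython".
        print(e.java_exception.toString())
        raise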

0 Answers