I have a timestamp dataset in the format shown below.
I have written a udf in pyspark to process this dataset and return a Map of key/value pairs, but I am getting the error message below.
Dataset: df_ts_list
+--------------------+
| ts_list|
+--------------------+
|[1477411200, 1477...|
|[1477238400, 1477...|
|[1477022400, 1477...|
|[1477224000, 1477...|
|[1477256400, 1477...|
|[1477346400, 1476...|
|[1476986400, 1477...|
|[1477321200, 1477...|
|[1477306800, 1477...|
|[1477062000, 1477...|
|[1477249200, 1477...|
|[1477040400, 1477...|
|[1477090800, 1477...|
+--------------------+
Pyspark UDF:
>>> def on_time(ts_list):
... import sys
... import os
... sys.path.append('/usr/lib/python2.7/dist-packages')
... os.system("sudo apt-get install python-numpy -y")
... import numpy as np
... import datetime
... import time
... from datetime import timedelta
... ts = np.array(ts_list)
... if ts.size == 0:
... count = 0
... duration = 0
... st = time.mktime(datetime.now())
... ymd = str(datetime.fromtimestamp(st).date())
... else:
... ts.sort()
... one_tag = []
... start = float(ts[0])
... for i in range(len(ts)):
... if i == (len(ts)) - 1:
... end = float(ts[i])
... a_round = [start, end]
... one_tag.append(a_round)
... else:
... diff = (datetime.datetime.fromtimestamp(float(ts[i+1])) - datetime.datetime.fromtimestamp(float(ts[i])))
... if abs(diff.total_seconds()) > 3600:
... end = float(ts[i])
... a_round = [start, end]
... one_tag.append(a_round)
... start = float(ts[i+1])
... one_tag = [u for u in one_tag if u[1] - u[0] > 300]
... count = int(len(one_tag))
... duration = int(np.diff(one_tag).sum())
... ymd = str(datetime.datetime.fromtimestamp(time.time()).date())
... return {'count':count,'duration':duration, 'ymd':ymd}
Pyspark code:
>>> on_time=udf(on_time, MapType(StringType(),StringType()))
>>> df_ts_list.withColumn("one_tag", on_time("ts_list")).select("one_tag").show()
Error:
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/lib/spark/python/pyspark/worker.py", line 172, in main
    process()
  File "/usr/lib/spark/python/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/lib/spark/python/pyspark/worker.py", line 106, in <lambda>
    func = lambda _, it: map(mapper, it)
  File "/usr/lib/spark/python/pyspark/worker.py", line 92, in <lambda>
    mapper = lambda a: udf(*a)
  File "/usr/lib/spark/python/pyspark/worker.py", line 70, in <lambda>
    return lambda *a: f(*a)
  File "<stdin>", line 27, in on_time
  File "/usr/lib/spark/python/pyspark/sql/functions.py", line 39, in _
    jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
AttributeError: 'NoneType' object has no attribute '_jvm'
Any help would be appreciated!
Answer 0 (score: 15)
In my case, I was getting this error because I was trying to execute pyspark code before the pyspark environment had been set up.
Making sure that pyspark was available and initialized before making calls that depend on pyspark.sql.functions fixed the issue for me.
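A minimal sketch of that ordering (the app name and the tiny example DataFrame are only placeholders for illustration):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import MapType, StringType

# Create the session (and with it the SparkContext) first ...
spark = SparkSession.builder.appName("udf-example").getOrCreate()

# ... and only then define udfs and call into pyspark.sql.functions,
# which reach the JVM through the active context.
to_map = F.udf(lambda x: {"value": str(x)}, MapType(StringType(), StringType()))
spark.createDataFrame([(1,), (2,)], ["x"]).select(to_map("x")).show()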
Answer 1 (score: 13)
The error message says that on line 27 of the udf you are calling some pyspark sql function. That is the line with abs(), so I suppose that somewhere above it you call from pyspark.sql.functions import *, which overrides Python's built-in abs() function.
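A minimal sketch of the shadowing (no Spark context is needed just to see the name being replaced):

from pyspark.sql.functions import *   # re-binds the name abs to pyspark.sql.functions.abs

# abs is no longer the Python builtin: it now builds a Column expression and needs a
# live SparkContext, which is None on the executors -- hence the '_jvm' error in the udf.
print(abs)   # <function abs ...> instead of <built-in function abs>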
Answer 2 (score: 2)
Just to clarify, a problem that many people run into stems from a single bad programming habit: from blah import *. When you do

from pyspark.sql.functions import *

you overwrite a lot of Python's built-in functions. I strongly recommend importing the functions under a namespace instead:
import pyspark.sql.functions as f
# or
import pyspark.sql.functions as pyf
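With the aliased import, the builtin and the Spark function stay clearly distinct; a small sketch (the app name, column name, and values are only illustrative):

from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.appName("alias-example").getOrCreate()
df = spark.createDataFrame([(-3,), (7,)], ["duration"])

print(abs(-3))                              # 3 -- Python's builtin, safe to call inside a udf
df.select(f.abs(f.col("duration"))).show()  # Spark's column function, applied to a DataFrame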
Answer 3 (score: 1)
This exception also occurs when the udf cannot handle None values.
For example, the following code results in the same exception:
get_datetime = udf(lambda ts: to_timestamp(ts), DateType())
df = df.withColumn("datetime", get_datetime("ts"))
But this one does not:
get_datetime = udf(lambda ts: to_timestamp(ts) if ts is not None else None, DateType())
df = df.withColumn("datetime", get_datetime("ts"))
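A None-safe variant that sticks to plain Python inside the udf is sketched below; the timestamp format '%Y-%m-%d %H:%M:%S' and the column names are assumptions, not from the original answer:

from datetime import datetime
from pyspark.sql.functions import udf
from pyspark.sql.types import DateType

# Guard against None before parsing, and parse with plain Python so the udf
# does not depend on pyspark.sql.functions on the executors.
parse_date = udf(
    lambda ts: datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").date() if ts is not None else None,
    DateType(),
)
df = df.withColumn("datetime", parse_date("ts"))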
Answer 4 (score: 0)
Make sure that you are initializing the Spark context. For example:
spark = SparkSession \
    .builder \
    .appName("myApp") \
    .config("...") \
    .getOrCreate()
sqlContext = SQLContext(spark)
productData = sqlContext.read.format("com.mongodb.spark.sql").load()
Or as in:
spark = SparkSession.builder.appName('company').getOrCreate()
sqlContext = SQLContext(spark)
productData = sqlContext.read.format("csv").option("delimiter", ",") \
    .option("quote", "\"").option("escape", "\"") \
    .option("header", "true").option("inferSchema", "true") \
    .load("/path/thecsv.csv")
Answer 5 (score: 0)
I hit this error in my Jupyter notebook. I added the commands below and it worked:

import findspark
findspark.init()

import pyspark
sc = pyspark.SparkContext(appName="")

It is the same problem of the Spark context not being ready or having been stopped.