I have a module I wrote that contains functions acting on PySpark DataFrames. They do transformations on columns of a DataFrame and then return a new DataFrame. Here is an example of the code, abbreviated to include just one of the functions:
from pyspark.sql import functions as F
from pyspark.sql import types as t
import pandas as pd
import numpy as np
metadta=pd.DataFrame(pd.read_csv("metadata.csv")) # this contains metadata on my dataset
def str2num(text):
    if type(text)==None or text=='' or text=='NULL' or text=='null':
        return 0
    elif len(text)==1:
        return ord(text)
    else:
        newnum=''
        for lettr in text:
            newnum=newnum+str(ord(lettr))
        return int(newnum)

str2numUDF = F.udf(lambda s: str2num(s), t.IntegerType())

def letConvNum(df):    # df is a PySpark DataFrame
    #Get a list of columns that I want to transform, using the metadata Pandas DataFrame
    chng_cols=metadta[(metadta.comments=='letter conversion to num')].col_name.tolist()
    for curcol in chng_cols:
        df=df.withColumn(curcol, str2numUDF(df[curcol]))
    return df
That's my module, call it mymodule.py. If I start up the PySpark shell, I do the following:
import mymodule as mm
myf=sqlContext.sql("select * from tablename limit 10")
I checked out myf (a PySpark DataFrame) and it's fine. I verified that I actually imported mymodule by trying the str2num function:
mm.str2num('a')
97
So it really is importing the module. Then if I try this:
df2=mm.letConvNum(myf)
and do this to check whether it worked:
df2.show()
It attempts the operation, but then it crashes:
16/03/10 16:10:44 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 365)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
File "test2.py", line 16, in <module>
str2numUDF=F.udf(lambda s: str2num(s), t.IntegerType())
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1460, in udf
return UserDefinedFunction(f, returnType)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1422, in __init__
self._judf = self._create_judf(name)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1430, in _create_judf
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command, self)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2317, in _prepare_for_python_RDD
[x._jbroadcast for x in sc._pickled_broadcast_vars],
AttributeError: 'NoneType' object has no attribute '_pickled_broadcast_vars'
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:397)
at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:362)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/03/10 16:10:44 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.3.4.0-3485/spark/python/pyspark/sql/dataframe.py", line 256, in show
print(self._jdf.showString(n, truncate))
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/hdp/2.3.4.0-3485/spark/python/pyspark/sql/utils.py", line 36, in deco
return f(*a, **kw)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o7299.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 365, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
File "test2.py", line 16, in <module>
str2numUDF=F.udf(lambda s: str2num(s), t.IntegerType())
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1460, in udf
return UserDefinedFunction(f, returnType)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1422, in __init__
self._judf = self._create_judf(name)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1430, in _create_judf
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command, self)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2317, in _prepare_for_python_RDD
[x._jbroadcast for x in sc._pickled_broadcast_vars],
AttributeError: 'NoneType' object has no attribute '_pickled_broadcast_vars'
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:397)
at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:362)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:215)
at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207)
at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385)
at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1903)
at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1384)
at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1314)
at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1377)
at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
File "test2.py", line 16, in <module>
str2numUDF=F.udf(lambda s: str2num(s), t.IntegerType())
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1460, in udf
return UserDefinedFunction(f, returnType)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1422, in __init__
self._judf = self._create_judf(name)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1430, in _create_judf
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command, self)
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2317, in _prepare_for_python_RDD
[x._jbroadcast for x in sc._pickled_broadcast_vars],
AttributeError: 'NoneType' object has no attribute '_pickled_broadcast_vars'
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:397)
at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:362)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
As a check, I opened up a clean shell and, instead of importing the module, I defined the str2num function and the UDF directly in the interactive shell. I then typed in the contents of the last function and did the same final check:
df2.show()
This time, I got back the transformed DataFrame I was expecting.
Why does it only work when the functions are entered interactively, and not when they are read in from the module? I know it is reading the module, since the regular function str2num works.
Answer 0 (score: 4)
I ran into this same error and traced down the stack trace.
In my case, I was building an egg file and then passing it to Spark via the --py-files option.
Regarding the error, I think it boils down to the fact that when you call F.udf(str2num, t.IntegerType()), a UserDefinedFunction instance is created before Spark is running, so it holds an empty reference to the SparkContext, call it sc. When the UDF is run, sc._pickled_broadcast_vars is referenced, which throws the AttributeError you see in your output.
My workaround is to avoid creating the UDF until Spark is running (and hence there is an active SparkContext). In your case, you could change your definition of letConvNum so that the UDF is created inside the function, along these lines:
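# Sketch of the reworked function: the UDF is built at call time, when an active SparkContext exists
def letConvNum(df):    # df is a PySpark DataFrame
    # Get the list of columns to transform, using the metadata Pandas DataFrame
    chng_cols = metadta[(metadta.comments=='letter conversion to num')].col_name.tolist()
    str2numUDF = F.udf(str2num, t.IntegerType())    # created here, not at import time
    for curcol in chng_cols:
        df = df.withColumn(curcol, str2numUDF(df[curcol]))
    return df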
Note: I haven't actually tested the code above, but the change in my own code was similar and everything worked.
Also, for the interested reader, see the Spark code for UserDefinedFunction.
Answer 1 (score: 1)
If you only use the UDF inside other functions, you can wrap it in a small class so that udf() is not called until the wrapper is actually applied (by which point a SparkContext exists):
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

class Udf(object):
    def __init__(s, func, spark_type):
        s.func, s.spark_type = func, spark_type

    def __call__(s, *args):
        # udf() is only invoked here, when the wrapper is applied to columns
        return udf(s.func, s.spark_type)(*args)

# myfunc and df are placeholders for your own function and DataFrame
myfunc_udf = Udf(myfunc, StringType())

def processing():
    df_new = df.select(myfunc_udf('somefield'))
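Applied to the question's module, that could look something like this (an untested sketch, reusing str2num and metadta from mymodule.py):

from pyspark.sql.types import IntegerType

str2numUDF = Udf(str2num, IntegerType())    # no SparkContext is touched at import time

def letConvNum(df):
    chng_cols = metadta[(metadta.comments=='letter conversion to num')].col_name.tolist()
    for curcol in chng_cols:
        # udf() is created inside the call, while Spark is running
        df = df.withColumn(curcol, str2numUDF(df[curcol]))
    return df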
Answer 2 (score: 1)
I think a cleaner solution would be to use the udf decorator to define your udf function, for example:
With this solution, the udf does not reference any other function, so it won't throw that error at you.
For some older Spark versions, the decorator doesn't support typed UDFs and you may have to define a custom decorator. In the simplest, untyped form you can write:
from pyspark.sql import functions as F

@F.udf    # without an explicit return type, the UDF defaults to StringType
def str2numUDF(text):
    if text is None or text == '' or text == 'NULL' or text == 'null':
        return 0
    elif len(text) == 1:
        return ord(text)
    else:
        newnum = ''
        for lettr in text:
            newnum = newnum + str(ord(lettr))
        return int(newnum)
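A custom typed decorator for those versions could be a thin wrapper around F.udf, something like this (a sketch; typed_udf is just an illustrative name, not an existing API):

from pyspark.sql import functions as F
from pyspark.sql import types as t

def typed_udf(return_type):
    # Returns a decorator that wraps a plain Python function
    # in a UDF with the given return type
    def wrapper(func):
        return F.udf(func, return_type)
    return wrapper

@typed_udf(t.IntegerType())
def str2numUDF(text):
    ...    # conversion logic from str2num goes here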
Answer 3 (score: 0)
What Spark version are you on, btw?
Convert your function to a UDF like this:
str2numUDF = F.udf(str2num, t.IntegerType())
There's no need for a lambda function here.
Answer 4 (score: 0)
I had been stuck on this problem for 20 hours. Thank you for the solutions!
Here is my variant, in case anyone is interested in how I solved the same problem, although it is mostly drawn from the code/responses above.
The purpose here is simply to transform the string columns to show their lengths, but of course you can do anything (I do data-type checking and error tracing in my main application).
The udf I'm actually using is considerably more complex, but this is how I tested that my udf was working.
Assume all of your DataFrame columns are StringType(); in my case I had 4 string columns.
Solution:
I created a separate .py file called myfunctions, with the following inside:
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType
import logging

def str2num(text):
    if text is None or text == '' or text == 'NULL' or text == 'null':
        return 0
    else:
        return len(text)

def letConvNum(df, columns):
    str2numUDF = F.udf(str2num, IntegerType())    # UDF is created at call time
    logging.info(columns)
    index = 0
    for curcol in columns:
        df = df.withColumn(curcol, str2numUDF(df[curcol]))
        index += 1
    return df
Then, inside my main class, I add the new .py file to the sparkContext and use it:
# My understanding is that this ensures your function is added to Spark across all nodes
sc.addPyFile("./myfunctions.py")
import myfunctions as mf

from pyspark.sql.types import StructField, StructType, StringType

# Dynamically create headers based on config - simplified for this example
schemaString = "YearMonth,IMEI,IMSI,MSISDN"
fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split(",")]
schema = StructType(fields)

df = sqlContext.read.format('com.databricks.spark.csv').options(header='false', inferschema='false', delimiter='|').load('/app/teacosy/invictus/kenya/SAF_QUALCOMM_IMEI_20170321.txt', schema=schema)

# Read and write the file to get parquet; note this was to optimize MASSIVE files (50-200g)
df.write.parquet("data.parquet", mode='overwrite')
dataframe = sqlContext.read.parquet("data.parquet")

df2 = mf.letConvNum(dataframe, schemaString.split(","))
df2.show()
Output:
+---------+---------------+---------------+------------+
|YearMonth| IMEI| IMSI| MSISDN|
+---------+---------------+---------------+------------+
| 201609|869859025975610|639021005869699|254724884336|
| 201609|359521062182040|639021025339132|254721224577|
| 201609|353121070662770|639021025339132|254721224577|
| 201609|868096015837410|639021025339132|254721224577|
| 201609|866204020015610|639021025339132|254721224577|
| 201609|356051060479107|639028040455896|254710404131|
| 201609|353071062803703|639027641207269|254725555262|
| 201609|356899067316490|639027841002602|254711955201|
| 201609|860357020164930|639028550063234|254715570856|
| 201609|862245026673900|639028940332785|254728412070|
| 201609|352441075290910|639029340152407|254714582871|
| 201609|862074027499277|639029340152407|254714582871|
| 201609|357036073532528|639028500408346|254715408346|
| 201609|356546060475230|639021011628783|254722841516|
| 201609|356546060475220|639021011628783|254722841516|
| 201609|866838023727117|639028840277749|254718492024|
| 201609|354210053950950|639029440054836|254729308302|
| 201609|866912020393040|639029870328080|254725528182|
| 201609|357921070054540|639028340694869|254710255083|
| 201609|357977056264767|639027141561199|254721977494|
I hope this helps anyone struggling to figure out why their pyspark application freezes or hangs... so frustrating, omg...