pySpark - Spark DF to Pandas DF - java.lang.IllegalArgumentException

Date: 2018-03-12 12:19:37

Tags: python pandas apache-spark dataframe pyspark

Some basic information up front:

  • Python: 2.7
  • OS: macOS 10.13.2 (High Sierra)
  • Anaconda Navigator: version 1.7.0

My basic workflow is as follows:

  1. Do some initial data extraction and transformation from HDFS using pySpark and Spark DataFrames.
  2. Convert the Spark DataFrame to a pandas DataFrame in order to use libraries like Seaborn. Here I use the .toPandas() function, but it raises a very cryptic error.
  3. As an example, here is a very small Spark DataFrame I tested with, which throws the same error as my large DataFrame:

    sampleList = [('john', 10000.0),('sally', 3.0),('dude', 10.0)]
    
    sparkTestDF = sqlContext.createDataFrame(sampleList, schema=['name','denominator'])
    
    sparkTestDF.toPandas()
    

    This ends up raising the error below. Any ideas on (a) what it means and (b) how to fix or work around it?

        Py4JJavaErrorTraceback (most recent call last)
    <ipython-input-15-b151034bf9ad> in <module>()
          1 sampleList = [('john', 10000.0),('sally', 3.0),('dude', 10.0)]
          2 sparkTestDF = sqlContext.createDataFrame(sampleList, schema=['name','denominator'])
    ----> 3 sparkTestDF.toPandas()
    
    /anaconda2/lib/python2.7/site-packages/pyspark/sql/dataframe.pyc in toPandas(self)
       1964                 raise RuntimeError("%s\n%s" % (_exception_message(e), msg))
       1965         else:
    -> 1966             pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
       1967 
       1968             dtype = {}
    
    /anaconda2/lib/python2.7/site-packages/pyspark/sql/dataframe.pyc in collect(self)
        464         """
        465         with SCCallSiteSync(self._sc) as css:
    --> 466             port = self._jdf.collectToPython()
        467         return list(_load_from_socket(port, BatchedSerializer(PickleSerializer())))
        468 
    
    /anaconda2/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
       1158         answer = self.gateway_client.send_command(command)
       1159         return_value = get_return_value(
    -> 1160             answer, self.gateway_client, self.target_id, self.name)
       1161 
       1162         for temp_arg in temp_args:
    
    /anaconda2/lib/python2.7/site-packages/pyspark/sql/utils.pyc in deco(*a, **kw)
         61     def deco(*a, **kw):
         62         try:
    ---> 63             return f(*a, **kw)
         64         except py4j.protocol.Py4JJavaError as e:
         65             s = e.java_exception.toString()
    
    /anaconda2/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
        318                 raise Py4JJavaError(
        319                     "An error occurred while calling {0}{1}{2}.\n".
    --> 320                     format(target_id, ".", name), value)
        321             else:
        322                 raise Py4JError(
    
    Py4JJavaError: An error occurred while calling o155.collectToPython.
    : java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
        at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply$mcI$sp(Dataset.scala:3195)
        at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3192)
        at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3192)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:3225)
        at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3192)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.base/java.lang.Thread.run(Thread.java:844)
    

1 Answer:

Answer 0 (score: 0)

I ran into exactly the same problem and fixed it by setting the JAVA_HOME environment variable to point to a Java 8 SDK.

This turns out to be a Java version problem: Spark 2.x cannot run on Java 9+. In your traceback, the frames immediately below

at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3192)

carry the java.base/ module prefix, which only exists on Java 9 and later, and the IllegalArgumentException itself comes from the ASM 5 ClassReader used by Spark's ClosureCleaner, which cannot parse class files compiled for Java 9. This is a known issue (see this related Stack Overflow link).
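
To confirm which Java version the Spark driver JVM is actually running, you can query it through Py4J right from the notebook. A quick check, assuming an active SparkContext named sc; a version of 9 or higher means the incompatible JVM:

    # Ask the driver JVM for its java.version system property via Py4J.
    print(sc._jvm.java.lang.System.getProperty("java.version"))
    # prints e.g. "1.8.0_162" on Java 8, "9.0.4" on Java 9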

You can set JAVA_HOME in your .bashrc, in Spark's conf files, or even inside the notebook itself, e.g. on Ubuntu:

%env JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/

On a Mac, this would look something like:

%env JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/
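
Keep in mind that JAVA_HOME is read when the JVM is launched, so it has to be set before the SparkContext is created; restart the kernel if a context is already running. Below is a minimal end-to-end sketch, assuming the Mac JDK path above and reusing the toy DataFrame from the question (the app name is arbitrary):

    import os

    # Must run before any SparkContext is started in this process,
    # because the launched JVM inherits JAVA_HOME at startup.
    os.environ["JAVA_HOME"] = "/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/"

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("toPandas-check").getOrCreate()

    sampleList = [('john', 10000.0), ('sally', 3.0), ('dude', 10.0)]
    sparkTestDF = spark.createDataFrame(sampleList, schema=['name', 'denominator'])

    # With a Java 8 JVM this returns a pandas DataFrame instead of
    # raising java.lang.IllegalArgumentException.
    print(sparkTestDF.toPandas())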