Java EOF error triggered only when I try to use an array of struct-type elements

Asked: 2019-10-22 13:40:26

Tags: python apache-spark bigdata root

I am trying to process data in the .root format with pyspark, on both my Mac and a Linux server.

When I call dataframe.show(1) on a field that is an array of struct-type elements, I get an EOF error.

But when I try the same operation on an integer-type field, it works fine.

I have tried both standalone mode and local mode...

Spark version = 2.0.2, Hadoop = 2.7
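
For reference, the versions actually in use can be confirmed from the session itself; a minimal check, assuming the SparkContext sc created in the code further below:

print(sc.version)   # Spark version string, e.g. "2.0.2"
print(sc.master)    # deploy mode, e.g. "local[*]" or "spark://host:7077"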

Here is the data schema

 |-- Particle: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- SortableObject: struct (nullable = true)
 |    |    |    |-- TObject: struct (nullable = true)
 |    |    |    |    |-- fUniqueID: integer (nullable = true)
 |    |    |    |    |-- fBits: integer (nullable = true)
 |    |    |-- PID: integer (nullable = true)
 |    |    |-- Status: integer (nullable = true)
 |    |    |-- IsPU: integer (nullable = true)
 |    |    |-- M1: integer (nullable = true)
 |    |    |-- M2: integer (nullable = true)
 |    |    |-- D1: integer (nullable = true)
 |    |    |-- D2: integer (nullable = true)
 |    |    |-- Charge: integer (nullable = true)
 |    |    |-- Mass: float (nullable = true)
 |    |    |-- E: float (nullable = true)
 |    |    |-- Px: float (nullable = true)
 |    |    |-- Py: float (nullable = true)
 |    |    |-- Pz: float (nullable = true)
 |    |    |-- P: float (nullable = true)
 |    |    |-- PT: float (nullable = true)
 |    |    |-- Eta: float (nullable = true)
 |    |    |-- Phi: float (nullable = true)
 |    |    |-- Rapidity: float (nullable = true)
 |    |    |-- CtgTheta: float (nullable = true)
 |    |    |-- D0: float (nullable = true)
 |    |    |-- DZ: float (nullable = true)
 |    |    |-- T: float (nullable = true)
 |    |    |-- X: float (nullable = true)
 |    |    |-- Y: float (nullable = true)
 |    |    |-- Z: float (nullable = true)
 |-- Particle_size: integer (nullable = true)
 |-- Muon: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- SortableObject: struct (nullable = true)
 |    |    |    |-- TObject: struct (nullable = true)
 |    |    |    |    |-- fUniqueID: integer (nullable = true)
 |    |    |    |    |-- fBits: integer (nullable = true)
 |    |    |-- PT: float (nullable = true)
 |    |    |-- Eta: float (nullable = true)
 |    |    |-- Phi: float (nullable = true)
 |    |    |-- T: float (nullable = true)
 |    |    |-- Charge: integer (nullable = true)
 |    |    |-- Particle: struct (nullable = true)
 |    |    |    |-- TObject: struct (nullable = true)
 |    |    |    |    |-- fUniqueID: integer (nullable = true)
 |    |    |    |    |-- fBits: integer (nullable = true)
 |    |    |-- IsolationVar: float (nullable = true)
 |    |    |-- IsolationVarRhoCorr: float (nullable = true)
 |    |    |-- SumPtCharged: float (nullable = true)
 |    |    |-- SumPtNeutral: float (nullable = true)
 |    |    |-- SumPtChargedPU: float (nullable = true)
 |    |    |-- SumPt: float (nullable = true)
 |-- Muon_size: integer (nullable = true)
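
The schema above is the output of printSchema(); assuming the hdf DataFrame loaded in the code further below, it can be reproduced with:

hdf.printSchema()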

Here is the error message

19/10/22 22:34:46 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.dianahep.sparkroot.core.SRComposite.read(types.scala:1299)
    at org.dianahep.sparkroot.core.SRComposite.read(types.scala:1161)
    at org.dianahep.sparkroot.core.SRComposite$$anonfun$read$20.apply(types.scala:1302)
    at org.dianahep.sparkroot.core.SRComposite$$anonfun$read$20.apply(types.scala:1302)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.dianahep.sparkroot.core.SRComposite.read(types.scala:1302)
    at org.dianahep.sparkroot.core.SRComposite.read(types.scala:1161)
    at org.dianahep.sparkroot.core.SRVector$$anonfun$read$14.apply(types.scala:1082)
    at org.dianahep.sparkroot.core.SRVector$$anonfun$read$14.apply(types.scala:1082)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.Range.foreach(Range.scala:160)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.dianahep.sparkroot.core.SRVector.read(types.scala:1082)
    at org.dianahep.sparkroot.core.SRRoot$$anonfun$read$1.apply(types.scala:106)
    at org.dianahep.sparkroot.core.SRRoot$$anonfun$read$1.apply(types.scala:106)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:74)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.dianahep.sparkroot.core.SRRoot.read(types.scala:106)
    at org.dianahep.sparkroot.core.SRRoot.read(types.scala:97)
    at org.dianahep.sparkroot.core.package$.readSparkRow(ast.scala:62)
    at org.dianahep.sparkroot.package$RootTreeIterator.next(sparkroot.scala:66)
    at org.dianahep.sparkroot.package$RootTreeIterator.next(sparkroot.scala:56)
    at scala.collection.Iterator$$anon$12.next(Iterator.scala:445)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Here is my data file, small_dihiggs_sample.root, and my code:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="Di_higgs_image")
sqlContext = SQLContext(sc)

# Load the "Delphes" tree from the ROOT file via the spark-root data source.
hdf = (sqlContext.read
       .format("org.dianahep.sparkroot")
       .option("tree", "Delphes")
       .load("small_dihiggs_sample.root"))

# Selecting the array-of-structs column and showing one row triggers the EOF error.
hdf.select("Particle").show(1)

But when I try

# Selecting a plain integer column from the same DataFrame works fine.
hdf.select("Particle_size").show(1)

it works fine...
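
To narrow down where the read breaks, a minimal diagnostic sketch (untested against this file; it assumes the same hdf DataFrame as above, and relies on Spark's nested-field access, which on an array of structs returns an array of that field):

from pyspark.sql.functions import explode

# Pull a single float field out of every Particle struct; on an array of
# structs, "Particle.PT" yields an array column of the PT values.
hdf.select("Particle.PT").show(1)

# Flatten the array into one row per particle, then project a few scalar
# fields; this exercises the same struct deserialization element by element.
hdf.select(explode("Particle").alias("p")).select("p.PID", "p.PT").show(1)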
