I recently started seeing a lot of failures in PySpark jobs running on an EMR cluster. The error is:
java.lang.IllegalArgumentException
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)
at org.apache.arrow.vector.ipc.message.MessageSerializer.readMessage(MessageSerializer.java:543)
at org.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:58)
at org.apache.arrow.vector.ipc.ArrowStreamReader.readSchema(ArrowStreamReader.java:132)
at org.apache.arrow.vector.ipc.ArrowReader.initialize(ArrowReader.java:181)
at org.apache.arrow.vector.ipc.ArrowReader.ensureInitialized(ArrowReader.java:172)
at org.apache.arrow.vector.ipc.ArrowReader.getVectorSchemaRoot(ArrowReader.java:65)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:162)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at org.apache.spark.sql.execution.python.ArrowEvalPythonExec$$anon$2.<init>(ArrowEvalPythonExec.scala:98)
at org.apache.spark.sql.execution.python.ArrowEvalPythonExec.evaluate(ArrowEvalPythonExec.scala:96)
at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:127)...
They all seem to happen in the `apply` step of a pandas Series. The only change I have found is that pyarrow was updated on Saturday (05/10/2019). Tests appear to pass with 0.14.1, so my question is: does anyone know whether this is a bug in the newly updated pyarrow, or is there some breaking change that will make pandas UDFs hard to use going forward?
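Since the failures correlate with a pyarrow upgrade, a useful first diagnostic step is to confirm which pyarrow version is actually installed on the nodes. A minimal sketch using only the standard library (`importlib.metadata` requires Python 3.8+; the helper name is my own):

```python
from importlib.metadata import version, PackageNotFoundError

def pyarrow_version():
    # Return the installed pyarrow version string, or None if it is absent.
    try:
        return version("pyarrow")
    except PackageNotFoundError:
        return None

print(pyarrow_version())
```

Running this on the driver and, via a trivial job, on the executors can reveal a version mismatch between nodes, which is a common cause of Arrow serialization errors on EMR.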
Answer 0 (score: 9)
This is not a bug. We made a major protocol change in 0.15.0 that makes the default behavior of pyarrow incompatible with older versions of Arrow in Java, and your Spark environment appears to be using an older version.
Your option is to set the environment variable
ARROW_PRE_0_15_IPC_FORMAT=1
Hopefully the Spark community will be able to upgrade to 0.15.0 in Java soon, and this problem will go away.
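One way to apply this on the PySpark side is a minimal sketch like the following. The key point is that the variable must be in the environment of every process that uses Arrow, i.e. it has to be set before those workers launch; setting it only on the driver is not enough on a cluster:

```python
import os

# Tell pyarrow >= 0.15.0 to emit the pre-0.15 Arrow IPC stream format,
# which the older Arrow Java library bundled with Spark can still read.
# On a cluster, this also needs to reach the executors, e.g. via the
# spark.executorEnv.* configuration shown in the next answer.
os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"
```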
There is some discussion of this elsewhere.

Answer 1 (score: 0)
Try adding the following flags when submitting in Spark:
spark-submit --deploy-mode cluster \
  --conf spark.yarn.appExecutorEnv.ARROW_PRE_0_15_IPC_FORMAT=1 \
  --conf spark.yarn.appMasterEnv.ARROW_PRE_0_15_IPC_FORMAT=1 \
  --conf spark.executorEnv.ARROW_PRE_0_15_IPC_FORMAT=1
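Equivalently, if you control the cluster configuration, the same environment variables can be set once in `spark-defaults.conf` instead of on every submit. A sketch, assuming YARN cluster mode as in the command above:

```
spark.yarn.appMasterEnv.ARROW_PRE_0_15_IPC_FORMAT  1
spark.executorEnv.ARROW_PRE_0_15_IPC_FORMAT        1
```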