AWS EMR - ModuleNotFoundError: No module named 'pyarrow'

Asked: 2019-08-01 18:28:57

Tags: apache-spark pyspark amazon-emr pyarrow apache-arrow

I am running into a problem with the Apache Arrow Spark integration.

Using AWS EMR with Spark 2.4.3.

I have tested this on both a local standalone Spark instance and a Cloudera cluster, and everything works fine there.

These are set in spark-env.sh:

export PYSPARK_PYTHON=python3
export PYSPARK_PYTHON_DRIVER=python3

Confirmed this in the Spark shell:

spark.version
2.4.3
sc.pythonExec
python3
sc.pythonVer
python3

Running a basic pandas_udf that uses the Apache Arrow integration results in an error:

from pyspark.sql.functions import pandas_udf, PandasUDFType

df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ("id", "v"))

@pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
def subtract_mean(pdf):
    # pdf is a pandas.DataFrame
    v = pdf.v
    return pdf.assign(v=v - v.mean())

df.groupby("id").apply(subtract_mean).show()

Error on AWS EMR [no error on Cloudera or my local machine]:

ModuleNotFoundError: No module named 'pyarrow'

        at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:172)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122)
        at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:291)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:283)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:121)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Does anyone know what is going on? Some possible ideas...

Could PYTHONPATH be causing a problem, since I am not using Anaconda?

Is it related to the Spark version and the Arrow version?

This is the strangest part, because I am using the same versions on all three platforms (local desktop, Cloudera, EMR), and only EMR does not work...

I logged into all four EMR EC2 data nodes and tested that I can import pyarrow; it works fine there, but it fails when trying to use it with Spark.

# test

import numpy as np
import pandas as pd
import pyarrow as pa
df = pd.DataFrame({'one': [20, np.nan, 2.5],'two': ['january', 'february', 'march'],'three': [True, False, True]},index=list('abc'))
table = pa.Table.from_pandas(df)
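
One quick way to narrow this down is to ask the executors directly which interpreter they are running. A minimal sketch, run from the PySpark shell on EMR (it uses the shell-provided sc):

import sys

# Collect the Python binary each executor actually uses; if it differs from
# the interpreter where pyarrow was installed, that would explain the failure.
interpreters = (
    sc.parallelize(range(sc.defaultParallelism), sc.defaultParallelism)
      .map(lambda _: sys.executable)
      .distinct()
      .collect()
)
print(interpreters)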

2 Answers:

Answer 0 (score: 1)

On EMR, python3 is not picked up by default; you have to make it explicit. One way is to pass a config.json file when you create the cluster. It is available in the Edit software settings section of the AWS EMR UI. A sample JSON file looks like this:

[
  {
    "Classification": "spark-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "PYSPARK_PYTHON": "/usr/bin/python3"
        }
      }
    ]
  },
  {
    "Classification": "yarn-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "PYSPARK_PYTHON": "/usr/bin/python3"
        }
      }
    ]
  }
]

In addition, you need to install the pyarrow module on all core nodes, not just the master. For that you can use a bootstrap script when creating the cluster in AWS. Again, a sample bootstrap script can be as simple as:

#!/bin/bash
sudo python3 -m pip install pyarrow==0.13.0
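
Both pieces can be attached when the cluster is created; for example with the AWS CLI (cluster name, release label, instance details, and the S3 path below are placeholders; adjust them to your setup):

# config.json is the file shown above; install_pyarrow.sh is the bootstrap script
aws emr create-cluster \
  --name "spark-pyarrow" \
  --release-label emr-5.26.0 \
  --applications Name=Spark \
  --use-default-roles \
  --instance-type m5.xlarge \
  --instance-count 5 \
  --configurations file://./config.json \
  --bootstrap-actions Path=s3://your-bucket/install_pyarrow.sh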

Answer 1 (score: 0)

There are two options in your case:

One is to make sure the Python environment is correct on every machine:

  • Set PYSPARK_PYTHON to a Python interpreter that has your third-party modules (such as pyarrow) installed. You can use type -a python to check how many Python interpreters are available on your worker nodes.

  • If the Python interpreter path is the same on every node, you can set PYSPARK_PYTHON in spark-env.sh and then copy that file to all the other nodes (a minimal sketch follows this list). Read more: https://spark.apache.org/docs/2.4.0/spark-standalone.html
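
A minimal sketch of that file, assuming the interpreter with pyarrow installed is /usr/bin/python3 on every node:

# conf/spark-env.sh -- the same file on every node
export PYSPARK_PYTHON=/usr/bin/python3
export PYSPARK_DRIVER_PYTHON=/usr/bin/python3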

The other option is to add parameters on spark-submit:
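
For example, a sketch that assumes the standard spark.pyspark.python and spark.pyspark.driver.python configuration properties (available since Spark 2.1); the interpreter path and script name are placeholders:

# point both driver and executors at the interpreter that has pyarrow
spark-submit \
  --conf spark.pyspark.python=/usr/bin/python3 \
  --conf spark.pyspark.driver.python=/usr/bin/python3 \
  your_job.py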

Hope these help.