I am using Zeppelin to connect to a remote Spark cluster.
The remote Spark uses the system Python 2.7.
I want to switch to miniconda3, which has the library pyarrow installed. What I did was add
PYSPARK_PYTHON="/usr/local/miniconda3/bin/python"
to spark-env.sh on both the Spark master and the slaves.
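For reference, this is roughly how that line looks inside spark-env.sh on each node; the export form is the usual convention, and the PYSPARK_DRIVER_PYTHON line is an assumption (only needed if the driver should use the same interpreter), not part of the original change:

# appended to $SPARK_HOME/conf/spark-env.sh on the master and every slave
export PYSPARK_PYTHON="/usr/local/miniconda3/bin/python"
# assumption: only if the driver should also use miniconda3
# export PYSPARK_DRIVER_PYTHON="/usr/local/miniconda3/bin/python"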
Then I run this code:
%spark.pyspark

import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf(df.schema, PandasUDFType.GROUPED_MAP)
def process_order_items(pdf):

    pdf.loc[:, 'total_price'] = pdf['price'] * pdf['count']

    d = {'has_discount': 'count',
         'clearance': 'count',
         'count': ['count', 'sum'],
         'price_guide': 'max',
         'total_price': 'sum'
         }

    pdf1 = pdf.groupby('day').agg(d)
    pdf1.columns = pdf1.columns.map('_'.join)

    d1 = {'has_discount_count': 'discount_order_count',
          'clearance_count': 'clearance_order_count',
          'count_count': 'order_count',
          'count_sum': 'sale_count',
          'price_guide_max': 'price_guide',
          'total_price_sum': 'total_price'
          }

    pdf2 = pdf1.rename(columns=d1)

    pdf2.loc[:, 'discount_sale_count'] = pdf.loc[pdf.has_discount > 0, 'count'].resample(freq).sum()
    pdf2.loc[:, 'clearance_sale_count'] = pdf.loc[pdf.clearance > 0, 'count'].resample(freq).sum()
    pdf2.loc[:, 'price'] = pdf2.total_price / pdf2.sale_count

    pdf2 = pdf2.drop(pdf2[pdf2.order_count == 0].index)

    return pdf2

results = df.groupby("store_id", "product_id").apply(process_order_items)

results.select(['store_id', 'price']).show(5)
and get this error:
Py4JJavaError: An error occurred while calling o172.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 4 times, most recent failure: Lost task 0.3 in stage 6.0 (TID 143, 10.104.33.18, executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 230, in main
process()
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 225, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 150, in <lambda>
func = lambda _, it: map(mapper, it)
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 276, in load_stream
import pyarrow as pa
ImportError: No module named pyarrow
10.104.33.18 is the Spark master, so I think PYSPARK_PYTHON is not set correctly there.
I logged in to the master and the slave servers and ran pyspark on each of them, and found that import pyarrow does not throw an exception.
PS: pyarrow is also installed on the machine running Zeppelin.
More info (A, B, C are the Spark machines; Zeppelin runs on D):

- PYSPARK_PYTHON is set in spark-env.sh on A, B, C.
- import pyarrow works fine with /usr/local/spark/bin/pyspark on A, B, C.
- import pyarrow works fine with the custom Python (miniconda3) on A, B, C.
- import pyarrow works fine with D's default Python (miniconda3; its path is different from the one on A, B and C, but that shouldn't matter).

So I have no idea why it doesn't work.
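If it helps narrow this down, the paragraph below prints which Python interpreter the driver and the executors actually pick up. This is a minimal diagnostic sketch, assuming sc is the SparkContext that Zeppelin injects:

%spark.pyspark
import sys

# Python used by the driver side of the pyspark interpreter
print(sys.executable)

# Python actually launched for the executor workers
# (one entry per distinct worker interpreter path)
print(sc.range(0, 8, numSlices=8)
        .map(lambda _: __import__('sys').executable)
        .distinct()
        .collect())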
Answer 0 (score: 0):
Go to the Zeppelin configuration folder ($ZEPPELIN_HOME/conf) and find the file interpreter.json.
In that file, look for the interpreter you want to fix (spark in this case).
Update the following property so it points to your Python installation:
- "zeppelin.pyspark.python": "python"
+ "zeppelin.pyspark.python": "/usr/bin/anaconda/bin/python"