How do I set worker/executor environment variables in Apache Spark?

Asked: 2015-03-30 18:59:52

Tags: amazon-web-services amazon-s3 apache-spark distributed-computing

My Spark program on EMR keeps failing with this error:

Caused by: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:421)
    at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:128)
    at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:397)
    at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:148)
    at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:149)
    at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
    at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:573)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:425)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
    at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:334)
    at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:281)
    at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:942)
    at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2148)
    at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2075)
    at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1093)
    at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:548)
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:172)
    at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at org.apache.hadoop.fs.s3native.$Proxy8.retrieveMetadata(Unknown Source)
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:414)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem.create(NativeS3FileSystem.java:341)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)

I did some research and found that, when security is less of a concern, this certificate check can be disabled by setting the following JVM system property:

com.amazonaws.sdk.disableCertChecking=true

But I can only set it with spark-submit.sh --conf, which affects only the driver, while most of the errors occur on the workers.
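For illustration, the driver-only approach described above might look like this (a sketch; main.py is a placeholder, and spark.driver.extraJavaOptions is one standard way to pass a JVM system property to the driver alone):

spark-submit \
    --conf "spark.driver.extraJavaOptions=-Dcom.amazonaws.sdk.disableCertChecking=true" \
    main.py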

Is there a way to propagate it to the workers?

Many thanks.

4 Answers:

Answer 0 (score: 5)

Stumbled upon something in the Spark documentation:

spark.executorEnv.[EnvironmentVariableName]

Add the environment variable specified by EnvironmentVariableName to the Executor process. The user can specify multiple of these to set multiple environment variables.

So in your case, I would set the Spark configuration option spark.executorEnv.com.amazonaws.sdk.disableCertChecking to true and see if that helps.
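On the command line, that could look like the following (a sketch; main.py is a placeholder, and it assumes the AWS SDK also honors the setting when it arrives as an environment variable rather than as a JVM system property):

spark-submit \
    --conf spark.executorEnv.com.amazonaws.sdk.disableCertChecking=true \
    main.py

If the SDK only reads it as a system property, spark.executor.extraJavaOptions is the analogous option for passing a -D flag to the executor JVMs.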

Answer 1 (score: 0)

Adding a bit more to the existing answer.

import pyspark


def get_spark_context(app_name):
    # Configure the application; all settings must be in place
    # before the SparkContext is created, or the executor
    # environment settings are silently ignored
    conf = pyspark.SparkConf()
    conf.set('spark.app.name', app_name)

    # Set an environment variable for the executors
    conf.set('spark.executorEnv.SOME_ENVIRONMENT_VALUE', 'I_AM_PRESENT')

    # init & return
    sc = pyspark.SparkContext.getOrCreate(conf=conf)
    return pyspark.SQLContext(sparkContext=sc)

The SOME_ENVIRONMENT_VALUE environment variable will then be available in the executors/workers.

In your Spark application, you can access it like this:

import os
some_environment_value = os.environ.get('SOME_ENVIRONMENT_VALUE')
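Keep in mind that this lookup only reflects the executor's environment when it runs inside a task on an executor; evaluated on the driver, it reads the driver's environment instead. A minimal sanity check could look like this (a sketch, reusing the names from above):

import os
import pyspark

sc = pyspark.SparkContext.getOrCreate()  # reuses the context created earlier
# run the lookup inside a task so it executes on an executor
print(sc.parallelize(range(2)).map(lambda _: os.environ.get('SOME_ENVIRONMENT_VALUE')).collect())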

Answer 2 (score: 0)

Building on the other answers, here is a complete example (PySpark 2.4.1). In this example I force all workers to spawn only one thread per core in the Intel MKL (Math Kernel Library):

import pyspark

conf = pyspark.conf.SparkConf().setAll([
    ('spark.executorEnv.OMP_NUM_THREADS', '1'),
    ('spark.workerEnv.OMP_NUM_THREADS', '1'),
    ('spark.executorEnv.OPENBLAS_NUM_THREADS', '1'),
    ('spark.workerEnv.OPENBLAS_NUM_THREADS', '1'),
    ('spark.executorEnv.MKL_NUM_THREADS', '1'),
    ('spark.workerEnv.MKL_NUM_THREADS', '1'),
])

spark = pyspark.sql.SparkSession.builder.config(conf=conf).getOrCreate()

# print current PySpark configuration to be sure
print("Current PySpark settings: ", spark.sparkContext._conf.getAll())

Answer 3 (score: 0)

For Spark 2.4, @Amit Kushwaha's method does not work.

I have tested:

1. Cluster mode

spark-submit \
    --conf spark.executorEnv.DEBUG=1 \
    --conf spark.appMasterEnv.DEBUG=1 \
    --conf spark.yarn.appMasterEnv.DEBUG=1 \
    --conf spark.yarn.executorEnv.DEBUG=1 \
    main.py

2. Client mode

spark-submit --deploy-mode=client \
    --conf spark.executorEnv.DEBUG=1 \
    --conf spark.appMasterEnv.DEBUG=1 \
    --conf spark.yarn.appMasterEnv.DEBUG=1 \
    --conf spark.yarn.executorEnv.DEBUG=1 \
    main.py

None of the above can set the environment variable in the executor's operating system (that is, os.environ.get('DEBUG') cannot read it).


The only way was to read it from spark.conf:

Submit:

spark-submit --conf DEBUG=1 main.py

Read the variable:

DEBUG = spark.conf.get('DEBUG')
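Note that spark.conf is read on the driver; to use the value inside executor tasks, one option is to capture it in a local variable so it ships with the task closure. A sketch under that assumption (the default value '0' is illustrative):

# capture the driver-side value; it is serialized into each task closure
debug = spark.conf.get('DEBUG', '0')
flags = spark.sparkContext.parallelize(range(2)).map(lambda _: debug).collect()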