Custom logging configuration is overridden by Spark's default logging configuration

Date: 2018-04-07 17:16:06

Tags: scala apache-spark logging log4j hadoop2

I am trying to write the Spark logs to a custom location on the edge node, but my log4j.properties file is being overridden by the cluster's default properties file at spark2-client/conf/log4j.properties.

Please help me resolve this issue.

Here are the details:

I am using the following versions: Spark 2.1.1.2.6.2.25-1, Scala 2.11.8

Here is my spark-submit command:

spark-submit \
--files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties \
--class com.abc.datalake.ingestion.DataCleansingValidation \
--master yarn --deploy-mode cluster \
--conf spark.executor.memory=12G \
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
--conf spark.driver.memory=2g \
--conf salience=no \
--conf spark.executor.instances=10 \
--conf spark.executor.cores=3 \
--conf spark.rule_src_path='adl://abcdadatalakedev.azuredatalakestore.net/Intake/CDCTest/Meta_RV' \
--conf spark.num_of_partition=200 \
--conf 'spark.eventLog.dir=file:///home/abcdadevadmin/spark_jar/logs/' \
adl://abcdadatalakedev.azuredatalakestore.net/Intake/jar/DataValidationFrameWorkBaselineCDC.jar cat_1 

Here is my properties file:

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

# Set everything to be logged to the console
log4j.rootCategory=DEBUG, console, FILE
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# User log
log4j.logger.DataValidationFramework=DEBUG,ROLLINGFILE
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ROLLINGFILE.File=file:///home/abcdadevadmin/spark_jar/logs/log.out
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.DatePattern='.'yyyy-MM-dd-HH-mm

Here are the logs from the Spark job.

In the log below, the -Dlog4j.configuration property is set twice: one entry points to my custom properties file and the other to the default cluster properties (see the note after the log).

SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.25-1/spark2/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.25-1/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 14291046      4 -r-x------   1 yarn     hadoop       3635 Apr  6 05:34 ./__spark_conf__/log4j.properties
 14291064      8 -r-x------   1 yarn     hadoop       4221 Apr  6 05:34 ./__spark_conf__/task-log4j.properties
    exec /bin/bash -c "LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx12288m 
 '-Dhdp.version=' 
 '-Detwlogger.component=sparkexecutor' 
 '-DlogFilter.filename=SparkLogFilters.xml' 
 '-Dlog4j.configuration=file:/home/abcdadevadmin/spark_jar/log4j/log4j.properties' 
 '-DpatternGroup.filename=SparkPatternGroups.xml' 
 '-Dlog4jspark.root.logger=INFO,console,RFA,ETW,Anonymizer' 
 '-Dlog4jspark.log.dir=/var/log/sparkapp/\${user.name}' 
 '-Dlog4jspark.log.file=sparkexecutor.log' 
 '-Dlog4j.configuration=file:/usr/hdp/current/spark2-client/conf/log4j.properties' 
 '-Djavax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl' -Djava.io.tmpdir=$PWD/tmp 
 '-Dspark.driver.port=34369' 
 '-Dspark.history.ui.port=18080' 
 '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.16.124.102:34369 --executor-id 9 --hostname wn8-da0001.zu4isz2uwtcuhdu3c5h0tllmhh.cx.internal.cloudapp.net --cores 3 --app-id application_1522782395512_1033 --user-class-path file:$PWD/__app__.jar 1>/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010/stdout 2>/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010/stderr"
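Note: when the same -D system property appears more than once on a java command line, the later occurrence is normally the one that takes effect (standard HotSpot launcher behavior), so the cluster default near the end of the command, file:/usr/hdp/current/spark2-client/conf/log4j.properties, is what log4j actually loads. A quick way to check this on any machine (the property name foo is purely illustrative):

java -Dfoo=custom -Dfoo=default -XshowSettings:properties -version 2>&1 | grep 'foo ='
# prints "foo = default", i.e. the later definition wins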

I also tried the following options, but with no luck:

--conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties'
--driver-java-options '-Dlog4j.configuration=file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties' 

1 Answer:

Answer (score: 0)

If you are using cluster deploy mode, you have to point to a local path on both the driver and the executors, which is the container's base (working) directory.

Try this:

--conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties'
--conf 'spark.driver.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' 

And don't forget to ship your file with:

--files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties
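
Putting the two together, a minimal sketch of the corrected submit command (only the logging-related options are shown here; the remaining --conf settings from the original command stay unchanged):

spark-submit \
--master yarn --deploy-mode cluster \
--files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties \
--conf 'spark.driver.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' \
--conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' \
--class com.abc.datalake.ingestion.DataCleansingValidation \
adl://abcdadatalakedev.azuredatalakestore.net/Intake/jar/DataValidationFrameWorkBaselineCDC.jar cat_1

The file passed via --files is copied into each YARN container's working directory under its own name, which is why the relative path file:./log4j.properties resolves on both the driver and the executors in cluster mode.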