I want to log to MapR-DB from a Spark job using log4j. I wrote a custom appender; here is my log4j.properties:
log4j.rootLogger=INFO,stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

log4j.appender.MapRDB=com.datalake.maprdblogger.Appender
log4j.logger.testest=WARN,MapRDB
It is placed in the src/main/resources directory.
Here is my main method:
object App {
  val log: Logger = org.apache.log4j.LogManager.getLogger(getClass.getName)

  def main(args: Array[String]): Unit = {
    // custom appender
    LogHelper.fillLoggerContext("dev", "test", "test", "testest", "")

    log.error("bad record.")
  }
}
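For reference, a custom log4j 1.x appender like `com.datalake.maprdblogger.Appender` would typically extend `AppenderSkeleton`. Below is a minimal sketch only; the original appender's code was not posted, and `writeToMapRDB` is a hypothetical stub standing in for the actual MapR-DB write:

```scala
import org.apache.log4j.AppenderSkeleton
import org.apache.log4j.spi.LoggingEvent

class Appender extends AppenderSkeleton {

  // Called by log4j for every event routed to this appender
  override def append(event: LoggingEvent): Unit = {
    val level   = event.getLevel.toString
    val message = event.getRenderedMessage
    // Hypothetical helper: persist the record to MapR-DB
    writeToMapRDB(level, message)
  }

  override def close(): Unit = {
    // Release any connections or resources held by the appender
  }

  // This appender builds its own records, so no layout is required
  override def requiresLayout(): Boolean = false

  private def writeToMapRDB(level: String, message: String): Unit = {
    // Stub: replace with an OJAI/MapR-DB insert in a real implementation
  }
}
```

This is not runnable on its own (it needs the log4j 1.x jar on the classpath); it only illustrates the shape such an appender usually has.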
When I run spark-submit without any extra configuration, nothing happens. It is as if my log4j.properties were not there at all.
If I deploy my log4j.properties file manually and add these options:
--conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/PATH_TO_FILE/log4j.properties
--conf spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/PATH_TO_FILE/log4j.properties
it works fine. Why doesn't it work without those options?
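For completeness, a full submit command along those lines would look like the sketch below. The class and jar names are placeholders, not from the original post; `--files` ships the properties file into each executor's working directory, which is why the executor option can refer to it by bare name:

```shell
# Ship log4j.properties to driver and executors, then point log4j at it.
# com.datalake.App and MyJob.jar are placeholder names.
spark-submit \
  --class com.datalake.App \
  --files /PATH_TO_FILE/log4j.properties \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/PATH_TO_FILE/log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:log4j.properties" \
  MyJob.jar
```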
Answer (score: 0)
From the Spark documentation for `spark.driver.extraJavaOptions` (default: none):
A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. Note that it is illegal to set maximum heap size (-Xmx) settings with this option. Maximum heap size settings can be set with spark.driver.memory in the cluster mode and through the --driver-memory command-line option in the client mode.
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, set it through the --driver-java-options command-line option or in your default properties file.

In other words, log4j is initialized when the JVM starts, using Spark's own configuration (the log4j.properties under $SPARK_HOME/conf), so a log4j.properties bundled in your application jar's src/main/resources is generally not picked up; you have to point the JVM at your file explicitly, as the extraJavaOptions flags above do.
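Per that note, in client mode the driver-side setting would be passed on the command line rather than via SparkConf. A sketch (the jar name is a placeholder):

```shell
# Client mode: the driver JVM starts immediately, so pass the option
# on the spark-submit command line instead of setting it in SparkConf.
spark-submit \
  --deploy-mode client \
  --driver-java-options "-Dlog4j.configuration=file:/PATH_TO_FILE/log4j.properties" \
  MyJob.jar
```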