I am having a problem logging to a JDBC appender with Log4j2 (2.1) inside Storm (1.0.1). I have been following the Microsoft HDInsight doc "Apache Storm develop Java topology". Other factors include using the Microsoft EventHubs (1.0.2) library for the EventHubsSpout configuration, which tends to cause some Maven "dependency hell" around SLF4J.
When debugging against a local cluster, I include storm-core 1.0.1 in the jar (and then mark it as provided in pom.xml when deploying the topology). I have a ConnectionFactory class for the JDBC connection, and the JDBC appender is declared in the log4j2.xml configuration file. The local cluster outputs correctly (we have successfully tried both the Console and JDBC appenders).
When the topology is submitted to the HDInsight cluster, no errors are thrown in the Storm UI, but no log statements are written to the database. What I have tried so far:
On the environment side, we have confirmed that the cluster can communicate with SQL Server. We have also confirmed that the JDBC driver works on the cluster and can talk to SQL Server.
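(For context, the driver check amounted to a standalone connectivity test along these lines; the server name, database, and credentials below are placeholders, not our real configuration.)
import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute the real server/database/credentials
        String url = "jdbc:sqlserver://<server>:1433;databaseName=<database>";
        try (Connection conn = DriverManager.getConnection(url, "<username>", "<password>")) {
            System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductName());
        }
    }
}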
Any input on why it logs successfully on the local cluster, but "fails" to log when submitted to the HDInsight cluster (given that the setup is nearly identical to the Microsoft doc, apart from using the JDBC appender instead of the Console one)?
Code snippets below (reminder: the storm-core scope of provided is commented out for the local cluster):
pom.xml:
<dependencies>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>1.0.1</version>
        <!-- keep storm out of the jar-with-dependencies -->
        <!--<scope>provided</scope>-->
        <exclusions>
            <exclusion>
                <groupId>org.apache.logging.log4j</groupId>
                <artifactId>log4j-slf4j-impl</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>log4j-over-slf4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-api</artifactId>
            </exclusion>
            <exclusion>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-classic</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>com.microsoft</groupId>
        <artifactId>eventhubs</artifactId>
        <version>1.0.2</version>
        <exclusions>
            <exclusion>
                <groupId>org.apache.log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-core</artifactId>
            </exclusion>
            <exclusion>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-classic</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>log4j-over-slf4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-api</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>2.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-dbcp2</artifactId>
        <version>2.1.1</version>
    </dependency>
    <dependency>
        <groupId>com.microsoft.sqlserver</groupId>
        <artifactId>mssql-jdbc</artifactId>
        <version>6.2.2.jre8</version>
    </dependency>
</dependencies>
<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <filtering>false</filtering>
            <includes>
                <include>log4j2.xml</include>
            </includes>
        </resource>
    </resources>
</build>
ConnectionFactory:
import java.io.IOException;
import java.net.InetAddress;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

import org.apache.commons.dbcp2.BasicDataSource;

public class ConnectionFactory {
    private interface Singleton {
        ConnectionFactory INSTANCE = new ConnectionFactory();
    }

    private final BasicDataSource dataSource;
    private static Properties dbProperties;
    private static String URL;
    private static String DRIVER;
    private static String USERNAME;
    private static String PASSWORD;

    private ConnectionFactory() {
        this.dataSource = new BasicDataSource();
    }

    public static void SetConnectionProperties(Properties properties) {
        try {
            // This property is used by the logger (MachineName column in log4j2.xml)
            System.setProperty("hostName", InetAddress.getLocalHost().getHostName());
            // Get environment properties
            dbProperties = properties;
            URL = dbProperties.getProperty("jdbc.url");
            DRIVER = dbProperties.getProperty("jdbc.driverClassName");
            USERNAME = dbProperties.getProperty("jdbc.username");
            PASSWORD = dbProperties.getProperty("jdbc.password");
            // Set connection properties
            Singleton.INSTANCE.dataSource.setUrl(URL);
            Singleton.INSTANCE.dataSource.setDriverClassName(DRIVER);
            Singleton.INSTANCE.dataSource.setUsername(USERNAME);
            Singleton.INSTANCE.dataSource.setPassword(PASSWORD);
        } catch (IOException ex) {
            // Swallowed: a failed host-name lookup leaves the data source unconfigured
        }
    }

    public static Connection getConnection() throws SQLException {
        return Singleton.INSTANCE.dataSource.getConnection();
    }
}
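For completeness, this is roughly how the connection properties get wired in before the topology is submitted; treat it as a sketch, since the properties file name ("config.properties") and the jdbc.* keys shown are assumptions that simply mirror what ConnectionFactory reads above.
import java.io.FileInputStream;
import java.util.Properties;

public class TopologyMain {
    public static void main(String[] args) throws Exception {
        // Load jdbc.url, jdbc.driverClassName, jdbc.username, jdbc.password
        Properties dbProperties = new Properties();
        try (FileInputStream in = new FileInputStream("config.properties")) {
            dbProperties.load(in);
        }

        // Must happen before the JDBC appender first asks ConnectionFactory for a connection
        ConnectionFactory.SetConnectionProperties(dbProperties);

        // ... build the TopologyBuilder and submit the topology here
    }
}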
log4j2.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration monitorInterval="60">
    <Appenders>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
        <JDBC name="databaseAppender" tableName="NotificationLog">
            <ConnectionFactory class="ConnectionFactory" method="getConnection" />
            <Column name="LocalTimestamp" pattern="%d{yyyy-MM-dd HH:mm:ss.SSS}" />
            <Column name="UtcTimestamp" pattern="%d{yyyy-MM-dd HH:mm:ss.SSS}{GMT}" />
            <Column name="Level" pattern="%p" />
            <Column name="Message" pattern="%m" />
            <Column name="MachineName" pattern="${hostName}" />
            <Column name="Exception" pattern="%throwable{full}" />
        </JDBC>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="STDOUT"/>
            <AppenderRef ref="databaseAppender"/>
        </Root>
    </Loggers>
</configuration>
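One diagnostic we can run inside a worker (an addition on our part, not part of the topology above) is to dump the appenders Log4j2 actually configured in that JVM, e.g. by calling this from a bolt's prepare(), to see whether log4j2.xml and the databaseAppender were picked up on the cluster:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;

public class Log4jDiagnostics {
    public static void dumpAppenders() {
        // Cast to the core LoggerContext to reach the active Configuration
        LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
        for (Appender appender : ctx.getConfiguration().getAppenders().values()) {
            System.out.println("Configured appender: " + appender.getName()
                    + " (" + appender.getClass().getSimpleName() + ")");
        }
    }
}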
Bolt.java:
import java.util.Map;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class Bolt extends BaseRichBolt {
    private OutputCollector collector;
    private static final Logger LOG = LogManager.getLogger(Bolt.class);

    @SuppressWarnings("rawtypes")
    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        LOG.debug("Test debug");
        LOG.info("Test Info");
        LOG.error("Test Error");
        collector.ack(input);
    }

    ...
}
Update (2/20/2018): After adding trace logging statements to the jar's main method, we were able to log to the database successfully. That means the problem appears to be isolated to logging from within the bolt. Could this be a serialization issue? If so, why would it work when running the jar on the local machine and only start having problems once the topology is deployed to the Storm cluster?
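(One experiment the serialization hypothesis suggests, shown here only as a sketch and not something we have in the topology above: drop the static logger field and obtain the logger in prepare(), after the bolt has been deserialized on the worker.)
import java.util.Map;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class LazyLoggerBolt extends BaseRichBolt {
    private OutputCollector collector;
    private transient Logger log; // never serialized with the bolt

    @SuppressWarnings("rawtypes")
    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Logger is created in the worker JVM, after deserialization
        this.log = LogManager.getLogger(LazyLoggerBolt.class);
    }

    @Override
    public void execute(Tuple input) {
        log.info("Test Info from worker");
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output fields for this test bolt
    }
}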
Update (2/23/2018): While we do not have a solution to this problem, we implemented a workaround that uses the JDBC driver directly with SQL prepared statements (i.e., we built our own logging solution).
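(A minimal sketch of what that workaround looks like; the class name DbLogger, the column subset, and the error handling are assumptions on our part that simply mirror the NotificationLog columns from log4j2.xml and reuse the ConnectionFactory shown above.)
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class DbLogger {
    private static final String INSERT_SQL =
            "INSERT INTO NotificationLog (LocalTimestamp, Level, Message, MachineName) "
            + "VALUES (?, ?, ?, ?)";

    public static void log(String level, String message, String machineName) {
        // Assumes ConnectionFactory.SetConnectionProperties(...) has already run in this JVM
        try (Connection conn = ConnectionFactory.getConnection();
             PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setString(2, level);
            ps.setString(3, message);
            ps.setString(4, machineName);
            ps.executeUpdate();
        } catch (SQLException ex) {
            // Don't let a logging failure kill the bolt
            ex.printStackTrace();
        }
    }
}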
I think the answer to my question may have to do with the fact that serialized components on the Storm cluster are shipped to/from the ZooKeepers, and for some reason Log4J does not play nicely with that. With our own "logging" solution, however, we register the factory for serialization before submitting the topology, and it appears to work (see the snippet below).
....
Config conf = new Config();
conf.registerSerialization(LoggerFactory.class);
StormSubmitter.submitTopology(topologyName, conf, builder.createTopology());
All of this is just conjecture; I would really like to understand the interplay between the topology, the bolts, and the injected libraries better before I can offer an acceptable answer to this particular problem.