I am trying to set up a basic Kafka-Flume-HDFS pipeline. Kafka is running, but when I start the Flume agent with
bin/flume-ng agent -n flume1 -c conf -f conf/flume-conf.properties -D flume.root.logger=INFO,console
the agent does not seem to start; the only console output I get is:
Info: Sourcing environment configuration script /opt/hadoop/flume/conf/flume-env.sh
Info: Including Hive libraries found via () for Hive access
+ exec /opt/jdk1.8.0_111/bin/java -Xmx20m -D -cp '/opt/hadoop/flume/conf:/opt/hadoop/flume/lib/*:/opt/hadoop/flume/lib/:/lib/*' -Djava.library.path= org.apache.flume.node.Application -n flume1 -f conf/flume-conf.properties flume.root.logger=INFO,console
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/flume/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/flume/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
The Flume configuration file:
flume1.sources = kafka-source-1
flume1.channels = hdfs-channel-1
flume1.sinks = hdfs-sink-1
flume1.sources.kafka-source-1.type = org.apache.flume.source.kafka.KafkaSource
flume1.sources.kafka-source-1.zookeeperConnect = localhost:2181
flume1.sources.kafka-source-1.topic = twitter_topic
flume1.sources.kafka-source-1.batchSize = 100
flume1.sources.kafka-source-1.channels = hdfs-channel-1
flume1.channels.hdfs-channel-1.type = memory
flume1.sinks.hdfs-sink-1.channel = hdfs-channel-1
flume1.sinks.hdfs-sink-1.type = hdfs
flume1.sinks.hdfs-sink-1.hdfs.writeFormat = Text
flume1.sinks.hdfs-sink-1.hdfs.fileType = DataStream
flume1.sinks.hdfs-sink-1.hdfs.filePrefix = test-events
flume1.sinks.hdfs-sink-1.hdfs.useLocalTimeStamp = true
flume1.sinks.hdfs-sink-1.hdfs.path = /tmp/kafka/twitter_topic/%y-%m-%d
flume1.sinks.hdfs-sink-1.hdfs.rollCount= 100
flume1.sinks.hdfs-sink-1.hdfs.rollSize= 0
flume1.channels.hdfs-channel-1.capacity = 10000
flume1.channels.hdfs-channel-1.transactionCapacity = 1000
Is this a configuration problem in flume-conf.properties, or am I missing something important?
EDIT
After restarting everything it looks better than before and Flume is actually doing something (the order in which HDFS, ZooKeeper, Kafka, Flume and my streaming application are started seems to matter). I now get the following exception from Flume:
java.lang.NoSuchMethodException: org.apache.hadoop.fs.LocalFileSystem.isFileClosed(org.apache.hadoop.fs.Path)
...
Answer (score: 1)
Edit the hdfs.path value so that it contains the full HDFS URI. Without a scheme, the path is resolved against the default (local) filesystem, which is why the sink ends up on LocalFileSystem and fails on isFileClosed:
flume1.sinks.hdfs-sink-1.hdfs.path = hdfs://namenode_host:port/tmp/kafka/twitter_topic/%y-%m-%d
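If you are unsure which host and port to use, they should match Hadoop's fs.defaultFS setting; assuming the hdfs command is on your PATH, you can print it with:
hdfs getconf -confKey fs.defaultFS
The value it prints (for example hdfs://localhost:8020, where host and port are placeholders for your cluster) is what the hdfs.path should start with.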
As for the logs: they are not printed to the console because of the space between -D and flume.root.logger=INFO,console. With the space, the JVM receives a bare -D and flume.root.logger=INFO,console is passed through as an extra application argument instead of a system property (you can see this at the end of the exec line in your output). Remove the space and try:
bin/flume-ng agent -n flume1 -c conf -f conf/flume-conf.properties -Dflume.root.logger=INFO,console
or read the logs from the $FLUME_HOME/logs/ directory.
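For example, assuming the default log4j.properties shipped with Flume (which writes to flume.log in the log directory; the file name may differ if your logging setup was customised), you can follow the agent's log with:
tail -f $FLUME_HOME/logs/flume.log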