Unable to create a sink of type HDFS in flume-ng

Date: 2012-12-03 07:38:32

Tags: hdfs flume

I want to write logs to HDFS. I set up an agent on a single node, but it does not run. Here is my configuration.


# example2.conf: a single-node Flume configuration

# Name the components on this agent
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1

# Describe/configure source1
agent1.sources.source1.type = avro
agent1.sources.source1.bind = localhost
agent1.sources.source1.port = 41414

# Use a channel which buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 10000
agent1.channels.channel1.transactionCapacity = 100

# Describe sink1
agent1.sinks.sink1.type = HDFS
agent1.sinks.sink1.hdfs.path = hdfs://dbkorando.kaist.ac.kr:9000/flume

# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
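
(For reference: by default the HDFS sink writes SequenceFiles and rolls files quite aggressively. A sketch of the extra hdfs.* properties one could add to get plain-text output and time-based rolling; the values below are illustrative only and are not part of my actual config:)

# Optional HDFS sink tuning (illustrative values, not in the original config)
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
agent1.sinks.sink1.hdfs.rollInterval = 60
agent1.sinks.sink1.hdfs.rollSize = 0
agent1.sinks.sink1.hdfs.rollCount = 0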


And the command I run is:

flume-ng agent -n agent1 -c conf -C /home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar -f conf/example2.conf -Dflume.root.logger=INFO,console
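
(As an aside: instead of passing the Hadoop jar with -C on every launch, the jar can be put on FLUME_CLASSPATH in conf/flume-env.sh, which the flume-ng script sources at startup. A sketch, assuming a standard Flume NG layout:)

# conf/flume-env.sh (sketch; path taken from the command above)
FLUME_CLASSPATH="/home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar"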

The result is:


Info: Including Hadoop libraries found via (/home/hyahn/hadoop-0.20.2/bin/hadoop) for HDFS access
+ exec /usr/java/jdk1.7.0_02/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/etc/flume-ng/conf:/usr/lib/flume-ng/lib/*:/home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar' -Djava.library.path=:/home/hyahn/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64 org.apache.flume.node.Application -n agent1 -f conf/example2.conf
2012-11-27 15:33:17,250 (main) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 1
2012-11-27 15:33:17,253 (main) [INFO - org.apache.flume.node.FlumeNode.start(FlumeNode.java:54)] Flume node starting - agent1
2012-11-27 15:33:17,257 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider.start(AbstractFileConfigurationProvider.java:67)] Configuration provider starting
2012-11-27 15:33:17,257 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.start(DefaultLogicalNodeManager.java:203)] Node manager starting
2012-11-27 15:33:17,258 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 9
2012-11-27 15:33:17,258 (conf-file-poller-0) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:195)] Reloading configuration file: conf/example2.conf
2012-11-27 15:33:17,266 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing: sink1
2012-11-27 15:33:17,266 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing: sink1
2012-11-27 15:33:17,267 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing: sink1
2012-11-27 15:33:17,268 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:902)] Added sinks: sink1 Agent: agent1
2012-11-27 15:33:17,290 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:122)] Post-validation flume configuration contains configuration for agents: [agent1]
2012-11-27 15:33:17,290 (conf-file-poller-0) [INFO - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadChannels(PropertiesFileConfigurationProvider.java:249)] Creating channels
2012-11-27 15:33:17,354 (conf-file-poller-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.<init>(MonitoredCounterGroup.java:68)] Monitored counter group for type: CHANNEL, name: channel1, registered successfully.
2012-11-27 15:33:17,355 (conf-file-poller-0) [INFO - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadChannels(PropertiesFileConfigurationProvider.java:273)] Created channel channel1
2012-11-27 15:33:17,368 (conf-file-poller-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.<init>(MonitoredCounterGroup.java:68)] Monitored counter group for type: SOURCE, name: source1, registered successfully.
2012-11-27 15:33:17,378 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:70)] Creating instance of sink: sink1, type: HDFS


As shown above, the agent stops right at the point where the sink is being created. What is the problem?

1 Answer:

Answer 0: (score: 1)

You need to open another window and send an avro event to port 41414:

bin/flume-ng avro-client --conf conf -H localhost -p 41414 -F /home/hadoop1/aaa.txt -Dflume.root.logger=DEBUG,console

Here I have a file named aaa.txt in the /home/hadoop1/ directory.

Your flume agent will read this file and send it to HDFS.
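
To verify end to end, a quick sketch (assuming the hdfs.path from the question; Flume generates its own file names under that directory, so just list it after sending):

echo "hello flume" > /home/hadoop1/aaa.txt
bin/flume-ng avro-client --conf conf -H localhost -p 41414 -F /home/hadoop1/aaa.txt
hadoop fs -ls hdfs://dbkorando.kaist.ac.kr:9000/flume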