Flume 1.5.0 - reading log data from a remote Linux server

Date: 2014-06-25 04:39:52

Tags: flume flume-ng

I am new to Flume. I have Flume and Hadoop installed on one server, and the logs live on other servers.

I am trying to read those logs through Flume. Here is my configuration file:

# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory

# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.type = syslogtcp
agent1.sources.avro-source1.bind = 10.209.4.224
agent1.sources.avro-source1.port = 5140

# Define a logger sink that simply logs all events it receives
# and connect it to the other end of the same channel.
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://delvmplldsst02:54310/flume/events
agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink1.hdfs.writeFormat = Text
agent1.sinks.hdfs-sink1.hdfs.batchSize = 20
agent1.sinks.hdfs-sink1.hdfs.rollSize = 0
agent1.sinks.hdfs-sink1.hdfs.rollCount = 0

# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = hdfs-sink1

#chain the different components together
agent1.sinks.hdfs-sink1.channel = ch1
agent1.sources.avro-source1.channels = ch1

I am not sure which source type to use in this case. I started the Flume agent on the other server as follows:

 bin/flume-ng agent --conf-file conf/flume.conf -f /var/log/wtmp -Dflume.root.logger=DEBUG,console -n agent1
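As an aside, in the flume-ng CLI `-f` is documented as an alias for `--conf-file`, so `-f /var/log/wtmp` does not feed the log file to the agent; it is parsed as another config-file path. A more conventional invocation (keeping the agent name and logger settings from the question's command) would look like:

```
# A typical flume-ng invocation: --conf points at the directory holding
# flume-env.sh and log4j.properties, --conf-file at the agent config.
# (-f is an alias for --conf-file, so a log path does not belong there.)
bin/flume-ng agent \
  --conf conf \
  --conf-file conf/flume.conf \
  --name agent1 \
  -Dflume.root.logger=DEBUG,console
```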

Here is the log output from the above command:

14/06/25 00:37:17 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
14/06/25 00:37:17 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:conf/flume.conf
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Added sinks: hdfs-sink1 Agent: agent1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [agent1]
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Creating channels
14/06/25 00:37:17 INFO channel.DefaultChannelFactory: Creating instance of channel ch1 type memory
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Created channel ch1
14/06/25 00:37:17 INFO source.DefaultSourceFactory: Creating instance of source avro-source1, type syslogtcp
14/06/25 00:37:17 INFO sink.DefaultSinkFactory: Creating instance of sink: hdfs-sink1, type: hdfs
14/06/25 00:37:17 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Channel ch1 connected to [avro-source1, hdfs-sink1]
14/06/25 00:37:17 INFO node.Application: Starting new configuration:{ sourceRunners:{avro-source1=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:avro-source1,state:IDLE} }} sinkRunners:{hdfs-sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@5954864a counterGroup:{ name:null counters:{} } }} channels:{ch1=org.apache.flume.channel.MemoryChannel{name: ch1}} }
14/06/25 00:37:17 INFO node.Application: Starting Channel ch1
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean.
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 started
14/06/25 00:37:17 INFO node.Application: Starting Sink hdfs-sink1
14/06/25 00:37:17 INFO node.Application: Starting Source avro-source1
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: hdfs-sink1: Successfully registered new MBean.
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink1 started
14/06/25 00:37:17 INFO source.SyslogTcpSource: Syslog TCP Source starting...

Here the process gets stuck and does not go any further. I cannot tell what is going wrong.

Can anyone help me with this?

I have not installed Flume on the server that actually holds the log files. Do I need to install Flume there as well?

Flume version used: 1.5.0; Hadoop version installed: 1.0.4

Thanks in advance.

2 Answers:

Answer 0 (score: 0)

You need to configure the other servers to forward their syslog output to your logging server. That configuration depends entirely on which syslog daemon you are running.

To me, the log output shows that the agent started correctly.

Answer 1 (score: 0)

The problem most likely comes from syslog. Your Flume agent appears to have started; it looks idle because it is not receiving any events from syslog.

Make sure your syslog daemon is actually sending events to port 5140. As for agent1.sources.avro-source1.bind, you can bind to any interface by replacing the IP with 0.0.0.0 (useful if you plan to listen to multiple servers).
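To check connectivity independently of the syslog daemon, you can push a hand-built event at the source from any machine. A minimal sketch in Python (the host and port come from the question's config; the function names are just illustrative, and the message below is deliberately minimal rather than a full RFC 3164 record — Flume's syslog source should still emit an event for it, though it may flag the format as invalid):

```python
import socket

def syslog_line(message, facility=1, severity=5):
    # Syslog priority = facility * 8 + severity; Flume's syslogtcp
    # source expects each newline-terminated line to start with <PRI>.
    pri = facility * 8 + severity
    return "<%d>%s\n" % (pri, message)

def send_test_event(host, port, message):
    # Open a plain TCP connection to the Flume source and send one
    # line; the syslogtcp source treats each received line as an event.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(syslog_line(message).encode("utf-8"))

# Example against the question's config (run from any log server):
# send_test_event("10.209.4.224", 5140, "flume connectivity test")
```

If this makes the HDFS sink write a file, the Flume side is fine and the problem is confined to the syslog forwarding setup.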

You can set this up in /etc/rsyslog.conf with a forwarding rule of the form:

*.* @hostnameofflume:flumesourceport

In your case it should be:

*.* @10.209.4.224:5140

(assuming that IP is your Flume host). Note that a single @ forwards over UDP; since your source type is syslogtcp, you would need @@ to forward over TCP.
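One step that is easy to forget: rsyslog only picks up the new forwarding rule after a reload. On an init-script system of that era, something like the following (assuming rsyslog is the daemon in use):

```
# Restart rsyslog so the new forwarding rule takes effect
sudo service rsyslog restart
```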