We are starting to consolidate the event log data from our applications by publishing the messages to Kafka topics. While we could write to Kafka directly from the application, we chose to treat this as a general problem and use a Flume agent. That gives us some flexibility: if we want to capture something else from a server, we can simply use a different source and publish to a different Kafka topic.
We created a Flume agent config file to tail the log and publish to a Kafka topic:
# Name the agent's components
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1
# Source: tail the log file via an exec command
tier1.sources.source1.type = exec
tier1.sources.source1.command = tail -F /var/log/some_log.log
tier1.sources.source1.channels = channel1
# Channel: buffer events in memory between source and sink
tier1.channels.channel1.type = memory
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000
# Sink: publish events to the Kafka topic
tier1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
tier1.sinks.sink1.topic = some_log
tier1.sinks.sink1.brokerList = hadoop01:9092,hadoop02.com:9092,hadoop03.com:9092
tier1.sinks.sink1.channel = channel1
tier1.sinks.sink1.batchSize = 20
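We start the agent with the stock flume-ng launcher; a minimal invocation, assuming the file above is saved as tier1.conf (paths are illustrative):
flume-ng agent --conf conf --conf-file tier1.conf --name tier1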
Unfortunately, the messages themselves do not identify the host that produced them. If we have an application running on multiple hosts and an error occurs, we have no way to determine which host produced the message.
I notice that if Flume wrote directly to HDFS, we could use a Flume interceptor to write to a specific HDFS location. While we could do something similar with Kafka, i.e. create a new topic per server, that could become unwieldy: we would end up with thousands of topics.
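For reference, the interceptor we had in mind is Flume's built-in host interceptor, which stamps each event with a header carrying the agent's hostname; a minimal sketch against the source above (property names per the Flume docs):
tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = host
tier1.sources.source1.interceptors.i1.useIP = false
tier1.sources.source1.interceptors.i1.hostHeader = hostname
That only sets a Flume event header, though, which leads to the question: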
When Flume publishes to a Kafka topic, does it attach or include the hostname of the machine that produced the message?
Answer 0 (score: 2)
You can create a custom TCP source that reads the client address and adds it to the event headers. (The imports and class skeleton below are filled in so the snippet stands on its own; the full project is on GitHub, linked below.)
package com.vishnu.flume.source;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDrivenSource;
import org.apache.flume.channel.ChannelProcessor;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.AbstractSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CustomFlumeTCPSource extends AbstractSource implements EventDrivenSource, Configurable {
    private static final Logger logger = LoggerFactory.getLogger(CustomFlumeTCPSource.class);
    private int port;
    private int buffer;
    private ServerSocket serverSocket;
    private Socket clientSocket;
    private BufferedReader receiveBuffer;

    @Override
    public void configure(Context context) {
        port = context.getInteger("port");
        buffer = context.getInteger("buffer");
        try {
            serverSocket = new ServerSocket(port);
            logger.info("FlumeTCP source initialized");
        } catch (Exception e) {
            logger.error("FlumeTCP source failed to initialize", e);
        }
    }

    @Override
    public void start() {
        // For brevity this reads on the calling thread; a production source would use its own thread.
        try {
            clientSocket = serverSocket.accept();
            receiveBuffer = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
            logger.info("Connection established with client : " + clientSocket.getRemoteSocketAddress());
            final ChannelProcessor channel = getChannelProcessor();
            // Record the client's address once; it is attached as a header to every event.
            final Map<String, String> headers = new HashMap<String, String>();
            headers.put("hostname", clientSocket.getRemoteSocketAddress().toString());
            String line;
            List<Event> events = new ArrayList<Event>();
            while ((line = receiveBuffer.readLine()) != null) {
                Event event = EventBuilder.withBody(line, Charset.defaultCharset(), headers);
                events.add(event);
                if (events.size() == buffer) {
                    channel.processEventBatch(events);
                    events.clear(); // reset the batch once it has been handed to the channel
                }
            }
            if (!events.isEmpty()) {
                channel.processEventBatch(events); // flush any partial batch at end of stream
            }
        } catch (Exception e) {
            logger.error("FlumeTCP source error", e);
        }
        super.start();
    }
}
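For the agent to load this class, the compiled jar has to be on Flume's classpath; a minimal sketch assuming the standard plugins.d layout (the jar name is illustrative):
mkdir -p $FLUME_HOME/plugins.d/custom-tcp-source/lib
cp custom-flume-tcp-source.jar $FLUME_HOME/plugins.d/custom-tcp-source/lib/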
The flume-conf.properties can then be configured as:
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'agent'
agent.sources = CustomTcpSource
agent.channels = memoryChannel
agent.sinks = loggerSink
# For each one of the sources, the type is defined
agent.sources.CustomTcpSource.type = com.vishnu.flume.source.CustomFlumeTCPSource
agent.sources.CustomTcpSource.port = 4443
agent.sources.CustomTcpSource.buffer = 1
# The channel can be defined as follows.
agent.sources.CustomTcpSource.channels = memoryChannel
# Each sink's type must be defined
agent.sinks.loggerSink.type = logger
#Specify the channel the sink should use
agent.sinks.loggerSink.channel = memoryChannel
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
# Other config values specific to each type of channel (sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 100
I sent a test message to try it out, and it looked like this:
Event: { headers:{hostname=/127.0.0.1:50999} body: 74 65 73 74 20 6D 65 73 73 61 67 65 test message }
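A message like that can be sent with netcat, assuming the agent is listening on port 4443 as configured above:
echo "test message" | nc 127.0.0.1 4443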
I have uploaded the project on github.
Answer 1 (score: 1)
If you are using an exec source, nothing prevents you from running a smart command that prepends the hostname to the content of the log file.
Note: if the command uses things like pipes, you also need to specify the shell, like this (sed's --unbuffered flag keeps it from holding lines back while the pipe is being tailed):
tier1.sources.source1.type = exec
tier1.sources.source1.shell = /bin/sh -c
tier1.sources.source1.command = tail -F /var/log/auth.log | sed --unbuffered "s/^/$(hostname) /"
The messages then look like this:
frb.hi.inet 2015-11-17 08:39:39.432 INFO [...]
with frb.hi.inet being my hostname.
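Either way, what actually lands in Kafka can be checked with the stock console consumer (topic and brokers taken from the question; newer clients take --bootstrap-server, older 0.8.x ones take --zookeeper):
kafka-console-consumer.sh --bootstrap-server hadoop01:9092 --topic some_log --from-beginning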