I've just started using Flume and need to insert some headers into the HDFS sink output.
I have this working, although the formatting is wrong and I can't control the columns.
With this configuration:
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = syslogudp
a1.sources.r1.host = 0.0.0.0
a1.sources.r1.port = 44444
a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = org.apache.flume.interceptor.HostInterceptor$Builder
a1.sources.r1.interceptors.i1.preserveExisting = false
a1.sources.r1.interceptors.i1.hostHeader = hostname
a1.sources.r1.interceptors.i2.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
a1.sources.r1.interceptors.i2.preserveExisting = false
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:9000/user/vagrant/syslog/%y-%m-%d/
a1.sinks.k1.hdfs.rollInterval = 120
a1.sinks.k1.hdfs.rollCount = 100
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.serializer = header_and_text
a1.sinks.k1.serializer.columns = timestamp hostname
a1.sinks.k1.serializer.format = CSV
a1.sinks.k1.serializer.appendNewline = true
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Apart from the serialization, the logs written to HDFS look mostly fine:
{timestamp=1415574695138, Severity=6, host=PolkaSpots, Facility=3, hostname=127.0.1.1} hostapd: wlan0-1: STA xx WPA: group key handshake completed (RSN)
How can I format the logs so they look like this:
1415574695138 127.0.1.1 hostapd: wlan0-1: STA xx WPA: group key handshake completed (RSN)
Timestamp first, then the hostname, then the syslog message body.
Answer 0 (score: 1)
The reason is that the two interceptors you configured write their values into the Flume event headers, and HeaderAndBodyTextEventSerializer serializes those headers into the body. The latter simply does this:
public void write(Event e) throws IOException {
    out.write((e.getHeaders() + " ").getBytes());
    out.write(e.getBody());
    if (appendNewline) {
        out.write('\n');
    }
}
The call to e.getHeaders() simply serializes the header map via its string representation, which produces the {key=value, ...} prefix you are seeing.
To fix this, I suggest you write your own serializer and override the write() method to format the output as tab-separated values. You then only need to point the configuration at your class:
a1.sinks.k1.serializer = com.mycompany.MySerializer
and put the jar in Flume's classpath.
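A minimal, self-contained sketch of the overridden write() logic. A real implementation would plug into Flume's serializer machinery (EventSerializer plus a Builder); here a plain header map and OutputStream stand in for the Event API so the formatting can be run on its own, and the class and method names are illustrative, not Flume's:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.LinkedHashMap;
import java.util.Map;

public class MySerializer {

    private final OutputStream out;

    public MySerializer(OutputStream out) {
        this.out = out;
    }

    // Pick out only the headers we want, in order, instead of dumping the
    // whole map; then append the raw body and a newline.
    public void write(Map<String, String> headers, byte[] body) throws IOException {
        out.write((headers.get("timestamp") + "\t").getBytes());
        out.write((headers.get("hostname") + "\t").getBytes());
        out.write(body);
        out.write('\n');
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        MySerializer serializer = new MySerializer(buf);

        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("timestamp", "1415574695138");
        headers.put("hostname", "127.0.1.1");
        serializer.write(headers,
                "hostapd: wlan0-1: STA xx WPA: group key handshake completed (RSN)"
                        .getBytes());

        // Prints the event as: timestamp, hostname, body (tab-separated).
        System.out.print(buf);
    }
}
```

With the sample event from the question this emits the timestamp, hostname, and syslog body on one line, without the {key=value, ...} header dump.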