Flume 1.6.0 spooling directory source with timestamp in headers

Date: 2018-01-05 09:18:58

Tags: hadoop flume flume-ng

I'm trying to create a new Flume agent with a spooldir source that puts the files into HDFS. This is my configuration file:

agent.sources = file
agent.channels = channel
agent.sinks = hdfsSink

# SOURCES CONFIGURATION
agent.sources.file.type = spooldir
agent.sources.file.channels = channel
agent.sources.file.spoolDir = /path/to/json_files

# SINKS CONFIGURATION
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.hdfs.path = /HADOOP/PATH/%Y/%m/%d/%H/

agent.sinks.hdfsSink.hdfs.filePrefix = common
agent.sinks.hdfsSink.hdfs.fileSuffix = .json
agent.sinks.hdfsSink.hdfs.rollInterval = 300
agent.sinks.hdfsSink.hdfs.rollSize = 5242880
agent.sinks.hdfsSink.hdfs.rollCount = 0
agent.sinks.hdfsSink.hdfs.maxOpenFiles = 2
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.callTimeout = 100000
agent.sinks.hdfsSink.hdfs.batchSize = 1000
agent.sinks.hdfsSink.channel = channel

# CHANNELS CONFIGURATION
agent.channels.channel.type = memory
agent.channels.channel.capacity = 10000
agent.channels.channel.transactionCapacity = 1000

The error I'm getting says Expected timestamp in the Flume event headers, but it was null. The files I'm reading contain JSON records, each of which has a field named timestamp.

Is there a way to add this timestamp to the event headers?

2 answers:

Answer 0 (score: 0)

Following up on my earlier comment, I'm sharing all the steps I followed to spool a JSON file (with the file header enabled) into an HDFS cluster with Flume, create an external Hive table over the JSON files, and later run a DML query against them -

I created flume-spool.conf:

# Flume configuration starts
erum.sources = source-1
erum.channels = file-channel-1
erum.sinks = hdfs-sink-1

erum.sources.source-1.channels = file-channel-1
erum.sinks.hdfs-sink-1.channel = file-channel-1

# Define a file channel called file-channel-1 on erum
erum.channels.file-channel-1.type = file

erum.channels.file-channel-1.capacity = 2000000
erum.channels.file-channel-1.transactionCapacity = 100000

# Define a spooldir source for erum
# (bind and port are not spooldir properties; they are leftovers from a
# netcat-style example and are ignored by this source)
erum.sources.source-1.type = spooldir
erum.sources.source-1.bind = localhost
erum.sources.source-1.port = 44444
erum.sources.source-1.inputCharset = UTF-8
erum.sources.source-1.bufferMaxLineLength = 100

# Spooldir in my case is /home/arif/practice/flume_sink
erum.sources.source-1.spoolDir = /home/arif/practice/flume_sink/
erum.sources.source-1.fileHeader = true
erum.sources.source-1.fileHeaderKey = file
erum.sources.source-1.fileSuffix = .COMPLETED

# Sink writes to flume_sink/products under HDFS
erum.sinks.hdfs-sink-1.pathManager = DEFAULT
erum.sinks.hdfs-sink-1.type = hdfs

erum.sinks.hdfs-sink-1.hdfs.filePrefix = common
erum.sinks.hdfs-sink-1.hdfs.fileSuffix = .json
erum.sinks.hdfs-sink-1.hdfs.writeFormat = Text
erum.sinks.hdfs-sink-1.hdfs.fileType = DataStream
erum.sinks.hdfs-sink-1.hdfs.path = hdfs://localhost:9000/user/arif/flume_sink/products/

erum.sinks.hdfs-sink-1.hdfs.batchSize = 1000
erum.sinks.hdfs-sink-1.hdfs.rollSize = 2684354560
erum.sinks.hdfs-sink-1.hdfs.rollInterval = 5
erum.sinks.hdfs-sink-1.hdfs.rollCount = 5000
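
A side note that is not part of the original steps: since fileHeader = true stores each event's source file path under the header named by fileHeaderKey (file here), the HDFS sink path could reference that header via %{...} substitution. A hypothetical variation (beware that the header holds the full path, so this would create nested directories under products/):

erum.sinks.hdfs-sink-1.hdfs.path = hdfs://localhost:9000/user/arif/flume_sink/products/%{file}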

Note that the hdfs.path above contains no %Y/%m/%d time escapes, so this sink never needs a timestamp header. Now we run Flume with the agent name erum:

bin/flume-ng agent -n erum -c conf -f conf/flume-spool.conf -Dflume.root.logger=DEBUG,console

Copy the products.json file into the directory configured as erum.sources.source-1.spoolDir.
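
For example, from wherever products.json lives:

cp products.json /home/arif/practice/flume_sink/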

The contents of the products.json file look like this -

{"productid":"5968dd23fc13ae04d9000001","product_name":"sildenafilcitrate","mfgdate":"20160719031109","supplier":"WisozkInc","quantity":261,"unit_cost":"$10.47"}
{"productid":"5968dd23fc13ae04d9000002","product_name":"MountainJuniperusashei","mfgdate":"20161003021009","supplier":"Keebler-Hilpert","quantity":292,"unit_cost":"$8.74"}
{"productid":"5968dd23fc13ae04d9000003","product_name":"DextromathorphanHBr","mfgdate":"20161101041113","supplier":"Schmitt-Weissnat","quantity":211,"unit_cost":"$20.53"}
{"productid":"5968dd23fc13ae04d9000004","product_name":"MeophanHBr","mfgdate":"20161101061113","supplier":"Schmitt-Weissnat","quantity":198,"unit_cost":"$18.73"}

Download hive-serdes-sources-1.0.6.jar from the following URL -

https://www.dropbox.com/s/lsjgk2zaqz8uli9/hive-serdes-sources-1.0.6.jar?dl=0
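
The add jar path used in the Hive session below suggests the jar was placed in Hive's lib directory, e.g.:

cp hive-serdes-sources-1.0.6.jar /home/arif/applications/hadoop/apache-hive-2.1.1-bin/lib/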

After spooling the JSON file into the HDFS cluster with flume-spool, we start the Hive server, log in to the Hive shell, and then run the following -

hive> add jar /home/arif/applications/hadoop/apache-hive-2.1.1-bin/lib/hive-serdes-sources-1.0.6.jar;
hive> create external table products (productid string, product_name string, mfgdate string, supplier string, quantity int, unit_cost string) 
    > row format serde 'com.cloudera.hive.serde.JSONSerDe' location '/user/arif/flume_sink/products/';
OK
Time taken: 0.211 seconds
hive> select * from products;
OK
5968dd23fc13ae04d9000001    sildenafilcitrate   20160719031109  WisozkInc   261 $10.47
5968dd23fc13ae04d9000002    MountainJuniperusashei  20161003021009  Keebler-Hilpert 292 $8.74
5968dd23fc13ae04d9000003    DextromathorphanHBr 20161101041113  Schmitt-Weissnat    211 $20.53
5968dd23fc13ae04d9000004    MeophanHBr  20161101061113  Schmitt-Weissnat    198 $18.73
Time taken: 0.291 seconds, Fetched: 4 row(s)

I went through all of these steps without any errors. I hope this helps you, thanks.

Answer 1 (score: 0)

As explained in this article: http://shzhangji.com/blog/2017/08/05/how-to-extract-event-time-in-apache-flume/

The change needed is to add an interceptor and a serializer to the source. The %Y/%m/%d/%H escape sequences in hdfs.path are what require a timestamp header on each event; a regex_extractor interceptor can pull the timestamp out of the event body, and the serializer converts it into the epoch-millis header the sink expects:

# SOURCES CONFIGURATION
agent.sources.file.type = spooldir
agent.sources.file.channels = channel
agent.sources.file.spoolDir = /path/to/json_files
agent.sources.file.interceptors = i1
agent.sources.file.interceptors.i1.type = regex_extractor
agent.sources.file.interceptors.i1.regex = <regex_for_timestamp>
agent.sources.file.interceptors.i1.serializers = s1
agent.sources.file.interceptors.i1.serializers.s1.type = org.apache.flume.interceptor.RegexExtractorInterceptorMillisSerializer
agent.sources.file.interceptors.i1.serializers.s1.name = timestamp
agent.sources.file.interceptors.i1.serializers.s1.pattern = <pattern_that_matches_your_regex>
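
As an illustration (not part of the original answer): if each JSON record carried its timestamp as, say, "timestamp":"2018-01-05 09:18:58", the two placeholders might be filled in as follows. Note the doubled backslashes, which Java properties files require:

agent.sources.file.interceptors.i1.regex = "timestamp"\\s*:\\s*"(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2})"
agent.sources.file.interceptors.i1.serializers.s1.pattern = yyyy-MM-dd HH:mm:ss

The RegexExtractorInterceptorMillisSerializer parses the captured group with the given pattern and stores the result as epoch milliseconds under the header named timestamp, which is exactly what the %Y/%m/%d/%H escapes in hdfs.path look for. Alternatively, if partitioning by ingest time rather than event time is acceptable, the interceptor can be dropped and the sink told to use the local clock:

agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true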

Thanks for pointing out that I needed to include a proper snippet in addition to the link :)