Flume to HDFS splits a file into lots of small files

Time: 2015-02-12 14:12:54

Tags: hadoop hdfs flume flume-ng

I am trying to transfer a 700 MB log file from Flume into HDFS. I have configured the Flume agent as follows:

...
tier1.channels.memory-channel.type = memory
...
tier1.sinks.hdfs-sink.channel = memory-channel
tier1.sinks.hdfs-sink.type = hdfs
tier1.sinks.hdfs-sink.hdfs.path = hdfs://***
tier1.sinks.hdfs-sink.hdfs.fileType = DataStream
tier1.sinks.hdfs-sink.hdfs.rollSize = 0

The source is spooldir, the channel is memory, and the sink is hdfs.
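For context, a complete agent definition for this source/channel/sink layout would look roughly like the sketch below; the source name, spool directory, channel sizing, and HDFS path are placeholders rather than the values actually used:

# Sketch of a full agent definition; spool directory, HDFS path and
# channel sizing are illustrative placeholders, not the real values.
tier1.sources = spool-source
tier1.channels = memory-channel
tier1.sinks = hdfs-sink

# Spooling-directory source picking up finished log files from local disk
tier1.sources.spool-source.type = spooldir
tier1.sources.spool-source.spoolDir = /var/log/flume-spool
tier1.sources.spool-source.channels = memory-channel

# In-memory channel buffering events between source and sink
tier1.channels.memory-channel.type = memory
tier1.channels.memory-channel.capacity = 10000
tier1.channels.memory-channel.transactionCapacity = 1000

# HDFS sink writing events as plain text (DataStream)
tier1.sinks.hdfs-sink.channel = memory-channel
tier1.sinks.hdfs-sink.type = hdfs
tier1.sinks.hdfs-sink.hdfs.path = hdfs://namenode:8020/flume/logs
tier1.sinks.hdfs-sink.hdfs.fileType = DataStream
tier1.sinks.hdfs-sink.hdfs.rollSize = 0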

I also tried sending a 1 MB file, and it was split into 1000 files of 1 KB each. Another thing I noticed is that the transfer is slow; 1 MB takes about a minute. Am I doing something wrong?

1 Answer:

Answer 0 (score: 3)

You also need to deactivate the roll timeout; that is done with the following settings:

tier1.sinks.hdfs-sink.hdfs.rollCount = 0
tier1.sinks.hdfs-sink.hdfs.rollInterval = 300

Setting rollCount to 0 prevents rolling based on the number of events; rollInterval is set to 300 seconds here, and setting it to 0 instead would disable the time-based roll as well. You will have to pick which rolling mechanism you want, otherwise Flume will only close the files when the agent shuts down.
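For example, if you want rolling to be driven purely by file size instead (say a new HDFS file roughly every 128 MB; the size here is just an illustrative choice, not something from this thread), the sink properties would look like this:

# Roll only by size: a new HDFS file roughly every 128 MB of data written
tier1.sinks.hdfs-sink.hdfs.rollSize = 134217728
tier1.sinks.hdfs-sink.hdfs.rollInterval = 0
tier1.sinks.hdfs-sink.hdfs.rollCount = 0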

The defaults are as follows:

hdfs.rollInterval   30     Number of seconds to wait before rolling the current file (0 = never roll based on time interval)
hdfs.rollSize       1024   File size to trigger a roll, in bytes (0 = never roll based on file size)
hdfs.rollCount      10     Number of events written to the file before it is rolled (0 = never roll based on number of events)
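Combined with the rollSize = 0 already set in the question, the roll-related sink properties for the suggested time-based setup would be (a sketch; files roll only on the 5-minute interval, never by size or event count):

# Roll only on the 5-minute interval; size- and count-based rolls disabled
tier1.sinks.hdfs-sink.hdfs.rollSize = 0
tier1.sinks.hdfs-sink.hdfs.rollCount = 0
tier1.sinks.hdfs-sink.hdfs.rollInterval = 300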