Copying data into HDFS takes too long when rolling by file size

Asked: 2016-11-10 12:19:30

Tags: hadoop flume

I have a use case where I want to copy remote files into HDFS using Flume. I also want the copied files to align with the HDFS block size (128 MB / 256 MB). The total size of the remote data is 33 GB.

I am using an Avro source and sink to copy the remote data into HDFS, and on the sink side I roll files by size (128/256 MB). However, to copy a file from the remote machine and store it in HDFS (at a file size of 128/256 MB), Flume takes 2 minutes on average.

Flume configuration — agent with Avro sink (remote machine):

### Agent1 - Spooling Directory Source and File Channel, Avro Sink  ###
# Name the components on this agent
Agent1.sources = spooldir-source  
Agent1.channels = file-channel
Agent1.sinks = avro-sink

# Describe/configure Source
Agent1.sources.spooldir-source.type = spooldir
Agent1.sources.spooldir-source.spoolDir =/home/Benchmarking_Simulation/test


# Describe the sink
Agent1.sinks.avro-sink.type = avro
Agent1.sinks.avro-sink.hostname = xx.xx.xx.xx   #IP Address destination machine
Agent1.sinks.avro-sink.port = 50000

#Use a channel which buffers events in file
Agent1.channels.file-channel.type = file
Agent1.channels.file-channel.checkpointDir = /home/Flume_CheckPoint_Dir/
Agent1.channels.file-channel.dataDirs = /home/Flume_Data_Dir/
Agent1.channels.file-channel.capacity = 10000000
Agent1.channels.file-channel.transactionCapacity=50000

# Bind the source and sink to the channel
Agent1.sources.spooldir-source.channels = file-channel
Agent1.sinks.avro-sink.channel = file-channel

Agent with Avro source (machine running HDFS):

### Agent1 - Avro Source and File Channel, Avro Sink  ###
# Name the components on this agent
Agent1.sources = avro-source1  
Agent1.channels = file-channel1
Agent1.sinks = hdfs-sink1

# Describe/configure Source
Agent1.sources.avro-source1.type = avro
Agent1.sources.avro-source1.bind = xx.xx.xx.xx
Agent1.sources.avro-source1.port = 50000

# Describe the sink
Agent1.sinks.hdfs-sink1.type = hdfs
Agent1.sinks.hdfs-sink1.hdfs.path =/user/Benchmarking_data/multiple_agent_parallel_1
Agent1.sinks.hdfs-sink1.hdfs.rollInterval = 0
Agent1.sinks.hdfs-sink1.hdfs.rollSize = 130023424
Agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
Agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
Agent1.sinks.hdfs-sink1.hdfs.batchSize = 50000
Agent1.sinks.hdfs-sink1.hdfs.txnEventMax = 40000
Agent1.sinks.hdfs-sink1.hdfs.threadsPoolSize=1000
Agent1.sinks.hdfs-sink1.hdfs.appendTimeout = 10000
Agent1.sinks.hdfs-sink1.hdfs.callTimeout = 200000


#Use a channel which buffers events in file
Agent1.channels.file-channel1.type = file
Agent1.channels.file-channel1.checkpointDir = /home/Flume_Check_Point_Dir
Agent1.channels.file-channel1.dataDirs = /home/Flume_Data_Dir
Agent1.channels.file-channel1.capacity = 100000000
Agent1.channels.file-channel1.transactionCapacity=100000


# Bind the source and sink to the channel
Agent1.sources.avro-source1.channels = file-channel1
Agent1.sinks.hdfs-sink1.channel = file-channel1
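One detail worth checking in the sink settings above: `hdfs.rollSize = 130023424` works out to exactly 124 MB, slightly under the 128 MB block size (my reading of the value, not stated in the question), presumably so a rolled file never spills into a second HDFS block:

```python
# Compare the configured rollSize with the 128 MB HDFS block size.
roll_size = 130023424                 # bytes, from the sink config
block_size = 128 * 1024 * 1024        # 134217728 bytes

print(roll_size / 1024 / 1024)        # 124.0 MB
print(block_size - roll_size)         # 4194304 bytes (4 MB) of headroom
```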

The network connection between the two machines is 686 Mbps.
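For context, a rough back-of-envelope calculation (my own numbers, assuming the link is the only bottleneck) shows how far the observed 2 minutes per file is from the theoretical minimum:

```python
# Theoretical transfer times on a 686 Mbps link, ignoring all overhead.
link_mbps = 686
mb_per_s = link_mbps / 8                  # ~85.75 MB/s

roll_mb = 128
print(roll_mb / mb_per_s)                 # ~1.5 s per 128 MB file

total_gb = 33
print(total_gb * 1024 / mb_per_s / 60)    # ~6.6 minutes for all 33 GB
```

So at wire speed a 128 MB file should move in under two seconds; two minutes per file suggests the bottleneck is in the pipeline, not the network.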

Can someone help me confirm whether something is wrong with this configuration, or suggest an alternative configuration, so that the copy does not take so long?

1 Answer:

Answer 0 (score: 1)

Both agents use a file channel, so the data is written to disk twice before it is written to HDFS. You could try using a memory channel for each agent to see whether performance improves.
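As a sketch of that suggestion, each agent's file channel could be swapped for a memory channel like this (capacity values are illustrative, not tuned; note that unlike the file channel, a memory channel loses buffered events if the agent crashes):

```properties
# Replace the file channel with a memory channel (first agent shown;
# the HDFS-side agent would change the same way)
Agent1.channels = mem-channel
Agent1.channels.mem-channel.type = memory
Agent1.channels.mem-channel.capacity = 1000000
Agent1.channels.mem-channel.transactionCapacity = 50000

# Re-bind source and sink to the new channel
Agent1.sources.spooldir-source.channels = mem-channel
Agent1.sinks.avro-sink.channel = mem-channel
```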