Data channel lock error when configuring a sink with multiple channels

Date: 2018-08-23 15:03:02

Tags: flume-ng

I am trying to fan out the flow from a single source to two channels. I have also specified different dataDirs and checkpointDirs properties for each channel, as in the question "channel lock error while configuring flume's multiple sources using FILE channels". I am using a multiplexing channel selector, and I get the following error.

18/08/23 16:21:37 ERROR file.FileChannel: Failed to start the file channel [channel=fileChannel1_2]
java.io.IOException: Cannot lock /root/.flume/file-channel/data. The directory is already locked. [channel=fileChannel1_2]
    at org.apache.flume.channel.file.Log.lock(Log.java:1169)
    at org.apache.flume.channel.file.Log.<init>(Log.java:336)
    at org.apache.flume.channel.file.Log.<init>(Log.java:76)
    at org.apache.flume.channel.file.Log$Builder.build(Log.java:276)
    at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:281)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) .....

My configuration file is as follows.

agent1.sinks=hdfs-sink1_1 hdfs-sink1_2
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2

agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDir=/mnt/alpha_data/
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true

agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader = true

agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false

agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDir=/mnt/beta_data/
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true

agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false

agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2

agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basenameHeader
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2

Can anyone suggest a solution for this?

2 Answers:

Answer 0 (score: 0):

Try giving the multiplexing selector a channel for its default property:

agent1.sources.source1_1.selector.default = fileChannel1_1
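For context, a complete multiplexing selector stanza with a default channel would look like the sketch below (channel names taken from the question). The spooling directory source writes each file's name into a header whose key defaults to basename, and events whose header value matches no mapping.* entry fall through to the default channel instead of being dropped. Note that the question's configuration points selector.header at basenameHeader, which is the name of the source property that enables the header, not the header key itself:

agent1.sources.source1_1.selector.type = multiplexing
agent1.sources.source1_1.selector.header = basename
agent1.sources.source1_1.selector.mapping.CA = fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ = fileChannel1_2
agent1.sources.source1_1.selector.default = fileChannel1_1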

Answer 1 (score: 0):

The data channel lock error has been corrected, but multiplexing still does not work. The code is given below.
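Judging from the diff against the question's configuration, the lock error was most likely caused by the property name dataDir: the file channel's property is dataDirs (plural), so the unrecognized dataDir lines were ignored and both channels silently fell back to Flume's default data directory, ~/.flume/file-channel/data (the path named in the error message), whose lock only one of them could take. A minimal sketch of the change for one channel:

# 'dataDir' is not a file-channel property; the channel ignores it and
# falls back to the default ~/.flume/file-channel/data, which both
# channels then try to lock
agent1.channels.fileChannel1_1.dataDir=/mnt/alpha_data/

# 'dataDirs' (plural) is the recognized property, so each channel gets
# its own data directory and its own lock
agent1.channels.fileChannel1_1.dataDirs=/home/Flume/alpha_data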

agent1.sinks=hdfs-sink1_1 hdfs-sink1_2 hdfs-sink1_3
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3

agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.capacity=200000
agent1.channels.fileChannel1_1.transactionCapacity=1000
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDirs=/home/Flume/alpha_data
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true

agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader = true
agent1.sources.source1_1.basenameHeaderKey = basename

agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false

agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDirs=/home/Flume/beta_data
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true

agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false

agent1.channels.fileChannel1_3.type=file
agent1.channels.fileChannel1_3.capacity=200000
agent1.channels.fileChannel1_3.transactionCapacity=10
agent1.channels.fileChannel1_3.checkpointDir=/home/Flume/gamma/001
agent1.channels.fileChannel1_3.dataDirs=/home/Flume/gamma_data
agent1.channels.fileChannel1_3.checkpointOnClose=true
agent1.channels.fileChannel1_3.dataOnClose=true

agent1.sinks.hdfs-sink1_3.type=hdfs
agent1.sinks.hdfs-sink1_3.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_3.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/KT
agent1.sinks.hdfs-sink1_3.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_3.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_3.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_3.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_3.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_3.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_3.hdfs.useLocalTimeStamp=false

agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2
agent1.sinks.hdfs-sink1_3.channel=fileChannel1_3

agent1.sources.source1_1.selector.type=replicating
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3
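As for why multiplexing still does not happen: this configuration declares selector.type=replicating, and the replicating selector ignores the mapping.* and default properties, so every event is copied to all three channels. Header-based routing requires selector.type=multiplexing. Note also that the multiplexing selector compares the header value against the mapping keys by exact match; since the spooling directory source puts the full file name into the basename header, mapping.CA only matches a file literally named CA. A sketch of the selector block, under the assumption that the incoming files are named exactly CA and AZ:

agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3

If the file names merely start with CA or AZ, the prefix would first have to be copied into its own header (for example with a custom interceptor; the stock regex_extractor interceptor matches against the event body, not against headers) before the selector can route on it.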