data channel lock error while configuring flume with multiple channels


I have tried to fan out the flow from one source into two channels. I also specified different dataDir and checkpointDir properties for each channel, as in the question "channel lock error while configuring flume's multiple sources using FILE channels", and I used a multiplexing channel selector. I get the following error:


18/08/23 16:21:37 ERROR file.FileChannel: Failed to start the file channel [channel=fileChannel1_2]
java.io.IOException: Cannot lock /root/.flume/file-channel/data. The directory is already locked. [channel=fileChannel1_2]
at org.apache.flume.channel.file.Log.lock(Log.java:1169)
at org.apache.flume.channel.file.Log.<init>(Log.java:336)
at org.apache.flume.channel.file.Log.<init>(Log.java:76)
at org.apache.flume.channel.file.Log$Builder.build(Log.java:276)
at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:281)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
...



My configuration file is as follows:


agent1.sinks=hdfs-sink1_1 hdfs-sink1_2
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2

agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDir=/mnt/alpha_data/
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true

agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader=true

agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix=%basename
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false

agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDir=/mnt/beta_data/
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true

agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix=%basename
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false

agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2

agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basenameHeader
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2



Can someone suggest a solution for this?




2 Answers



Try setting a channel for the default property of the multiplexing selector:



agent1.sources.source1_1.selector.default=fileChannel1_1
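For context, the complete selector block would then look something like the following sketch (channel and header names taken from the question's configuration):

```properties
# Multiplexing selector: route events by the value of the "basenameHeader"
# header, falling back to fileChannel1_1 when no mapping matches.
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basenameHeader
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_1
```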





Thank you. When the selector falls back to the default, it replicates all the events to both channels. What I need is to multiplex the data flow into the two channels. According to the Apache Flume documentation, Flume multiplexes events according to the event header, so I should add the file names to the event header. How can I do that? Should I use a Flume interceptor? Are there any built-in filename interceptors in Flume?
– Kalani Tharaka
Aug 25 at 15:37





The fileHeader and fileHeaderKey properties should add the file name to the event header.
– Saravanan Elumalai
Aug 25 at 15:53





I used the fileHeaderKey and fileHeader properties as follows: agent1.sources.source1_1.fileHeader=true and agent1.sources.source1_1.fileHeaderKey=file. Then what should agent1.sources.source1_1.selector.header be?
– Kalani Tharaka
Aug 25 at 16:49






agent1.sources.source1_1.selector.header=file. But the header will contain the absolute file path, so the mapping should specify the file path: agent1.sources.source1_1.selector.mapping.ABSOLUTE_FILE_PATH=fileChannel1_1
– Saravanan Elumalai
Aug 25 at 17:01





I want to do the mapping using the file name, not the file path. Is it possible to do the mapping with the file name using the agent1.sources.source1_1.selector.mapping.ABSOLUTE_FILE_PATH=fileChannel1_1 approach above?
– Kalani Tharaka
Aug 25 at 17:15
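As a note for readers: per the Apache Flume documentation, the spooling directory source also offers basenameHeader and basenameHeaderKey, which store only the file's base name (no path) in a header, so the selector can match on that instead of the absolute path. A sketch, using the source name from this question:

```properties
# Put the file's base name (not the full path) into a "basename" header.
agent1.sources.source1_1.basenameHeader=true
agent1.sources.source1_1.basenameHeaderKey=basename

# Route on that header. Note that the multiplexing selector does an exact
# match on the header value, which is the full base name including any
# extension; a key of "CA" will not match a file named "CA.csv".
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
```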



The data channel lock error was corrected, but I still couldn't get the multiplexing to work. The configuration is as follows:


agent1.sinks=hdfs-sink1_1 hdfs-sink1_2 hdfs-sink1_3
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3

agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.capacity=200000
agent1.channels.fileChannel1_1.transactionCapacity=1000
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDirs=/home/Flume/alpha_data
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true

agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader=true
agent1.sources.source1_1.basenameHeaderKey=basename

agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix=%basename
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false

agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDirs=/home/Flume/beta_data
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true

agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix=%basename
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false

agent1.channels.fileChannel1_3.type=file
agent1.channels.fileChannel1_3.capacity=200000
agent1.channels.fileChannel1_3.transactionCapacity=10
agent1.channels.fileChannel1_3.checkpointDir=/home/Flume/gamma/001
agent1.channels.fileChannel1_3.dataDirs=/home/Flume/gamma_data
agent1.channels.fileChannel1_3.checkpointOnClose=true
agent1.channels.fileChannel1_3.dataOnClose=true

agent1.sinks.hdfs-sink1_3.type=hdfs
agent1.sinks.hdfs-sink1_3.hdfs.filePrefix=%basename
agent1.sinks.hdfs-sink1_3.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/KT
agent1.sinks.hdfs-sink1_3.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_3.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_3.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_3.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_3.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_3.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_3.hdfs.useLocalTimeStamp=false

agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2
agent1.sinks.hdfs-sink1_3.channel=fileChannel1_3

agent1.sources.source1_1.selector.type=replicating
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3
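A likely reason the multiplexing still does not work in this configuration: selector.type is set to replicating, and per the Flume documentation the replicating selector sends every event to all channels, ignoring the header, mapping.*, and default properties. For header-based routing the selector type has to be multiplexing, along the lines of:

```properties
# Route by the base name stored by the spooldir source's basenameHeaderKey.
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3
```

Two further caveats: the basename header holds the full file name including any extension, so a mapping key of CA only matches a file literally named "CA"; and fileChannel1_3's transactionCapacity=10 is smaller than hdfs-sink1_3's batchSize=1000, which would cause transaction failures on that channel.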






