I'm thinking about building something with big data. Ideally, what I'd like to do is:
Take a .csv file, feed it into Flume, publish it to Kafka, run n ETL steps and write the results back to Kafka, then move the data from Kafka through Flume into HDFS. Once the data is in HDFS, I want to run MapReduce jobs or some Hive queries and then chart whatever information I want. How do I get a .csv file into Flume and have it delivered to Kafka? I have this configuration, but I'm not sure it works:
myagent.sources = r1
myagent.sinks = k1
myagent.channels = c1
myagent.sources.r1.type = spooldir
myagent.sources.r1.spoolDir = /home/xyz/source
myagent.sources.r1.fileHeader = true
myagent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
myagent.channels.c1.type = memory
myagent.channels.c1.capacity = 1000
myagent.channels.c1.transactionCapacity = 100
myagent.sources.r1.channels = c1
myagent.sinks.k1.channel = c1
Any help or advice? And if this configuration is correct, how do I move forward from there?
Thanks, everyone!!
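For the "n ETL steps" in the middle of the pipeline, each step would consume CSV-line events from one Kafka topic, transform them, and publish the results to another topic. Below is a minimal, Kafka-free sketch of what one such transform stage could look like; the field layout and the filter/normalize logic are hypothetical, purely to illustrate the shape of a per-event transform:

```python
import csv
import io
from typing import Optional

def etl_transform(csv_line: str) -> Optional[str]:
    """One hypothetical ETL step applied to a single CSV-line event.

    Parses the line, drops blank records, normalizes the first field
    to lowercase, and re-serializes. Returns None to filter the event.
    """
    row = next(csv.reader([csv_line]))
    if not row or all(not field.strip() for field in row):
        return None  # filter out blank records
    row[0] = row[0].strip().lower()  # normalize a key field
    out = io.StringIO()
    csv.writer(out, lineterminator="").writerow(row)
    return out.getvalue()

# In the real pipeline these lines would arrive as Kafka messages.
events = ["Alice,30,NY", "  ,  ,", "BOB,25,LA"]
transformed = [t for e in events if (t := etl_transform(e)) is not None]
```

In production the same function would sit between a Kafka consumer and producer (e.g. via the `kafka-python` or `confluent-kafka` client libraries), applied to each message's value.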
Answer 0 (score: 0)
Your sink configuration is incomplete. Try:
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = mytopic
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1
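(Note the answer uses `a1` as the agent name while your config uses `myagent`; the property prefixes must match whatever name you pass to Flume at startup.) Once the sink has a topic and broker list, you can start the agent and verify the Kafka side from another terminal. A sketch, assuming a standard Flume and Kafka installation on the local machine; the config path is an assumption, and older Kafka console consumers take `--zookeeper localhost:2181` instead of `--bootstrap-server`:

```shell
# Start the Flume agent; --name must match the property prefix
# used in the config file (here: myagent).
flume-ng agent \
  --conf conf \
  --conf-file conf/myagent.conf \
  --name myagent \
  -Dflume.root.logger=INFO,console

# In another terminal: confirm events arrive on the topic.
# The console consumer ships with the Kafka distribution.
kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic mytopic \
  --from-beginning
```

If events show up in the console consumer as you drop files into the spool directory, the Flume-to-Kafka leg of the pipeline is working, and you can move on to the ETL and HDFS stages.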