INFO hdfs.HDFSEventSink: Writer callback called

Time: 2015-02-09 06:07:21

Tags: hadoop docker flume

I have googled this error but did not find any solution.

I have a pseudo-distributed Hadoop setup with Flume, running as a dockerized application. Writing from Flume to the console works, but when I try to write to HDFS it reports that the writer callback failed.

flume.conf

 a2.sources = r1
 a2.sinks = k1
 a2.channels = c1

 a2.sources.r1.type = netcat
 a2.sources.r1.bind = localhost
 a2.sources.r1.port = 5140

 a2.sinks.k1.type = hdfs
 a2.sinks.k1.hdfs.fileType = DataStream
 a2.sinks.k1.hdfs.writeFormat = Text
 a2.sinks.k1.hdfs.path = hdfs://localhost:8020/user/root/syslog/%y-%m-%d/%H%M/%S
 a2.sinks.k1.hdfs.filePrefix = events
 a2.sinks.k1.hdfs.roundUnit = minute
 a2.sinks.k1.hdfs.useLocalTimeStamp = true

 # Use a channel which buffers events in memory
 a2.channels.c1.type = memory
 a2.channels.c1.capacity = 10000
 a2.channels.c1.transactionCapacity = 100

 # Bind the source and sink to the channel
 a2.sources.r1.channels = c1
 a2.sinks.k1.channel = c1
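
For reference, the hdfs.roundUnit setting above is normally paired with hdfs.round and hdfs.roundValue, which actually switch on the time-based rounding of the %H%M/%S escapes in hdfs.path. As a sketch only (the first two lines below are an assumption, not part of the original config), that pairing would look like:

 a2.sinks.k1.hdfs.round = true
 a2.sinks.k1.hdfs.roundValue = 1
 a2.sinks.k1.hdfs.roundUnit = minute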

Flume run command

 /usr/bin/flume-ng agent --conf-file /etc/flume-ng/conf/flume.conf --name a1 -Dflume.root.logger=INFO,console
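
To exercise the setup end to end, a test event can be pushed to the netcat source and the HDFS target directory listed afterwards (a minimal sketch, assuming nc and the hdfs client are available inside the container):

 echo "test event" | nc localhost 5140
 hdfs dfs -ls /user/root/syslog/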

All the Hadoop services are running. How can I fix this error? Any ideas?
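
Since hdfs.path points at hdfs://localhost:8020, one basic sanity check (again only a sketch, assuming the HDFS client is configured in the same container as the Flume agent) is to list that location directly:

 hdfs dfs -ls hdfs://localhost:8020/user/root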

0 Answers:

No answers.