Connection refused when I use Flume to publish files to HDFS in real time

Date: 2019-07-02 16:54:01

Tags: hdfs hadoop2 flume

I am a beginner with Flume. While writing a template to learn how to use Flume to send files to HDFS in real time, I ran into a connection refused error. What I want the template to do: use Flume to collect the log file created by Hive and publish it to HDFS.

Here is my job_conf:

# Name the sources, sinks, and channels
a2.sources=r2
a2.sinks=k2
a2.channels=c2

# Source conf: specify the log file I want to watch
a2.sources.r2.type=exec
a2.sources.r2.command = tail -F /opt/module/hive/logs/hive.log
a2.sources.r2.shell=/bin/bash -c

# Sink conf
a2.sinks.k2.type = hdfs
# Connection properties
a2.sinks.k2.hdfs.path = hdfs://hadoop102/flume/%Y%m%d/%H
a2.sinks.k2.hdfs.filePrefix = hive_log-
a2.sinks.k2.hdfs.useLocalTimeStamp = true
a2.sinks.k2.hdfs.fileType = DataStream
a2.sinks.k2.hdfs.round = true
a2.sinks.k2.hdfs.roundValue = 1
a2.sinks.k2.hdfs.roundUnit = hour
a2.sinks.k2.hdfs.rollInterval = 600
a2.sinks.k2.hdfs.rollSize = 134217700
a2.sinks.k2.hdfs.batchSize = 1000
a2.sinks.k2.hdfs.rollCount = 0
a2.sinks.k2.hdfs.minBlockReplicas = 1

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
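
For reference, the agent is started roughly like this (a sketch assuming flume-ng is on the PATH and the file above is saved as job_conf; paths may differ per installation):

flume-ng agent --conf conf --conf-file job_conf --name a2 -Dflume.root.logger=INFO,console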

The error message from flume.log is below:

 03 July 2019 00:21:41,002 INFO  [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:231)  - Creating hdfs://hadoop102/flume/20190703/00/logs-.1562084496871.tmp
03 July 2019 00:21:41,132 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:443)  - HDFS IO error
java.net.ConnectException: Call From hadoop102/192.168.1.102 to hadoop102:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
        at org.apache.hadoop.ipc.Client.call(Client.java:1479)
        at org.apache.hadoop.ipc.Client.call(Client.java:1412)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy12.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy13.create(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1652)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1689)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1624)
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:776)
        at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:81)
        at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:108)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:242)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:232)
        at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:668)
        at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
        at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:665)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
        at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
        at org.apache.hadoop.ipc.Client.call(Client.java:1451)
        ... 34 more
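
Since the client is refused on hadoop102:8020, a first step is to check which address the NameNode RPC service is actually bound to. A few commands that could be run on hadoop102 for that (a sketch assuming a standard Hadoop 2.x installation with the hdfs command on the PATH):

hdfs getconf -confKey fs.defaultFS   # the default filesystem URI the HDFS client falls back to
jps                                  # confirm the NameNode process is running at all
netstat -tlnp | grep java            # list the ports the Hadoop daemons are listening on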

It seems so weird: if I make this change, I get the expected files in HDFS:

     a2.sinks.k2.hdfs.path = hdfs://hadoop102/flume/%Y%m%d/%H
to
      a2.sinks.k2.hdfs.path = /flume/%Y%m%d/%H

My guess is that when I do not specify hdfs://hadoop102, Flume loads the Hadoop configuration and picks up the cluster information from it.
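
If that guess is right, the bare path /flume/%Y%m%d/%H is resolved against fs.defaultFS from the Hadoop client configuration (core-site.xml) on Flume's classpath. A hypothetical core-site.xml fragment illustrating this; the host and port are placeholders, not values taken from the question:

<configuration>
  <property>
    <!-- URI used by HDFS clients for paths without a scheme/authority -->
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop102:9000</value>
  </property>
</configuration>

When the sink path is written as hdfs://hadoop102 without a port, the client instead falls back to the default NameNode RPC port (8020 in Hadoop 2), which matches the hadoop102:8020 seen in the stack trace.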

By the way, I have already seen this in the official Flume documentation:

hdfs.path   –   HDFS directory path (eg hdfs://namenode/flume/webdata/)

as well as the example given on the Flume site:

a1.channels = c1
a1.sinks = k1
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/%S
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute

which also does not specify the Hadoop cluster IP.

1 Answer:

Answer 0 (score: 0)

If the node running the Flume agent is part of the Hadoop ecosystem, for example it is configured as a Hadoop client machine, then you can provide the plain folder path without the scheme and NameNode information. If the node is not part of the ecosystem, you need to provide the full path, e.g. hdfs://name-node:port/flume/...
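
A minimal sketch of what that full form could look like for the sink above; the port is an assumption and must match the cluster's actual NameNode RPC port (check fs.defaultFS in core-site.xml), which is evidently not 8020 here since that connection is refused:

a2.sinks.k2.hdfs.path = hdfs://hadoop102:9000/flume/%Y%m%d/%H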