Can I connect a NiFi Docker container to an HBase container over a Docker user-defined bridge network?

Asked: 2017-02-03 21:14:04

Tags: networking hbase apache-zookeeper apache-nifi docker-container

My Goal

Use NiFi running in an HDF Docker container to store data into HBase running in an HDP Docker container.

Progress

I am running two Docker containers: NiFi and HBase. I have configured NiFi's PutHBaseJSON processor to write data to HBase (puthbasejson_configuration.png). Here are the settings I updated on the processor:

HBase Client Service = HBase_1_1_2_ClientService
Table Name = publictrans
Row Identifier Field Name = Vehicle_ID
Row Identifier Encoding Strategy = String
Column Family = trafficpatterns
Batch Size = 25
Complex Field Strategy = Text
Field Encoding Strategy = String
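
For context on how these settings are used: PutHBaseJSON expects one flat JSON document per FlowFile, takes the field named by Row Identifier Field Name as the row key, and writes every other field as a column qualifier in the configured column family. A hypothetical FlowFile for my table (every field name except Vehicle_ID is made up for illustration) would look like:

{"Vehicle_ID": "3015", "Direction_of_Travel": "NB", "Latitude": "45.52", "Longitude": "-122.68"}

which should end up as row 3015 with columns trafficpatterns:Direction_of_Travel, trafficpatterns:Latitude, and trafficpatterns:Longitude.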

HBase Client Service

I also configured NiFi's HBase client service for that processor, so NiFi knows the IP address where ZooKeeper is running and can ask ZooKeeper for the location of the HBase Master (hbaseclientservice_configuration.png).

Controller service configuration for the HBase client:

ZooKeeper Quorum = 172.25.0.3
ZooKeeper Client Port = 2181
ZooKeeper ZNode Parent = /hbase-unsecure
HBase Client Retries = 1

Problem

The problem I am facing is that NiFi cannot establish a connection to the HBase Master. I get the following message on the hbaseclientservice: "Failed to invoke @OnEnabled method due to ... hbase.client.RetriesExhaustedException ... hbase.MasterNotRunningException ... java.net.ConnectException: Connection refused." The full error is in the hbaseMasterNotRunningException stack trace below.

Configuration I have done to troubleshoot the problem

In the HDF container I added 172.25.0.3 -> hdp.hortonworks.com to /etc/hosts. In the HDP container I added 172.25.0.2 -> hdf.hortonworks.com to the hosts file. So both containers know each other's hostnames.
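
From what I understand of Docker user-defined bridge networks, these manual hosts entries should not even be necessary if both containers are attached to the same user-defined network, because Docker's embedded DNS resolves container names on such a network. A minimal sketch of that setup (the network name and image names are placeholders, not my actual ones):

docker network create --subnet 172.25.0.0/16 hdp-hdf-net
docker run -d --name hdp.hortonworks.com --network hdp-hdf-net --ip 172.25.0.3 <hdp-image>
docker run -d --name hdf.hortonworks.com --network hdp-hdf-net --ip 172.25.0.2 <hdf-image>
# container names double as DNS names on the user-defined bridge:
docker exec -it hdf.hortonworks.com ping hdp.hortonworks.com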

When I built the HDF and HDP containers, I forwarded the ports needed by NiFi, ZooKeeper, and HBase. I checked that all of the ports HBase needs are exposed on the HDP container; the image shows all the ports the HDP container is listening on, including HBase's ports (ports_hdp_listening_on.png). Here is an image of all the ports HBase needs, which I got by filtering on the "port" keyword in Ambari (hbase_ports_needed.png).
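
By port forwarding I mean the usual -p flags at container start, roughly like the sketch below (the image name is a placeholder). As I understand it, published ports only matter for access from the Docker host; container-to-container traffic on the same bridge just needs the service to be listening on an address the other container can reach.

# ZooKeeper client port (2181), HBase Master RPC/UI (16000/16010),
# and HBase RegionServer RPC (16020)
docker run -d --name hdp.hortonworks.com \
  -p 2181:2181 -p 16000:16000 -p 16010:16010 -p 16020:16020 \
  <hdp-image>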

Ports 16000 and 16020 both look suspicious, because every other port follows the pattern :::port while these two have something else in front of them. So I checked whether I could connect from HDF to HDP with telnet 172.25.0.3 16000 and got this output:

Trying 172.25.0.3...
Connected to 172.25.0.3.
Escape character is '^]'.

So I am able to connect to the HDP container.
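
Since the :::port entries are sockets bound to the wildcard address, anything printed in front of the port suggests the service is bound to one specific interface. To see exactly which address the HBase Master is bound to, I can check inside the HDP container (netstat options may vary by image):

netstat -tlnp | grep -E '16000|16020'
# a result like 172.25.0.3:16000 instead of :::16000 or 0.0.0.0:16000 would mean
# the master is bound to a single address rather than the wildcard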

hbaseMasterNotRunningException Stack Trace

2017-01-25 22:23:03,342 ERROR [StandardProcessScheduler Thread-7] o.a.n.c.s.StandardControllerServiceNode HBase_1_1_2_ClientService[id=d3eaf393-0159-1000-ffff-ffffa95f1940] Failed to invoke @OnEnabled method due to org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
Wed Jan 25 22:23:03 UTC 2017, RpcRetryingCaller{globalStartTime=1485382983338, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
2017-01-25 22:23:03,348 ERROR [StandardProcessScheduler Thread-7] o.a.n.c.s.StandardControllerServiceNode 
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
Wed Jan 25 22:23:03 UTC 2017, RpcRetryingCaller{globalStartTime=1485382983338, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147) ~[na:na]
        at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917) ~[na:na]
        at org.apache.hadoop.hbase.client.HBaseAdmin.listTableNames(HBaseAdmin.java:413) ~[na:na]
        at org.apache.hadoop.hbase.client.HBaseAdmin.listTableNames(HBaseAdmin.java:397) ~[na:na]
        at org.apache.nifi.hbase.HBase_1_1_2_ClientService.onEnabled(HBase_1_1_2_ClientService.java:187) ~[na:na]
        at sun.reflect.GeneratedMethodAccessor568.invoke(Unknown Source) ~[na:na]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_111]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_111]
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137) ~[na:na]
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125) ~[na:na]
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70) ~[na:na]
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47) ~[na:na]
        at org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:345) ~[na:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_111]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1533) ~[na:na]
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1553) ~[na:na]
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1704) ~[na:na]
        at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38) ~[na:na]
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124) ~[na:na]
        ... 19 common frames omitted
Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223) ~[na:na]
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287) ~[na:na]
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:50918) ~[na:na]
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1564) ~[na:na]
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1502) ~[na:na]
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1524) ~[na:na]
        ... 23 common frames omitted
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_111]
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_111]
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) ~[na:na]
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) ~[na:na]
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) ~[na:na]
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:424) ~[na:na]
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:748) ~[na:na]
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920) ~[na:na]
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:889) ~[na:na]
        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1222) ~[na:na]
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213) ~[na:na]
        ... 28 common frames omitted
2017-01-25 22:23:03,348 ERROR [StandardProcessScheduler Thread-7] o.a.n.c.s.StandardControllerServiceNode Failed to invoke @OnEnabled method of HBase_1_1_2_ClientService[id=d3eaf393-0159-1000-ffff-ffffa95f1940] due to org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
Wed Jan 25 22:23:03 UTC 2017, RpcRetryingCaller{globalStartTime=1485382983338, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused

I am still working through this problem.

Has anyone set up a NiFi HDF Docker container to store data into HBase running in an HDP Docker container?

0 Answers:

There are no answers yet.