Access Spark Docker on an Azure VM from a local machine

Time: 2018-03-16 02:28:04

Tags: azure hadoop apache-spark docker ifconfig

A Spark Docker container is installed on an Azure VM (CentOS 7.2), and I want to access HDFS from my local machine (Windows).

On Windows I run curl -i -v -L http://52.234.XXX.XXX:50070/webhdfs/v1/user/helloworld.txt?op=OPEN, and the output is:

$ curl -i -v -L http://52.234.XXX.XXX:50070/webhdfs/v1/user/helloworld.txt?op=OPEN
* timeout on name lookup is not supported
*   Trying 52.234.XXX.XXX...
* TCP_NODELAY set
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 52.234.XXX.XXX (52.234.XXX.XXX) port 50070 (#0)
> GET /webhdfs/v1/user/helloworld.txt?op=OPEN HTTP/1.1
> Host: 52.234.XXX.XXX:50070
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 307 TEMPORARY_REDIRECT
< Cache-Control: no-cache
< Expires: Fri, 16 Mar 2018 02:16:37 GMT
< Date: Fri, 16 Mar 2018 02:16:37 GMT
< Pragma: no-cache
< Expires: Fri, 16 Mar 2018 02:16:37 GMT
< Date: Fri, 16 Mar 2018 02:16:37 GMT
< Pragma: no-cache
< Location: http://sandbox:50075/webhdfs/v1/user/helloworld.txt?op=OPEN&namenoderpcaddress=sandbox:9000&offset=0
< Content-Type: application/octet-stream
< Content-Length: 0
< Server: Jetty(6.1.26)
<
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connection #0 to host 52.234.XXX.XXX left intact
* Issue another request to this URL: 'http://sandbox:50075/webhdfs/v1/user/helloworld.txt?op=OPEN&namenoderpcaddress=sandbox:9000&offset=0'
* timeout on name lookup is not supported
*   Trying 10.122.118.83...
* TCP_NODELAY set
  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 16 Mar 2018 02:16:37 GMT
Date: Fri, 16 Mar 2018 02:16:37 GMT
Pragma: no-cache
Expires: Fri, 16 Mar 2018 02:16:37 GMT
Date: Fri, 16 Mar 2018 02:16:37 GMT
Pragma: no-cache
Location: http://sandbox:50075/webhdfs/v1/user/helloworld.txt?op=OPEN&namenoderpcaddress=sandbox:9000&offset=0
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26)

* connect to 10.122.118.83 port 50075 failed: Timed out
* Failed to connect to sandbox port 50075: Timed out
* Closing connection 1
curl: (7) Failed to connect to sandbox port 50075: Timed out

The public IP address of the CentOS VM is 52.234.XXX.XXX.

Is this caused by the unknown IP '10.122.118.83'? Is that the datanode's IP address? I have already opened these ports in the Azure VM network settings.

I start the Docker container with:

    docker run -it -p 8088:8088 -p 8042:8042 -p 9000:9000 -p 8087:8087 -p 50070:50070 -p 50010:50010 -p 50075:50075 -p 50475:50475 --name sparkdocker -h sandbox --network=host sequenceiq/spark:1.6.0 bash

Hadoop's fs.defaultFS is 'hdfs://sandbox:9000'. The CentOS VM itself and the other Azure machines in the same resource group can access HDFS (upload, download, read files) without any problem.

ifconfig inside the Spark Docker container:

docker0   Link encap:Ethernet  HWaddr 02:42:D9:2A:5D:BB
      inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
      UP BROADCAST MULTICAST  MTU:1500  Metric:1
      RX packets:53 errors:0 dropped:0 overruns:0 frame:0
      TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:3889 (3.7 KiB)  TX bytes:6674 (6.5 KiB)

eth0      Link encap:Ethernet  HWaddr 00:0D:3A:14:B5:C1
          inet addr:10.0.0.7  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:60543 errors:0 dropped:0 overruns:0 frame:0
          TX packets:68081 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:22930277 (21.8 MiB)  TX bytes:11271703 (10.7 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:14779 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14779 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:4032619 (3.8 MiB)  TX bytes:4032619 (3.8 MiB)

ifconfig on the CentOS VM:

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
    inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
    ether 02:42:d9:2a:5d:bb  txqueuelen 0  (Ethernet)
    RX packets 53  bytes 3889 (3.7 KiB)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 57  bytes 6674 (6.5 KiB)
    TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.7  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0d:3a:14:b5:c1  txqueuelen 1000  (Ethernet)
        RX packets 60750  bytes 23017881 (21.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 68320  bytes 11310643 (10.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 14857  bytes 4042781 (3.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14857  bytes 4042781 (3.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

1 Answer:

Answer 0 (score: 0):

If you expect to expose the remote hostname to an external network, a hostname like sandbox that only resolves to a local IP will not work. Because the various network calls between the namenode and datanodes redirect back out to external clients on a remote network, you need externally resolvable IPs or DNS records throughout the entire request.

The same goes for the YARN services, which you can see by looking at your cluster on port 8088.
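For example, an external client querying the ResourceManager REST API will also get back internal node names it cannot resolve (a sketch; the endpoint below is the standard YARN cluster-nodes API, and the address is the masked public IP from the question):

    curl http://52.234.XXX.XXX:8088/ws/v1/cluster/nodes
    # the nodeHostName fields in the response will be internal names such as sandbox,
    # which a machine outside the Azure network cannot resolve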

I believe it is a setting in core-site.xml,

    fs.default.name

which needs to be something like

    hdfs://external.namenode.fqdn:port

rather than 10.0.0.7, and in hdfs-site.xml set both to true - because in a cloud environment your hostnames are usually static while the IPs can change. Also, within the Azure network the nodes know how to communicate with each other, but outside the cluster the internal DNS names cannot be resolved.
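A minimal sketch of what this might look like, assuming "both" refers to the dfs.client.use.datanode.hostname and dfs.datanode.use.datanode.hostname properties (the answer does not name them explicitly), with a placeholder external DNS name, and using fs.defaultFS, the current name of fs.default.name:

    <!-- core-site.xml: point at an externally resolvable name instead of sandbox / 10.0.0.7 -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://namenode.westus.cloudapp.azure.com:9000</value>
    </property>

    <!-- hdfs-site.xml: redirect clients to datanodes by hostname rather than by internal IP -->
    <property>
      <name>dfs.client.use.datanode.hostname</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.datanode.use.datanode.hostname</name>
      <value>true</value>
    </property>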

If you are running in Azure, I would suggest using HDInsight rather than a single-datanode sandbox.

In any case, you don't need a remote Spark instance. You can develop locally and then deploy that Spark application to a remote YARN (or Spark Standalone) cluster. You don't need HDFS either ... Spark can read from Azure Blob Storage and run under the standalone scheduler.
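For instance, a locally built application could be submitted to the remote cluster, or pointed straight at Azure Blob Storage. A rough sketch (the jar name, class, storage account, container, access key and library versions are placeholders; wasb:// paths require the hadoop-azure and azure-storage jars on the classpath):

    # submit a locally developed app to the remote YARN cluster
    # (assumes HADOOP_CONF_DIR points at that cluster's configuration)
    spark-submit --master yarn --deploy-mode cluster --class com.example.MyApp my-app.jar

    # or skip HDFS entirely and read from Azure Blob Storage
    spark-shell --packages org.apache.hadoop:hadoop-azure:2.7.3,com.microsoft.azure:azure-storage:2.0.0 \
      --conf spark.hadoop.fs.azure.account.key.MYACCOUNT.blob.core.windows.net=MY_ACCESS_KEY
    scala> sc.textFile("wasb://MYCONTAINER@MYACCOUNT.blob.core.windows.net/user/helloworld.txt").count()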

One more suggestion: never open all the ports of an unsecured Hadoop cluster and publish its public IP on the internet. Use SSH forwarding on your end to connect into the Azure network securely.
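For example, instead of exposing 50070/50075 publicly, the WebHDFS ports could be tunnelled over SSH (a sketch; the username is a placeholder):

    # forward the namenode and datanode web ports through the Azure VM
    ssh -L 50070:localhost:50070 -L 50075:localhost:50075 azureuser@52.234.XXX.XXX

    # then, from the local machine
    curl -i -L "http://localhost:50070/webhdfs/v1/user/helloworld.txt?op=OPEN"

Note that the WebHDFS redirect still points at sandbox:50075, so for the second request to work locally, sandbox would also have to be mapped to 127.0.0.1 in the local hosts file.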
