Hawq

Time: 2016-06-30 10:29:04

Tags: hawq

I have a 5-node Hortonworks cluster (version 2.4.2) on which I installed HAWQ 2.0.0.

The 5 nodes are: edge, master (namenode), node1 (datanode 1), node2 (datanode 2), node3 (datanode 3).

I followed this link to install HAWQ on HDP: http://hdb.docs.pivotal.io/hdb/install/install-ambari.html

The HAWQ components are installed on these nodes:

HAWQ master - node1, HAWQ standby master - node2

HAWQ segments - node1, node2, node3

During installation, the HAWQ master, HAWQ standby master, and HAWQ segments all installed successfully, but the basic HAWQ test that the installer runs in Ambari failed:

Here is what the installer executed:

2016-06-30 00:24:22,513 - --- Check state of HAWQ cluster ---
2016-06-30 00:24:22,513 - Executing hawq status check...
2016-06-30 00:24:22,514 - Command executed: su - gpadmin -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null node1.localdomain \"source /usr/local/hawq/greenplum_path.sh && hawq state -d /data/hawq/master \" "
2016-06-30 00:24:23,343 - Output of command:
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--HAWQ instance status summary
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:------------------------------------------------------
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Master instance                                = Active
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Master standby                                 = node2.localdomain
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Standby master state                           = Standby host passive
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Total segment instance count from config file  = 3
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:------------------------------------------------------ 
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Segment Status                                    
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:------------------------------------------------------ 
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Total segments count from catalog      = 1
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Total segment valid (at master)        = 0
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Total segment failures (at master)     = 3
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Total number of postmaster.pid files missing   = 0
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--   Total number of postmaster.pid files found     = 3


2016-06-30 00:24:23,344 - --- Check if HAWQ can write and query from a table ---
2016-06-30 00:24:23,344 - Dropping ambari_hawq_test table if exists
2016-06-30 00:24:23,344 - Command executed: su - gpadmin -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null node1.localdomain \"export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \\\"DROP  TABLE IF EXISTS ambari_hawq_test;\\\" \" "
2016-06-30 00:24:23,436 - Output:
DROP TABLE

2016-06-30 00:24:23,436 - Creating table ambari_hawq_test
2016-06-30 00:24:23,436 - Command executed: su - gpadmin -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null node1.localdomain \"export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \\\"CREATE  TABLE ambari_hawq_test (col1 int) DISTRIBUTED RANDOMLY;\\\" \" "
2016-06-30 00:24:23,693 - Output:
CREATE TABLE

2016-06-30 00:24:23,693 - Inserting data to table ambari_hawq_test
2016-06-30 00:24:23,693 - Command executed: su - gpadmin -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null node1.localdomain \"export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \\\"INSERT INTO  ambari_hawq_test SELECT * FROM generate_series(1,10);\\\" \" 

--- Above we can see that DROP TABLE and CREATE TABLE executed successfully, but the INSERT did not succeed.

So I ran the INSERT manually on the HAWQ master node, i.e. node1.

These are the steps executed manually:

[root@node1 ~]# su - gpadmin
[gpadmin@node1 ~]$ psql
psql (8.4.20, server 8.2.15)
WARNING: psql version 8.4, server version 8.2.
         Some psql features might not work.
Type "help" for help.

gpadmin=#
gpadmin=# \c gpadmin
psql (8.4.20, server 8.2.15)
WARNING: psql version 8.4, server version 8.2.
         Some psql features might not work.
You are now connected to database "gpadmin".
gpadmin=# create table test(name varchar);
gpadmin=# insert into test values('vikash');

- The INSERT above threw an error after a long wait:

ERROR: failed to acquire resource from resource manager, the resource request timed out because there is no available cluster (pquery.c:804)

Additionally, the HAWQ segment logs on node1 show:

[root@node1 ambari-agent]# tail -f /data/hawq/segment/pg_log/hawq-2016-06-30_045853.csv
2016-06-30 05:10:24.522688 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 192.168.122.1"
,,,,,,,0,,"network_utils.c",210,
2016-06-30 05:10:54.603726 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 127.0.0.1",,,,
,,,0,,"network_utils.c",210,
2016-06-30 05:10:54.603769 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 2.10.1.71",,,,
,,,0,,"network_utils.c",210,
2016-06-30 05:10:54.603778 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 192.168.122.1"
,,,,,,,0,,"network_utils.c",210,
2016-06-30 05:11:24.625919 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 127.0.0.1",,,,
,,,0,,"network_utils.c",210,
2016-06-30 05:11:24.626088 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 2.10.1.71",,,,
,,,0,,"network_utils.c",210,
2016-06-30 05:11:24.626129 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 192.168.122.1"
,,,,,,,0,,"network_utils.c",210,

I also checked "gp_segment_configuration":

gpadmin=# select * from gp_segment_configuration
gpadmin-# ;
 registration_order | role | status | port  |     hostname      |  address  |            description
--------------------+------+--------+-------+-------------------+-----------+------------------------------------
                 -1 | s    | u      |  5432 | node2.localdomain | 2.10.1.72 |
                  0 | m    | u      |  5432 | node1             | node1     |
                  1 | p    | d      | 40000 | node1.localdomain | 2.10.1.71 | resource manager process was reset
(3 rows)

Note: In hawq-site.xml, the resource management type was selected from the dropdown as "STANDALONE", not "YARN".
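For reference, that Ambari dropdown corresponds to the `hawq_global_rm_type` property in hawq-site.xml; a standalone (non-YARN) setup would look roughly like the fragment below (a sketch; verify the property name and value against the HDB docs for your version):

```xml
<property>
  <name>hawq_global_rm_type</name>
  <!-- "none" = HAWQ's built-in standalone resource manager; "yarn" = delegate to YARN -->
  <value>none</value>
</property>
```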

Does anyone have any clue what is going wrong here? Thanks in advance!!!

2 Answers:

Answer 0 (score: 1)

I have run into this problem before. In that environment, every segment shared a common IP address, so check whether your segment nodes have an identical IP address. HAWQ 2.0.0 treats segments with the same IP address as a single node; that is why you have 3 segment nodes but only one segment registered in gp_segment_configuration. Remove the duplicate IP address and try again.

This issue has been fixed in the latest HAWQ code.
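A quick way to check for a shared IP across segment hosts is sketched below (the hostnames are the ones from the question, and passwordless ssh plus `hostname -I` are assumed):

```shell
# Sketch: detect IPv4 addresses that appear on more than one segment host.
SEGMENT_HOSTS="node1 node2 node3"   # adjust to your segment list

collect_ips() {
  # Print "host ip" pairs for every IPv4 address on every segment host.
  for host in $SEGMENT_HOSTS; do
    ssh "$host" "hostname -I" | tr ' ' '\n' | grep -v '^$' \
      | while read -r ip; do echo "$host $ip"; done
  done
}

find_duplicate_ips() {
  # Read "host ip" pairs on stdin; print any IP seen on more than one host.
  awk '{ seen[$2] = seen[$2] ? seen[$2] "," $1 : $1; count[$2]++ }
       END { for (ip in count) if (count[ip] > 1) print ip, seen[ip] }'
}

# Usage (commented out so the definitions can be sourced safely):
# collect_ips | find_duplicate_ips
```

Any IP this prints (such as a 192.168.122.1 appearing on all three nodes) is a candidate for the duplicate-address problem described above.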

Answer 1 (score: 0)

Thanks everyone for the replies.

The underlying OS is CentOS, running on vCloud. As suggested, I checked the IP configuration on all 3 datanodes hosting the 3 segments. The nodes did not share the same network (IP) on "eth1" and "lo", but on further investigation with ifconfig I found another configured interface: "virbr0".

This "virbr0" interface had the same IP address on every segment node, and that was causing the problem. I removed it from all the nodes, and after that the INSERT query worked.

Below is the ifconfig output; deleting "virbr0" from all segment nodes resolved the issue.

eth1      Link encap:Ethernet  HWaddr 00:50:56:01:31:26
          inet addr:2.10.1.74  Bcast:2.10.3.255  Mask:255.255.252.0
          inet6 addr: fe80::250:56ff:fe01:3126/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:426157 errors:0 dropped:0 overruns:0 frame:0
          TX packets:259592 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:361465764 (344.7 MiB)  TX bytes:216951933 (206.9 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:416 (416.0 b)  TX bytes:416 (416.0 b)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:DC:EE:00
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
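For anyone hitting the same symptom: virbr0 at 192.168.122.1 is normally created by libvirt's "default" network, so the removal step described above can be sketched as the function below (run as root on each segment node; the assumption that libvirt created the bridge is mine, so adjust if your virbr0 comes from elsewhere):

```shell
# Sketch: take down the libvirt default bridge (virbr0) on a segment
# node so every segment reports a unique set of IP addresses.
disable_virbr0() {
  # Bring the bridge down and delete it if it exists.
  if ip link show virbr0 >/dev/null 2>&1; then
    ip link set virbr0 down
    brctl delbr virbr0 2>/dev/null || ip link delete virbr0
  fi
  # Stop libvirt from recreating the bridge on the next boot;
  # ignore errors if libvirt is not installed.
  virsh net-destroy default 2>/dev/null || true
  virsh net-autostart default --disable 2>/dev/null || true
}
```

After running this on node1, node2, and node3, restarting the HAWQ segments should let all three register in gp_segment_configuration instead of collapsing into one.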