How can we resolve Mnesia inconsistent_database errors in a clustered ejabberd environment?

Date: 2016-09-26 23:02:50

Tags: ejabberd mnesia

We have an ejabberd cluster consisting of two hosts, and we run into trouble when restarting them: an inconsistent_database error is logged. However, we have not been able to determine conclusively what in the configuration or module initialization could actually cause this behavior. Deleting the Mnesia database on node1 would probably work around the problem, but for administrative reasons that is undesirable.

We would appreciate a review of the data below, feedback on which parts of the configuration could cause this behavior, and advice on how to mitigate it.

Thanks in advance.

The environment is configured as follows:

  • ejabberd version: 16.03
  • Number of hosts: 2
  • odbc_type: MySQL

Logged error:

    ** ERROR ** mnesia_event got {inconsistent_database, running_partitioned_network, other_node}
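
For context, Mnesia also delivers this as a system event, so it can be observed programmatically rather than only found in the log. A minimal sketch, assuming a hand-rolled watcher process (watch_inconsistency/0 is ours for illustration, not part of ejabberd):

    %% Subscribe the calling process to Mnesia system events and wait
    %% for the partition notification.
    watch_inconsistency() ->
        {ok, _Node} = mnesia:subscribe(system),
        receive
            {mnesia_system_event, {inconsistent_database, Context, Node}} ->
                %% Context is e.g. running_partitioned_network;
                %% Node is the peer we disagree with.
                error_logger:warning_msg(
                  "Mnesia inconsistent with ~p (~p)~n", [Node, Context])
        end.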

Repro steps:

  • Restart node1
  • Restart node2

Note: the issue does not repro if the hosts are restarted in the reverse order.
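
One thing we can check before each restart is which nodes Mnesia currently considers up, since that is the state it records at shutdown; a quick look from an attached Erlang shell (output depends on the node it is run on):

    %% All nodes known to the schema vs. nodes currently seen as running.
    mnesia:system_info(db_nodes).
    mnesia:system_info(running_db_nodes).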

Mnesia info:

Two tables appear to have different memory footprints (and possibly different contents) on the two nodes: muc_online_room and our custom table, renamed below to SCRUBBED_CUSTOM_FEATURE_SCHEMA_NAME (a comparison sketch follows the two dumps):

Node 1:

---> Processes holding locks <--- 
---> Processes waiting for locks <--- 
---> Participant transactions <--- 
---> Coordinator transactions <---
---> Uncertain transactions <--- 
---> Active tables <--- 
mod_register_ip: with 0        records occupying 299      words of mem
muc_online_room: with 348      records occupying 10757    words of mem
http_bind      : with 0        records occupying 299      words of mem
carboncopy     : with 0        records occupying 299      words of mem
oauth_token    : with 0        records occupying 299      words of mem
session        : with 0        records occupying 299      words of mem
session_counter: with 0        records occupying 299      words of mem
sql_pool       : with 10       records occupying 439      words of mem
route          : with 4        records occupying 405      words of mem
iq_response    : with 0        records occupying 299      words of mem
temporarily_blocked: with 0        records occupying 299      words of mem
s2s            : with 0        records occupying 299      words of mem
route_multicast: with 0        records occupying 299      words of mem
shaper         : with 2        records occupying 321      words of mem
access         : with 28       records occupying 861      words of mem
acl            : with 6        records occupying 459      words of mem
local_config   : with 32       records occupying 1293     words of mem
schema         : with 19       records occupying 2727     words of mem
SCRUBBED_CUSTOM_FEATURE_SCHEMA_NAME     : with 2457     records occupying 49953    words of mem
===> System info in version "4.12.5", debug level = none <===
opt_disc. Directory "SCRUBBED_LOCATION" is used.
use fallback at restart = false
running db nodes   = [SCRUBBED_NODE2,SCRUBBED_NODE1]
stopped db nodes   = [] 
master node tables = []
remote             = []
ram_copies         = [access,acl,carboncopy,http_bind,iq_response,
                      local_config,mod_register_ip,muc_online_room,route,
                      route_multicast,s2s,session,session_counter,shaper,
                      sql_pool,temporarily_blocked,SCRUBBED_CUSTOM_FEATURE_SCHEMA_NAME]
disc_copies        = [oauth_token,schema]
disc_only_copies   = []
[{'SCRUBBED_NODE1',disc_copies},
 {'SCRUBBED_NODE2',disc_copies}] = [schema,oauth_token]
[{'SCRUBBED_NODE1',ram_copies}] = [local_config,acl,access,shaper,sql_pool,
                                   mod_register_ip]
[{'SCRUBBED_NODE1',ram_copies},
 {'SCRUBBED_NODE2',ram_copies}] = [route_multicast,s2s,temporarily_blocked,
                                   iq_response,route,session_counter,session,
                                   carboncopy,http_bind,muc_online_room,
                                   SCRUBBED_CUSTOM_FEATURE_SCHEMA_NAME]
2623 transactions committed, 35 aborted, 26 restarted, 60 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok

Node 2:

mnesia:info().
---> Processes holding locks <--- 
---> Processes waiting for locks <--- 
---> Participant transactions <--- 
---> Coordinator transactions <---
---> Uncertain transactions <--- 
---> Active tables <--- 
mod_register_ip: with 0        records occupying 299      words of mem
muc_online_room: with 348      records occupying 8651     words of mem
http_bind      : with 0        records occupying 299      words of mem
carboncopy     : with 0        records occupying 299      words of mem
oauth_token    : with 0        records occupying 299      words of mem
session        : with 0        records occupying 299      words of mem
session_counter: with 0        records occupying 299      words of mem
route          : with 4        records occupying 405      words of mem
sql_pool       : with 10       records occupying 439      words of mem
iq_response    : with 0        records occupying 299      words of mem
temporarily_blocked: with 0        records occupying 299      words of mem
s2s            : with 0        records occupying 299      words of mem
route_multicast: with 0        records occupying 299      words of mem
shaper         : with 2        records occupying 321      words of mem
access         : with 28       records occupying 861      words of mem
acl            : with 6        records occupying 459      words of mem
local_config   : with 32       records occupying 1293     words of mem
schema         : with 19       records occupying 2727     words of mem
SCRUBBED_CUSTOM_FEATURE_SCHEMA_NAME     : with 2457     records occupying 38232    words of mem
===> System info in version "4.12.5", debug level = none <===
opt_disc. Directory "SCRUBBED_LOCATION" is used.
use fallback at restart = false
running db nodes   = ['SCRUBBED_NODE1','SCRUBBED_NODE2']
stopped db nodes   = [] 
master node tables = []
remote             = []
ram_copies         = [access,acl,carboncopy,http_bind,iq_response,
                      local_config,mod_register_ip,muc_online_room,route,
                      route_multicast,s2s,session,session_counter,shaper,
                      sql_pool,temporarily_blocked,SCRUBBED_CUSTOM_FEATURE_SCHEMA_NAME]
disc_copies        = [oauth_token,schema]
disc_only_copies   = []
[{'SCRUBBED_NODE1',disc_copies},
 {'SCRUBBED_NODE2',disc_copies}] = [schema,oauth_token]
[{'SCRUBBED_NODE1',ram_copies},
 {'SCRUBBED_NODE2',ram_copies}] = [route_multicast,s2s,temporarily_blocked,
                                   iq_response,route,session_counter,session,
                                   carboncopy,http_bind,muc_online_room,
                                   SCRUBBED_CUSTOM_FEATURE_SCHEMA_NAME]
[{'SCRUBBED_NODE2',ram_copies}] = [local_config,acl,access,shaper,sql_pool,
                                   mod_register_ip]
2998 transactions committed, 18 aborted, 0 restarted, 99 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok
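
To quantify the discrepancy without eyeballing the full dumps, the two suspect tables can be compared across nodes from a single shell. A minimal sketch using mnesia:table_info/2 and rpc:call/4 (the node and table names are the scrubbed placeholders from above):

    %% For each suspect table: local record count and memory words,
    %% followed by the same figures fetched from the peer node.
    Tabs = [muc_online_room, 'SCRUBBED_CUSTOM_FEATURE_SCHEMA_NAME'],
    [{Tab,
      {local, mnesia:table_info(Tab, size), mnesia:table_info(Tab, memory)},
      {remote, rpc:call('SCRUBBED_NODE2', mnesia, table_info, [Tab, size]),
               rpc:call('SCRUBBED_NODE2', mnesia, table_info, [Tab, memory])}}
     || Tab <- Tabs].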

1 Answer:

Answer 0 (score: 0)


"Note: the issue does not repro if the hosts are restarted in the reverse order."

The inconsistency report is there to protect your data. If you stopped the cluster nodes in one order, you must restart them in the reverse order. Otherwise, the node that was stopped first has recorded that other nodes were still alive and therefore hold more recent information, and Mnesia flags the inconsistency rather than risk losing data.
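
If the error has already occurred, a common mitigation (a sketch, not an official ejabberd procedure; back up both nodes first) is to tell Mnesia, on the node whose data you trust, to load its tables from itself at the next start instead of refusing to join the partitioned peer:

    %% On the trusted node (node names are the placeholders from the question):
    mnesia:set_master_nodes(['SCRUBBED_NODE1']).

    %% Or limit the override to a single diverged table:
    mnesia:set_master_nodes(muc_online_room, ['SCRUBBED_NODE1']).

Be aware that anything written only on the other node is then discarded for those tables, which is exactly the data loss the inconsistency check warns about; the cleanest fix remains restarting the nodes in the reverse of their stop order.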