Completely unbalanced DC after bootstrapping a new node

Date: 2014-05-15 17:15:02

Tags: cassandra datastax-enterprise nodetool

I just added a new node to my Cassandra DC. Previously, my topology was as follows:

  1. DC Cassandra: 1 node
  2. DC Solr: 5 nodes

When I bootstrapped the second node for the Cassandra DC, I noticed that the total bytes to stream was almost as large as the existing node's load (916 GB to stream; the existing Cassandra node carries a load of 956 GB). Nevertheless, I let the bootstrap proceed. It finished a few hours ago, and now my fear is confirmed: the Cassandra DC is completely unbalanced.
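For reference, the streaming progress during the bootstrap can be watched with nodetool; a minimal sketch (plain Cassandra 2.0 commands):

    # On the joining node: active streams and bytes transferred per session
    nodetool netstats

    # From any node: the joining node should show as UJ (Up/Joining)
    nodetool status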

Nodetool status shows the following:

    Datacenter: Solr
    ================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address                                        Load       Owns (effective)  Host ID                               Token                                    Rack
    UN  solr node4                                     322.9 GB   40.3%             30f411c3-7419-4786-97ad-395dfc379b40  -8998044611302986942                     rack1
    UN  solr node3                                     233.16 GB  39.7%             c7db42c6-c5ae-439e-ab8d-c04b200fffc5  -9145710677669796544                     rack1
    UN  solr node5                                     252.42 GB  41.6%             2d3dfa16-a294-48cc-ae3e-d4b99fbc947c  -9004172260145053237                     rack1
    UN  solr node2                                     245.97 GB  40.5%             7dbbcc88-aabc-4cf4-a942-08e1aa325300  -9176431489687825236                     rack1
    UN  solr node1                                     402.33 GB  38.0%             12976524-b834-473e-9bcc-5f9be74a5d2d  -9197342581446818188                     rack1
    Datacenter: Cassandra
    =====================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address                                        Load       Owns (effective)  Host ID                               Token                                    Rack
    UN  cs node2                                       705.58 GB  99.4%             fa55e0bb-e460-4dc1-ac7a-f71dd00f5380  -9114885310887105386                     rack1
    UN  cs node1                                      1013.52 GB  0.6%              6ab7062e-47fe-45f7-98e8-3ee8e1f742a4  -3083852333946106000                     rack1
    

Note the 'Owns' column for the Cassandra DC: node2 owns 99.4% while node1 owns 0.6% (even though node2 carries a smaller 'Load' than node1). I expected them to own 50% each, but this is what I got. I have no idea what caused this. What I do remember is that when I started the new node's bootstrap, I was running a full repair in Solr node1. As of now the repair is still running (I think it actually restarted when the new node finished bootstrapping).
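For a per-keyspace view, effective ownership and replica placement can be spot-checked like this (my_ks, my_cf, and some_key are placeholders for my actual names):

    # Effective ownership for a specific keyspace
    nodetool status my_ks

    # Which node holds the replica for a given partition key
    nodetool getendpoints my_ks my_cf some_key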

How do I fix this? (Repair?)

Is it safe to bulk-load new data while the Cassandra DC is in this state?
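The bulk load I have in mind would go through sstableloader, which streams each sstable to the replicas that own its token ranges, so it would follow the current skewed ownership; a minimal sketch with a placeholder host and path:

    # Stream sstables to whichever nodes currently own the ranges
    sstableloader -d cs-node1 /path/to/my_ks/my_cf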

Some additional information:

  1. DSE 4.0.3 (Cassandra 2.0.7)
  2. NetworkTopologyStrategy
  3. RF 1 in the Cassandra DC; RF 2 in the Solr DC
  4. DCs auto-assigned by DSE
  5. Vnodes enabled (a quick config check is sketched right after this list)
  6. The new node's configuration was modeled after the existing nodes' configuration, so it should be more or less correct
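A quick way to confirm the vnode settings on each node (the config path is an assumption for a DSE package install):

    # num_tokens should be 256 and initial_token left unset on every node
    grep -E '^(num_tokens|initial_token)' /etc/dse/cassandra/cassandra.yaml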
EDIT

It turns out I can't run cleanup in cs-node1 either. I get the following exception:

      Exception in thread "main" java.lang.AssertionError: [SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-18509-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-18512-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38320-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38325-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38329-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38322-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38330-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38331-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38321-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38323-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38344-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38345-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38349-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38348-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38346-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-13913-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-13915-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38389-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-39845-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38390-Data.db')]
          at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2115)
          at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2112)
          at org.apache.cassandra.db.ColumnFamilyStore.runWithCompactionsDisabled(ColumnFamilyStore.java:2094)
          at org.apache.cassandra.db.ColumnFamilyStore.markAllCompacting(ColumnFamilyStore.java:2125)
          at org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:214)
          at org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
          at org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1105)
          at org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2220)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          at java.lang.reflect.Method.invoke(Method.java:606)
          at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
          at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          at java.lang.reflect.Method.invoke(Method.java:606)
          at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
          at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
          at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
          at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
          at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
          at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
          at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
          at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
          at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
          at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
          at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
          at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
          at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          at java.lang.reflect.Method.invoke(Method.java:606)
          at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
          at sun.rmi.transport.Transport$1.run(Transport.java:177)
          at sun.rmi.transport.Transport$1.run(Transport.java:174)
          at java.security.AccessController.doPrivileged(Native Method)
          at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
          at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
          at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
          at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at java.lang.Thread.run(Thread.java:745)
      

EDIT

Nodetool status output (without a keyspace):

      Note: Ownership information does not include topology; for complete information, specify a keyspace
      Datacenter: Solr
      ================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --  Address                                        Load       Owns   Host ID                               Token                                    Rack
      UN  solr node4                                     323.78 GB  17.1%  30f411c3-7419-4786-97ad-395dfc379b40  -8998044611302986942                     rack1
      UN  solr node3                                     236.69 GB  17.3%  c7db42c6-c5ae-439e-ab8d-c04b200fffc5  -9145710677669796544                     rack1
      UN  solr node5                                     256.06 GB  16.2%  2d3dfa16-a294-48cc-ae3e-d4b99fbc947c  -9004172260145053237                     rack1
      UN  solr node2                                     246.59 GB  18.3%  7dbbcc88-aabc-4cf4-a942-08e1aa325300  -9176431489687825236                     rack1
      UN  solr node1                                     411.25 GB  13.9%  12976524-b834-473e-9bcc-5f9be74a5d2d  -9197342581446818188                     rack1
      Datacenter: Cassandra
      =====================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --  Address                                        Load       Owns   Host ID                               Token                                    Rack
      UN  cs node2                                       709.64 GB  17.2%  fa55e0bb-e460-4dc1-ac7a-f71dd00f5380  -9114885310887105386                     rack1
      UN  cs node1                                      1003.71 GB  0.1%   6ab7062e-47fe-45f7-98e8-3ee8e1f742a4  -3083852333946106000                     rack1
      

Cassandra yaml from node1: https://www.dropbox.com/s/ptgzp5lfmdaeq8d/cassandra.yaml (it differs from node2's only in listen_address and commitlog_directory)

Regarding CASSANDRA-6774: my case is a bit different, because I did not stop a previous cleanup. Although I think I have now taken the wrong route by starting a scrub (still in progress) instead of restarting the node first, as their suggested workaround does.
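For the record, here is the order I understand the workaround to suggest versus what I actually ran (a sketch; the keyspace and table names are placeholders):

    # Workaround order from CASSANDRA-6774: restart the node first, then retry
    nodetool cleanup my_ks

    # What I ran instead: an online scrub of the affected table
    nodetool scrub my_ks my_cf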

UPDATE (2014/04/19):

nodetool cleanup still fails with the assertion error after doing the following:

  1. A full scrub of the keyspace
  2. A full cluster restart

I am now running a full repair of the keyspace in cs-node1.
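By "full repair" I mean the default keyspace-wide repair (my_ks is a placeholder):

    # Repair every token range of the keyspace held by this node
    nodetool repair my_ks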

UPDATE (2014/04/20):

Any attempt to repair the main keyspace in cs-node1 fails with:

    Lost notification. You should check server log for repair status of keyspace
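The message itself only says to check the server log; a minimal way to do that (the log path is an assumption, it varies by install):

    # Find the repair session outcome in the server log
    grep -i repair /var/log/cassandra/system.log | tail -50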

I just saw this (output of dsetool ring):

        Note: Ownership information does not include topology, please specify a keyspace.
        Address          DC           Rack         Workload         Status  State    Load             Owns                 VNodes
        solr-node1       Solr         rack1        Search           Up      Normal   447 GB           13.86%               256
        solr-node2       Solr         rack1        Search           Up      Normal   267.52 GB        18.30%               256
        solr-node3       Solr         rack1        Search           Up      Normal   262.16 GB        17.29%               256
        cs-node2         Cassandra    rack1        Cassandra        Up      Normal   808.61 GB        17.21%               256
        solr-node5       Solr         rack1        Search           Up      Normal   296.14 GB        16.21%               256
        solr-node4       Solr         rack1        Search           Up      Normal   340.53 GB        17.07%               256
        cd-node1         Cassandra    rack1        Cassandra        Up      Normal   896.68 GB        0.06%                256
        Warning:  Node cs-node2 is serving 270.56 times the token space of node cs-node1, which means it will be using 270.56 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management
        Warning:  Node solr-node2 is serving 1.32 times the token space of node solr-node1, which means it will be using 1.32 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management
        

Keyspace-aware:

        Address          DC           Rack         Workload         Status  State    Load             Effective-Ownership  VNodes
        solr-node1       Solr         rack1        Search           Up      Normal   447 GB           38.00%               256
        solr-node2       Solr         rack1        Search           Up      Normal   267.52 GB        40.47%               256
        solr-node3       Solr         rack1        Search           Up      Normal   262.16 GB        39.66%               256
        cs-node2         Cassandra    rack1        Cassandra        Up      Normal   808.61 GB        99.39%               256
        solr-node5       Solr         rack1        Search           Up      Normal   296.14 GB        41.59%               256
        solr-node4       Solr         rack1        Search           Up      Normal   340.53 GB        40.28%               256
        cs-node1         Cassandra    rack1        Cassandra        Up      Normal   896.68 GB        0.61%                256
        Warning:  Node cd-node2 is serving 162.99 times the token space of node cs-node1, which means it will be using 162.99 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management
        

This is a strong indicator that something went wrong with the way cs-node2 bootstrapped (as I described at the beginning of the post).

1 Answer:

Answer 0 (score: 0):

It looks like your problem is that you most likely switched from single tokens to vnodes on the existing nodes, so all of their tokens are sequential. In current Cassandra versions that switch is actually no longer possible, precisely because it is too hard to get right.

The only real way to fix this and still be able to add new nodes is to decommission the first new node you added, and then follow the current documentation on switching from single tokens to vnodes: essentially, you need to create a brand-new datacenter of brand-new vnode-enabled nodes, and then decommission the existing nodes.
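A minimal sketch of that procedure, assuming the keyspace is called my_ks and the new datacenter is named Cassandra2 (both placeholders), using standard nodetool operations plus one CQL statement:

    # 1. Remove the unbalanced node that was just added (run on cs-node2)
    nodetool decommission

    # 2. Stand up a brand-new DC whose nodes have vnodes from their first
    #    boot (num_tokens set before they ever join), then include it in
    #    the keyspace's replication:
    #    ALTER KEYSPACE my_ks WITH replication =
    #      {'class': 'NetworkTopologyStrategy', 'Solr': 2, 'Cassandra2': 1};

    # 3. Stream the existing data to each new node from the old DC
    #    (run on every node in the new DC)
    nodetool rebuild -- Cassandra

    # 4. Finally retire the old single-token node (run on cs-node1)
    nodetool decommission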