Why doesn't Cassandra failover work? The client raises errors and loses data when a node dies

Asked: 2017-08-10 03:22:43

Tags: python cassandra client

I am using Python to test Cassandra failover with the DataStax Python driver.

The cluster has 5 nodes, and the keyspace is created with RF = 3.

The client's consistency_level is set to QUORUM:

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster

cluster = Cluster(['172.17.35.45'])
session = cluster.connect('bigtest')
session.default_consistency_level = ConsistencyLevel.QUORUM  # QUORUM == 4
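For reference, with RF = 3 a QUORUM operation must reach floor(RF/2) + 1 = 2 replicas, so a single dead replica should not fail writes. A minimal sketch of that arithmetic:

```python
# Quorum math: a QUORUM read/write with replication factor RF must
# reach floor(RF/2) + 1 replicas, so RF=3 tolerates one lost replica.
def quorum(replication_factor):
    """Number of replicas a QUORUM operation must reach."""
    return replication_factor // 2 + 1

print(quorum(3))  # 2: one of the three replicas may be down
print(quorum(5))  # 3
```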

And the insert script:

import logging
import time
import uuid

for seq in range(1000000, 2000000, 1):
    timeStamp = int(round(time.time() * 1000))
    c_data = uuid.uuid1()
    upseq = None
    try:
        session.execute(
            """
            INSERT INTO bigtest (seq, insert_time, c_data, upseq)
            VALUES (%(seq)s, %(insert_time)s, %(content_data)s, %(upseq)s)
            """,
            {'seq': seq, 'insert_time': timeStamp, 'content_data': c_data, 'upseq': upseq}
        )
    except Exception as e:
        logging.error(e)
        logging.error(seq)
  1. Run the script, then kill one node's cassandra process with `ps aux | grep cassandra | grep -v grep | awk '{print $2}' | xargs kill -9`; `nodetool status` shows that node as DN. The script keeps working and no data is lost.

  2. Run the script and unplug one node's Ethernet cable. The client then reports errors and the data is not inserted:

    2017-08-10 09:50:35,572 [ERROR] root: errors={'172.17.35.46': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=172.17.35.46
    2017-08-10 09:50:35,572 [ERROR] root: 1065075

    Here is the full log when the try/except is removed:

    Traceback (most recent call last):
      File "insert_test.py", line 28, in <module>
        {'seq': seq, 'insert_time': timeStamp, 'content_data': c_data,'upseq': upseq}
      File "/usr/lib64/python2.7/site-packages/cassandra/cluster.py", line 2016, in execute
        return self.execute_async(query, parameters, trace, custom_payload, timeout, execution_profile, paging_state).result()
      File "/usr/lib64/python2.7/site-packages/cassandra/cluster.py", line 3826, in result
        raise self._final_exception
    cassandra.OperationTimedOut: errors={'172.17.35.46': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=172.17.35.46

    Why does this happen, and how can it be fixed? Thanks, everyone.
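One common application-side mitigation (not from the original post) is to retry statements that fail with a client timeout instead of only logging them. A minimal sketch of such a retry wrapper, where `flaky` is a hypothetical stand-in for the `session.execute(...)` call:

```python
import logging
import time

def execute_with_retry(fn, retries=3, delay=0.1):
    """Call fn(); on failure, wait `delay` seconds and retry, up to
    `retries` attempts. In the real script, fn would wrap
    session.execute(...), and the except clause would catch
    cassandra.OperationTimedOut rather than every Exception."""
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except Exception as e:
            logging.warning("attempt %d failed: %s", attempt, e)
            if attempt == retries:
                raise  # out of attempts: surface the error to the caller
            time.sleep(delay)

# Usage sketch: flaky() fails twice with a timeout-like error, then succeeds.
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError("Client request timeout")
    return "ok"

print(execute_with_retry(flaky))  # ok (after two retried failures)
```

This does not change how fast the cluster detects the unplugged node; it only keeps the client from dropping writes while detection is in progress.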

0 answers:

There are no answers yet.