Cassandra Hector - UnavailableException

Date: 2013-09-27 07:04:03

Tags: java cassandra hector

I'm trying to insert records using Hector, and from time to time I get this error:

me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level.
    at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:59)
    at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:264)
    at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
    at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
    at me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115)
    at me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163)
    at me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69)
    at ustocassandra.USToCassandraHector.consumer(USToCassandraHector.java:271)
    at ustocassandra.USToCassandraHector.access$100(USToCassandraHector.java:41)
    at ustocassandra.USToCassandraHector$2.run(USToCassandraHector.java:71)
    at java.lang.Thread.run(Thread.java:724)
Caused by: UnavailableException()
    at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20841)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
    at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
    at me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246)
    at me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:243)
    at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104)
    at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)
    ... 9 more

I know the usual explanation is that there aren't enough nodes up, but that's not the case here. All of my nodes are up:

./nodetool ring
Note: Ownership information does not include topology; for complete information, specify a keyspace

Datacenter: DC1
==========
Address         Rack        Status State   Load            Owns                Token
                                                                               4611686018427388000
172.16.217.222  RAC1        Up     Normal  353.36 MB       25.00%              -9223372036854775808
172.16.217.223  RAC2        Up     Normal  180.84 MB       25.00%              -4611686018427388000
172.16.217.224  RAC3        Up     Normal  260.34 MB       25.00%              -2
172.16.217.225  RAC4        Up     Normal  222.71 MB       25.00%              4611686018427388000

I'm inserting records with 20 threads (maybe I should use fewer? As far as I know, the error in that case would be Overloaded, not Unavailable). I'm using a write consistency level of ONE, with AutoDiscoveryAtStartup and LeastActiveBalancingPolicy. The replication factor is 2.
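
For reference, a minimal sketch of what that Hector setup might look like. This is not my actual code: the cluster name, seed host, and key/column values are only illustrative, while the keyspace 'us' and column family 'my_cf' are taken from the log excerpt further down.

    import me.prettyprint.cassandra.connection.LeastActiveBalancingPolicy;
    import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.cassandra.service.CassandraHostConfigurator;
    import me.prettyprint.cassandra.service.template.ColumnFamilyTemplate;
    import me.prettyprint.cassandra.service.template.ColumnFamilyUpdater;
    import me.prettyprint.cassandra.service.template.ThriftColumnFamilyTemplate;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.HConsistencyLevel;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;

    public class HectorSetupSketch {
        public static void main(String[] args) {
            // Discover the other nodes at startup and prefer the least active connections.
            CassandraHostConfigurator hostConfig =
                    new CassandraHostConfigurator("172.16.217.222:9160");
            hostConfig.setAutoDiscoverHosts(true);
            hostConfig.setLoadBalancingPolicy(new LeastActiveBalancingPolicy());

            Cluster cluster = HFactory.getOrCreateCluster("TestCluster", hostConfig);

            // Read and write at consistency level ONE.
            ConfigurableConsistencyLevel consistency = new ConfigurableConsistencyLevel();
            consistency.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);
            consistency.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);

            Keyspace keyspace = HFactory.createKeyspace("us", cluster, consistency);
            ColumnFamilyTemplate<String, String> template =
                    new ThriftColumnFamilyTemplate<String, String>(
                            keyspace, "my_cf", StringSerializer.get(), StringSerializer.get());

            // One update, equivalent to the ColumnFamilyTemplate.update() call in the stack trace.
            ColumnFamilyUpdater<String, String> updater = template.createUpdater("some-key");
            updater.setString("some-column", "some-value");
            template.update(updater);
        }
    }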

I'm using Cassandra 1.2.8 (I tried 2.0 and it behaves the same).

The error doesn't happen right from the start. I usually manage to insert about 2 million records before I get it. My code is set up to retry when an error occurs, and after a few dozen retries the insert usually succeeds. After that, it works again for millions of inserts, then I get the error once more and the cycle repeats.
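
The retry logic is just a loop along these lines (a simplified sketch, not the actual code in USToCassandraHector; the retry budget and back-off are illustrative):

    import me.prettyprint.cassandra.service.template.ColumnFamilyTemplate;
    import me.prettyprint.cassandra.service.template.ColumnFamilyUpdater;
    import me.prettyprint.hector.api.exceptions.HectorException;

    public class RetryingWriter {
        // Retry the update until it succeeds or the retry budget runs out.
        public static void updateWithRetry(ColumnFamilyTemplate<String, String> template,
                                           String key, String column, String value)
                throws InterruptedException {
            final int maxRetries = 100; // illustrative; a few dozen retries are usually enough
            for (int attempt = 1; attempt <= maxRetries; attempt++) {
                try {
                    ColumnFamilyUpdater<String, String> updater = template.createUpdater(key);
                    updater.setString(column, value);
                    template.update(updater);
                    return; // success
                } catch (HectorException e) { // HUnavailableException extends HectorException
                    Thread.sleep(100L * attempt); // back off briefly before retrying
                }
            }
            throw new RuntimeException("update failed after " + maxRetries + " retries");
        }
    }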

Could it be because I set gc_grace = 60? In any case, I'm not getting the error every 60 seconds, so I don't think that's the cause.

Can you give me some advice on what is causing this error and what I should do about it?

Edit:

'nodetool tpstats' shows that some messages have been dropped:

Message type           Dropped
RANGE_SLICE                  0
READ_REPAIR                  0
BINARY                       0
READ                         0
MUTATION                    11
_TRACE                       0

And I see the following warnings in the log file:

 WARN [ScheduledTasks:1] 2013-09-30 09:20:16,633 GCInspector.java (line 136) Heap is 0.853986836999536 full.  You may need to reduce memtable and/or cache sizes.  Cassandra is now reducing cache sizes to free up memory.  Adjust reduce_cache_sizes_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-09-30 09:20:16,634 AutoSavingCache.java (line 185) Reducing KeyCache capacity from 1073741824 to 724 to reduce memory pressure
 WARN [ScheduledTasks:1] 2013-09-30 09:20:16,634 GCInspector.java (line 142) Heap is 0.853986836999536 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-09-30 09:20:16,634 StorageService.java (line 3618) Flushing CFS(Keyspace='us', ColumnFamily='my_cf') to relieve memory pressure

This happens at exactly the moment Hector throws the Unavailable exception, so it may be a memory-related issue. I think I'll try what the warnings suggest: reduce the memtable and/or cache sizes.
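
Those sizes (and the thresholds the warnings mention) are set in cassandra.yaml; this is the kind of change I have in mind (values are only illustrative, not tuned recommendations):

    # cassandra.yaml (Cassandra 1.2.x) -- illustrative values only
    # Flush the largest memtables when the heap passes this fraction after a full GC.
    flush_largest_memtables_at: 0.70
    # Shrink the key/row caches when the heap passes this fraction after a full GC.
    reduce_cache_sizes_at: 0.80
    reduce_cache_capacity_to: 0.6
    # Cap the key cache and total memtable space instead of relying on the defaults.
    key_cache_size_in_mb: 64
    memtable_total_space_in_mb: 512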

1 answer:

Answer 0 (score: 0):

It's probably because your servers are overloaded and some nodes are not responding. There is no OverloadedException: an overloaded node looks just like an unavailable one.

You should check your Cassandra logs: are there warnings about the heap being full? Are any dropped messages listed in nodetool tpstats? What is the CPU load on the servers?
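
For example, something along these lines (the log path depends on how Cassandra was installed):

    # Dropped messages per stage
    nodetool tpstats

    # Heap-pressure warnings from GCInspector in the Cassandra log
    grep -i "GCInspector" /var/log/cassandra/system.log

    # Rough view of CPU load on each node
    uptime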