Slow timeout 500 msec/cross-node warning

Time: 2019-04-17 15:28:12

Tags: cassandra cassandra-3.0 datastax-java-driver

I have a three-node Cassandra cluster.

When I request a large amount of data from the Java client, I get the following warnings on the server side:

 WARN SELECT * FROM [...] time 789 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 947 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 1027 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 819 msec - slow timeout 500 msec/cross-node

On the client side, I run into the following exception:


java.util.concurrent.ExecutionException:
  com.datastax.driver.core.exceptions.TransportException:
  [/x.y.z.a:9042] Connection has been closed

My server-side cassandra.yaml configuration is as follows:

 # How long the coordinator should wait for read operations to complete
 read_request_timeout_in_ms: 5000
 # How long the coordinator should wait for seq or index scans to complete
 range_request_timeout_in_ms: 10000
 # How long the coordinator should wait for writes to complete
 write_request_timeout_in_ms: 2000
 # How long the coordinator should wait for counter writes to complete
 counter_write_request_timeout_in_ms: 5000
 # How long a coordinator should continue to retry a CAS operation
 # that contends with other proposals for the same row
 cas_contention_timeout_in_ms: 1000
 # How long the coordinator should wait for truncates to complete
 # (This can be much longer, because unless auto_snapshot is disabled
 # we need to flush first so we can snapshot before removing the data.)
 truncate_request_timeout_in_ms: 60000
 # The default timeout for other, miscellaneous operations
 request_timeout_in_ms: 10000

I can't find any reference to this "500 msec" timeout. So how can I tune it? And is there any option that would avoid ending up with an exception when querying a large number of partitions / a lot of data?

As a side note, I retrieve the data asynchronously using futures:

 import com.datastax.driver.core.ResultSetFuture;
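
A minimal sketch of what that asynchronous read path looks like (the contact point, keyspace and table names below are placeholders, not the real schema):

 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.ResultSet;
 import com.datastax.driver.core.ResultSetFuture;
 import com.datastax.driver.core.Session;

 public class AsyncReadExample {
     public static void main(String[] args) {
         // Placeholder contact point and keyspace; adjust to the real cluster.
         try (Cluster cluster = Cluster.builder().addContactPoint("x.y.z.a").build();
              Session session = cluster.connect("my_keyspace")) {
             // executeAsync returns immediately with a ResultSetFuture.
             ResultSetFuture future = session.executeAsync("SELECT * FROM my_table");
             // Block for the result here only to keep the example short;
             // a callback could be registered on the future instead.
             ResultSet rs = future.getUninterruptibly();
             rs.forEach(row -> System.out.println(row));
         }
     }
 }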

1 Answer:

Answer 0: (score: 4)

The default slow_query_log_timeout_in_ms is 500. It is not an actual timeout, just a notification/log entry for queries that take longer than that threshold. If you want it to be higher, you can update it in the yaml.
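
For example, the threshold can be raised in cassandra.yaml (assuming a Cassandra version that exposes this setting; the 2000 ms value below is only an illustration):

 # Threshold above which queries are logged as slow.
 # This only controls logging; the query itself is not aborted.
 slow_query_log_timeout_in_ms: 2000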

That said, 500 ms is slow and may indicate a problem with your environment or your queries. Although it is rare, it could also just be a periodic GC pause, which can be mitigated with speculative retries on the client side.
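
For example, with the 3.x Java driver a speculative execution policy can be configured on the Cluster. This is only a sketch: the 500 ms delay and the 2 extra attempts are illustrative values, and speculative executions are only sent for statements marked idempotent:

 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.policies.ConstantSpeculativeExecutionPolicy;

 public class SpeculativeRetryExample {
     public static void main(String[] args) {
         // If a node has not answered within 500 ms, send the same request
         // to another node, up to 2 extra attempts per query.
         Cluster cluster = Cluster.builder()
                 .addContactPoint("x.y.z.a")
                 .withSpeculativeExecutionPolicy(
                         new ConstantSpeculativeExecutionPolicy(500L, 2))
                 .build();
         cluster.close();
     }
 }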