Sadly, I found a very similar question, but with no real answer: nohostavailableexception-while-bulk-loading-data-into-cassandra
I am using Cassandra 2.0.8, installed on a RHEL 5 VM with 8 cores and 8 GB of RAM. For now I am running it as a single node.
I am trying to initialize it by migrating data from my Oracle database, so I have a program that selects from an Oracle table and then inserts into Cassandra (in a loop). The biggest table has 500,000 records.
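Roughly, the loader looks like this (a minimal sketch, not my exact code; the table, column names, and connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class OracleToCassandraLoader {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
        Session session = cluster.connect("my_keyspace"); // placeholder keyspace

        PreparedStatement insert = session.prepare(
                "INSERT INTO my_table (id, name) VALUES (?, ?)"); // placeholder table

        try (Connection oracle = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass"); // placeholder URL
             Statement stmt = oracle.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM my_table")) {
            while (rs.next()) {
                // One synchronous insert per row, no batching or async executes;
                // the biggest table has 500,000 rows.
                session.execute(insert.bind(rs.getLong("id"), rs.getString("name")));
            }
        }
        cluster.close();
    }
}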
During this operation my program dies with a read timeout error. I tried increasing all the timeout values in cassandra.yaml, but it did not help.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.DriverException: Timeout during read))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:92)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.DriverException: Timeout during read))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
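Could the "Timeout during read" in the trace be coming from the driver side rather than the server? Would raising the driver's socket read timeout help, i.e. something like this sketch (contact point and keyspace are placeholders)?

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SocketOptions;

public class ConnectionSetup {
    public static Session connect() {
        // Raise the client-side read timeout; this is separate from the
        // server-side cassandra.yaml timeouts I changed below.
        SocketOptions socketOptions = new SocketOptions()
                .setReadTimeoutMillis(60000);
        Cluster cluster = Cluster.builder()
                .addContactPoint("localhost")
                .withSocketOptions(socketOptions)
                .build();
        return cluster.connect("my_keyspace"); // placeholder keyspace
    }
}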
My cassandra.yaml timeout settings are:
# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 15000
# was 5000

# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 20000
# was 10000

# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 30000
# was 20000

# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 1000

# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 300000
# was 60000

# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 20000
# was 10000
Does anyone know how to solve this problem? Or is there a better way to migrate data from one place to another? (Note that some tables are not that easy to migrate; I need to do some extra queries before inserting.)