I have a failing ScalaTest suite, and I've narrowed the cause down to code that runs before the tests and truncates a data table. I can reproduce the problem by running the following:
session.execute(s"TRUNCATE ${dao.tableName};")
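For context, the truncate runs from a before-test hook. A minimal sketch of that kind of fixture follows; the question doesn't show the real wiring, so the session accessor and the table name here are hypothetical placeholders (the question's dao.tableName is collapsed to a plain tableName):

import com.datastax.driver.core.Session
import org.scalatest.{BeforeAndAfterEach, FunSuite}

class PostingGroupDaoTest extends FunSuite with BeforeAndAfterEach {
  // Hypothetical wiring; the real suite provides these from its test support
  def session: Session = ???
  def tableName: String = "posting_group" // hypothetical table name

  // Truncate before every test so each one starts from an empty table
  override def beforeEach() {
    session.execute(s"TRUNCATE $tableName;")
  }
}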
which throws:
Error during truncate: Cannot achieve consistency level ALL
com.datastax.driver.core.exceptions.TruncateException: Error during truncate: Cannot achieve consistency level ALL
at com.datastax.driver.core.exceptions.TruncateException.copy(TruncateException.java:35)
at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:271)
at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:187)
at com.datastax.driver.core.Session.execute(Session.java:126)
at com.datastax.driver.core.Session.execute(Session.java:77)
at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply$mcV$sp(PostingGroupDaoTest.scala:43)
at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply(PostingGroupDaoTest.scala:39)
at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply(PostingGroupDaoTest.scala:39)
at org.scalatest.FunSuite$$anon$1.apply(FunSuite.scala:1265)
at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
at ledger.testsupport.JUnitFunSuiteTest.withFixture(JUnitFunSuiteTest.scala:10)
at org.scalatest.FunSuite$class.invokeWithFixture$1(FunSuite.scala:1262)
at ...
Caused by: com.datastax.driver.core.exceptions.TruncateException: Error during truncate: Cannot achieve consistency level ALL
at com.datastax.driver.core.Responses$Error.asException(Responses.java:91)
at com.datastax.driver.core.ResultSetFuture$ResponseCallback.onSet(ResultSetFuture.java:122)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:224)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:361)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:510)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
I'm using DataStax driver 2.0.0-RC2 and have a three-node cluster.
Any ideas as to what's going wrong here?
Answer 0 (score: 1)
It turned out to be a problem with a node that was left in an inconsistent state after running out of disk space.
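TRUNCATE fails with this error if any node in the cluster is down or unhealthy when the request runs, so one way to sanity-check from the test side before truncating is to ask the driver which hosts it sees as up. A sketch, assuming the driver 2.0 metadata API and the same session value as in the question:

import scala.collection.JavaConverters._

// Print every host the driver knows about and whether it is marked up;
// a down node here is the likely source of the TruncateException.
val hosts = session.getCluster.getMetadata.getAllHosts.asScala
hosts.foreach(h => println(s"${h.getAddress}: up=${h.isUp}"))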
Answer 1 (score: -1)
This is because of the consistency level. You cannot truncate data across all nodes with consistency level ALL. You have to set the consistency level to ONE or TWO; the truncate then runs against a single node, and that node propagates the truncation to the other nodes.
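For reference, a minimal sketch of what this answer proposes, assuming the driver 2.0 SimpleStatement API (note that the other answer traced the error to an unhealthy node, so this may not resolve it on its own):

import com.datastax.driver.core.{ConsistencyLevel, SimpleStatement}

// Run the truncate as a statement carrying an explicit consistency level,
// as this answer suggests, rather than as a bare query string.
val stmt = new SimpleStatement(s"TRUNCATE ${dao.tableName};")
stmt.setConsistencyLevel(ConsistencyLevel.ONE)
session.execute(stmt)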