count(*) on a Cassandra table fails with consistency LOCAL_ONE error (1 responses were required but only 0 replica responded)

Date: 2019-01-10 19:42:17

Tags: apache-spark apache-spark-sql datastax datastax-enterprise databricks

I have a scenario where I write data into a Cassandra table using Spark SQL. I have a 3-node Cassandra cluster. I created the table with replication factor 2, as shown below:

CREATE TABLE keyspaceRf2.c_columnar (
    id int,
    company_id int,
    dd date,
    c_code text,
    year int,
    quarter int,
    ...etc...
    PRIMARY KEY ((id, year, quarter), dd, c_code, company_id)
) WITH CLUSTERING ORDER BY (dd DESC, c_code DESC, company_id DESC);
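For reference, the replication factor mentioned above is declared when the keyspace is created, not on the table. A minimal sketch of such a keyspace definition (the strategy shown is an assumption; SimpleStrategy is only appropriate for single-datacenter clusters):

```sql
-- Hypothetical keyspace definition with replication factor 2.
-- SimpleStrategy is suitable only for single-DC clusters.
CREATE KEYSPACE keyspaceRf2
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};
```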

I am inserting data into the keyspaceRf2.c_columnar table using a Spark job on the Spark cluster. The data is inserted correctly. But to verify the number of records inserted into the table, I run a count query as shown below:

val countDf = loadFromCassandra(c_reader, "keyspaceRf2", " c_columnar")

println("count = " + countDf.count())

def loadFromCassandra(c_reader: DataFrameReader, keyspace: String, col_Name: String): DataFrame = {
  c_reader
    .options(Map("table" -> col_Name, "keyspace" -> keyspace))
    .load()
}
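As a side note, the read consistency level used by the Spark Cassandra Connector can be set explicitly. A sketch, assuming the connector's standard `spark.cassandra.input.consistency.level` property and a live `spark` session (LOCAL_ONE is the connector's default; ONE accepts a replica from any DC):

```scala
// Sketch: reading with an explicit read consistency level.
// "spark.cassandra.input.consistency.level" is a Spark Cassandra
// Connector property; values are Cassandra consistency levels.
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map(
    "keyspace" -> "keyspaceRf2",
    "table"    -> "c_columnar",
    "spark.cassandra.input.consistency.level" -> "ONE" // any-DC replica is acceptable
  ))
  .load()
```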

When the above code is executed, it throws an error as shown below.

Error:

TaskSetManager:66 - Lost task 33.0 in stage 18.0 : java.io.IOException: Exception during execution of SELECT count(*) FROM "keyspaceRf2"." c_columnar" WHERE token("id", " year", " quarter") > ? AND token("id", " year", " quarter") <= ?   ALLOW FILTERING: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:350)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$17.apply(CassandraTableScanRDD.scala:367)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$17.apply(CassandraTableScanRDD.scala:367)
        at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
        at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
        at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:85)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:37)
        at com.sun.proxy.$Proxy23.execute(Unknown Source)
        at com.datastax.spark.connector.cql.DefaultScanner.scan(Scanner.scala:34)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:342)
        ... 15 more

What am I doing wrong here?

1 Answer:

Answer 0 (score: 2)

Please double-check the following:

  • Keyspace replication settings — the replication factor is set on the keyspace, not on the table. Make sure you use the replication strategy that matches your cluster topology. Using the default SimpleStrategy in a multi-DC cluster is almost always wrong; another mistake is using the LOCAL_ONE consistency level instead of ONE when there isn't even a single replica in the local DC. Forgetting or misspelling a DC name in the NetworkTopologyStrategy options, or using SimpleStrategy (which may place all replicas for a given token range in another DC), can leave you with no replica in the local DC.
  • State of the nodes in the cluster — a count query may need to reach many nodes. Check your cluster health and verify that every node's status is UN (Up/Normal).
  • The node you connect to is in the correct DC — with a multi-DC cluster and LOCAL_* consistency levels, it is important to connect to the right DC.
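If the keyspace turns out to have the wrong strategy, its replication can be changed in place. A hedged sketch (the DC name `dc1` is a placeholder; use the exact name reported by `nodetool status`, and run `nodetool repair` afterwards so existing data is streamed to the new replicas):

```sql
-- Hypothetical fix: switch keyspaceRf2 to NetworkTopologyStrategy.
-- 'dc1' is a placeholder DC name; it must match `nodetool status` output.
ALTER KEYSPACE keyspaceRf2
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 2};
```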