I have a scenario where I use Spark SQL to write data into a Cassandra table. I have a 3-node Cassandra cluster. I created the table with replication factor 2 as follows:
CREATE TABLE keyspaceRf2.c_columnar (
    id int,
    company_id int,
    dd date,
    c_code text,
    year int,
    quarter int,
    etc ....etc...
    PRIMARY KEY ((id, year, quarter), dd, c_code, company_id)
) WITH CLUSTERING ORDER BY (dd DESC, c_code DESC, company_id DESC);
I am inserting data into the keyspaceRf2.c_columnar table using a Spark job on a Spark cluster. The data is inserted correctly. But to verify the number of records inserted into the table, I run a count query as shown below:
val countDf = loadFromCassandra(c_reader, "keyspaceRf2", " c_columnar")
println("count = " + countDf.count())

def loadFromCassandra(c_reader: DataFrameReader, keyspace: String, col_Name: String): DataFrame = {
  c_reader
    .options(Map("table" -> col_Name, "keyspace" -> keyspace))
    .load()
}
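If the reader options are built by hand, one thing worth ruling out is stray whitespace in the identifiers: the failing query references the table as `" c_columnar"` with a leading space. A minimal sketch (not the poster's actual code, and assuming the standard DataStax Spark Cassandra Connector option name `spark.cassandra.input.consistency.level`, whose default is `LOCAL_ONE`) that trims the identifiers and pins the read consistency explicitly:

```scala
// Hedged sketch: build the connector options explicitly, trimming stray
// whitespace from identifiers, and set the read consistency level.
// The option key below is an assumption based on the connector's
// documented read configuration; the default is LOCAL_ONE.
def cassandraReadOptions(keyspace: String, table: String): Map[String, String] =
  Map(
    "keyspace" -> keyspace.trim,
    "table"    -> table.trim,
    "spark.cassandra.input.consistency.level" -> "ONE"
  )
```

It would then be used as `c_reader.options(cassandraReadOptions("keyspaceRf2", " c_columnar")).load()`.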
When I execute the above code, the following error is thrown:

Error:
TaskSetManager:66 - Lost task 33.0 in stage 18.0 : java.io.IOException: Exception during execution of SELECT count(*) FROM "keyspaceRf2"." c_columnar" WHERE token("id", " year", " quarter") > ? AND token("id", " year", " quarter") <= ? ALLOW FILTERING: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:350)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$17.apply(CassandraTableScanRDD.scala:367)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$17.apply(CassandraTableScanRDD.scala:367)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:37)
at com.sun.proxy.$Proxy23.execute(Unknown Source)
at com.datastax.spark.connector.cql.DefaultScanner.scan(Scanner.scala:34)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:342)
... 15 more
What am I doing wrong here?
Answer 0 (score: 2):
Double-check the following: `SimpleStrategy` is almost always the wrong choice. Another mistake is using the `LOCAL_ONE` consistency level instead of `ONE` when there is not even a single replica in the local DC. A forgotten or misspelled DC name in the `NetworkTopologyStrategy` options, or using `SimpleStrategy` (which may decide to place all replicas for a particular token range in another DC), will leave no replicas in the local DC.
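For example, on a cluster whose data center is named `DC1` (a hypothetical name; substitute the one reported by `nodetool status`), the keyspace replication would look like this with `NetworkTopologyStrategy` instead of `SimpleStrategy`:

```sql
-- Hypothetical example: replace DC1 with the actual data center name
-- from `nodetool status`. A misspelled DC name here means zero local
-- replicas, and reads at LOCAL_ONE fail exactly as in the stack trace.
ALTER KEYSPACE keyspaceRf2
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2};
```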