I have a counter table in Cassandra 3.9:
    create table counter_table (
        id text, hour_no int, platform text, type text, title text,
        count_time counter,
        PRIMARY KEY (id, hour_no, platform, type, title)
    );
My Spark (2.1.0) / Scala (2.11) code is:
    import com.datastax.driver.core.{ConsistencyLevel, DataType}
    import com.datastax.spark.connector.writer.WriteConf
    import org.apache.spark.sql.SaveMode

    val writeConf = WriteConf(consistencyLevel = ConsistencyLevel.ONE, ifNotExists = true)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val df = sqlContext.read.format("com.databricks.spark.csv")
      .option("header", "false")
      .option("inferSchema", "true")
      .load("csv_file_path")
    val newNames = Seq("id", "hour_no", "platform", "type", "title", "count_time")
    val dfRenamed = df.toDF(newNames: _*)
    dfRenamed.write.format("org.apache.spark.sql.cassandra")
      .mode(SaveMode.Append)
      .options(Map(
        "table" -> "counter_table",
        "keyspace" -> "key1",
        "output.consistency.level" -> "LOCAL_ONE",
        "output.ifNotExists" -> "true"))
      .save()
The Spark code gives a consistency error:
Caused by: com.datastax.driver.core.exceptions.WriteFailureException:
Cassandra failure during write query at consistency LOCAL_QUORUM (2 responses were required but only 1 replica responded, 1 failed)
How do I specify a consistency level of ONE for the DataFrame write?
Answer (score: 2)
Both of your parameters are missing their beginning: all of the connector's options should be prefixed with spark.cassandra.
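For example, with the prefixes added, the write from the question would look like this (a sketch; the option values are left exactly as in the question, only the keys change):

    dfRenamed.write.format("org.apache.spark.sql.cassandra")
      .mode(SaveMode.Append)
      .options(Map(
        "table" -> "counter_table",
        "keyspace" -> "key1",
        "spark.cassandra.output.consistency.level" -> "LOCAL_ONE",
        "spark.cassandra.output.ifNotExists" -> "true"))
      .save()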
But you have a second problem. Since IF NOT EXISTS uses Paxos, the write cannot be performed at any consistency level other than SERIAL. That means you should not be setting ifNotExists at all if you want the write to run at ONE.
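So if the goal is a plain write at consistency level ONE, drop the ifNotExists option altogether and set only the consistency level. A minimal sketch, reusing the DataFrame, table and keyspace names from the question:

    dfRenamed.write.format("org.apache.spark.sql.cassandra")
      .mode(SaveMode.Append)
      .options(Map(
        "table" -> "counter_table",
        "keyspace" -> "key1",
        "spark.cassandra.output.consistency.level" -> "ONE"))
      .save()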
Update: I now know it is possible to do some very dangerous things with the Paxos CL, so you can force a different consistency level for parts of the transaction, but you shouldn't, because doing so basically breaks all of the guarantees you wanted from the check in the first place.