I am new to Spark. I have the following table in Cassandra:
CREATE TABLE cust_actions (
    orgid text,
    empid int,
    custid int,
    date timestamp,
    action text,
    PRIMARY KEY (orgid, empid, custid, date)
) WITH CLUSTERING ORDER BY (empid ASC, custid ASC, date DESC);
This table holds one row for every action an employee performs on a customer, and it receives more than ten million inserts per day. I have a 3-node Cassandra cluster running on 18-core machines with 32 GB of RAM each.
I want to aggregate this data daily, i.e. count how many actions were performed on each customer on a given day. For that I created another table:
CREATE TABLE daily_cust_actions (
    custid int,
    date date,
    action text,
    count int,
    PRIMARY KEY (custid, date, action)
) WITH CLUSTERING ORDER BY (date ASC, action ASC);
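To make the target shape concrete: each row in this table should hold one (custid, date, action) combination together with its count. A minimal sketch of reading one customer's daily counts back through the Spark-Cassandra connector (the customer id 42 is hypothetical, and the master URL is assumed to come from spark-submit):

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object ReadDailyCounts {
  def main(args: Array[String]): Unit = {
    // Master URL supplied via spark-submit; only the connector host is set here.
    val sc = new SparkContext(new SparkConf(true)
      .setAppName("read-daily-counts")
      .set("spark.cassandra.connection.host", "host1,host2,host3"))
    // Scan the aggregate table and keep only the (hypothetical) customer 42.
    sc.cassandraTable("mykeyspace", "daily_cust_actions")
      .filter(_.getInt("custid") == 42)
      .collect()
      .foreach(println)
    sc.stop()
  }
}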
For this I want to use Spark (please point out if this is the wrong approach, or if there are other alternatives). I am running Spark on one of the Cassandra machines mentioned above, with a master and slaves; there are 9 executors, each with 1 GB of RAM and 2 cores.
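For reference, a sketch of how that executor layout might be expressed as Spark properties (the keys below are my assumption for standalone mode, not taken from the actual setup):

import org.apache.spark.SparkConf

// Hypothetical sizing for the layout described above:
// 9 executors x 2 cores x 1 GB each on a standalone master.
object SizingConf {
  val conf = new SparkConf()
    .set("spark.executor.memory", "1g")
    .set("spark.executor.cores", "2")
    .set("spark.cores.max", "18") // 9 executors x 2 cores
}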
The table is roughly 70 GB in size. I am unable to aggregate this data, although the same job works fine on smaller data sets. Here is my Spark script:
import java.text.SimpleDateFormat

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object DailyAggregation {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "host1,host2,host3")
      .set("spark.cassandra.auth.username", "cassandra")
      .set("spark.cassandra.auth.password", "cassandra")
      .set("spark.cassandra.input.split.size_in_mb", "10") // have tried multiple values here

    val sc = new SparkContext("spark://host", "spark-cassandra", conf)

    val rdd = sc.cassandraTable("mykeyspace", "cust_actions")

    val df = new SimpleDateFormat("yyyy-MM-dd")
    val startDate = df.parse("2018-08-13")
    val endDate = df.parse("2018-09-14")

    sc.parallelize(
      rdd.select("custid", "date", "action")
        .where("date >= ? and date < ?", startDate, endDate)
        .keyBy(row => (
          row.getInt("custid"),
          df.format(row.getLong("date")), // truncate the timestamp to its day
          row.getString("action")))
        .map { case (key, _) => (key, 1) }
        .reduceByKey(_ + _)
        .collect() // pulls every aggregated row back to the driver
        .map { case (key, value) => (key._1, key._2, key._3, value) })
      .saveToCassandra("mykeyspace", "daily_cust_actions")

    sc.stop()
  }
}
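For comparison, here is a sketch of the same aggregation that writes the result straight from the executors, skipping the collect/parallelize round-trip through the driver. This assumes the driver-side collect of the full result is part of the problem, which is my guess, not a confirmed diagnosis:

import java.text.SimpleDateFormat

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object DailyAggregationDirect {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("spark://host", "spark-cassandra",
      new SparkConf(true).set("spark.cassandra.connection.host", "host1,host2,host3"))

    val df = new SimpleDateFormat("yyyy-MM-dd")
    val startDate = df.parse("2018-08-13")
    val endDate = df.parse("2018-09-14")

    sc.cassandraTable("mykeyspace", "cust_actions")
      .select("custid", "date", "action")
      .where("date >= ? and date < ?", startDate, endDate)
      // Key each action by (customer, day, action) and count occurrences.
      .map(row => ((row.getInt("custid"),
                    df.format(row.getLong("date")),
                    row.getString("action")), 1))
      .reduceByKey(_ + _)
      .map { case ((custid, date, action), count) => (custid, date, action, count) }
      // Each partition writes its own slice; nothing is materialized on the driver.
      .saveToCassandra("mykeyspace", "daily_cust_actions",
        SomeColumns("custid", "date", "action", "count"))

    sc.stop()
  }
}

The reduceByKey still shuffles, but only the per-key counts cross the network, and the driver never has to hold the full result set.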
I have tried many different things: increasing/decreasing the memory and the number of executors, increasing/decreasing the spark.cassandra.input.split.size_in_mb value, and tweaking some Spark environment variables. Each time I get a different error. The job shows two stages; the first stage always runs smoothly, while the second stage always fails.
I have seen many different errors. Currently I am getting the following:
2018-09-15 16:36:05 INFO TaskSetManager:54 - Task 158.1 in stage 1.1 (TID 1293) failed, but the task will not be re-executed (either because the task failed with a shuffle data fetch failure, so the previous stage needs to be re-run, or because a different copy of the task has already succeeded).
2018-09-15 16:36:05 WARN TaskSetManager:66 - Lost task 131.1 in stage 1.1 (TID 1286, 127.0.0.1, executor 18): FetchFailed(null, shuffleId=0, mapId=-1, reduceId=131, message=
org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0
Any help here would be greatly appreciated.