How does Spark SQL with Cassandra handle null timestamp values?

Date: 2015-01-29 15:54:58

Tags: cassandra apache-spark apache-spark-sql

I am currently working with an Apache Cassandra 2.1.2 cluster and the Spark 1.2.0 connector. For some initial tests, I need to select a few rows from a Cassandra table via Spark SQL commands in spark-shell.

We use a table named tabletest in the keyspace ks. This table contains, among others, the columns id (bigint) and ts (timestamp).

Here is my Spark script:

import com.datastax.spark.connector._
import org.apache.spark.sql.cassandra.CassandraSQLContext

// Wrap the spark-shell SparkContext in a Cassandra-aware SQL context
val cc = new CassandraSQLContext(sc)
cc.setKeyspace("ks")

// Run the query and print the collected rows on the driver
val rdd = cc.sql("SELECT id,ts FROM tabletest LIMIT 100")
rdd.toArray.foreach(println)

When I execute this script with the command:

spark-shell -i myscript

Everything works fine until a row contains a null value in its ts cell. If any row has a null ts value, I get several exceptions related to the fact that Spark expects a long value (8 bytes) but gets no bytes at all. I hit the same problem even when I only count the rows without displaying them:

15/01/29 15:21:35 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
com.datastax.driver.core.exceptions.InvalidTypeException: Invalid 64-bits long value, expecting 8 bytes but got 0
  at com.datastax.driver.core.TypeCodec$LongCodec.deserializeNoBoxing(TypeCodec.java:452)
  at com.datastax.driver.core.TypeCodec$DateCodec.deserialize(TypeCodec.java:826)
  at com.datastax.driver.core.TypeCodec$DateCodec.deserialize(TypeCodec.java:748)
  at com.datastax.driver.core.DataType.deserialize(DataType.java:606)
  at com.datastax.spark.connector.AbstractGettableData$.get(AbstractGettableData.scala:88)
  at org.apache.spark.sql.cassandra.CassandraSQLRow$$anonfun$fromJavaDriverRow$1.apply$mcVI$sp(CassandraSQLRow.scala:42)
  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
  at org.apache.spark.sql.cassandra.CassandraSQLRow$.fromJavaDriverRow(CassandraSQLRow.scala:41)
  at org.apache.spark.sql.cassandra.CassandraSQLRow$CassandraSQLRowReader$.read(CassandraSQLRow.scala:49)
  at org.apache.spark.sql.cassandra.CassandraSQLRow$CassandraSQLRowReader$.read(CassandraSQLRow.scala:46)
  at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:378)
  at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:378)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
  at scala.collection.Iterator$$anon$13.next(Iterator.scala:372)
  at com.datastax.spark.connector.util.CountingIterator.next(CountingIterator.scala:13)
  at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
  at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:366)
  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:211)
  at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
  at org.apache.spark.scheduler.Task.run(Task.scala:56)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)

How can I handle such null values? Do I have to use some function in my SQL query to replace the nulls with a default value, or is there a method or parameter I can use in my script so that Spark handles these null values?

Thanks for your help,

Best,

Nicolas

1 Answer:

Answer 0 (score: -1)

Null values can be handled by asking for Options. I suggest you try replacing the println function in your script with a custom function that works directly on the row values, something like:

(row) => println(row.getLong("id") + "," + row.getLongOption("ts"))
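
To make that concrete, here is a minimal sketch, assuming the ks.tabletest schema from the question, that reads the table through the connector's RDD API instead of Spark SQL; there, nullable columns can be fetched as Options rather than failing during deserialization:

import com.datastax.spark.connector._

// Read through the connector's RDD API; sc is the SparkContext
// that spark-shell already provides.
val rows = sc.cassandraTable("ks", "tabletest")

rows.take(100).foreach { row =>
  val id = row.getLong("id")        // assumed non-null (e.g. a primary key column)
  val ts = row.getDateOption("ts")  // Option[java.util.Date]: None when the cell is null
  println(id + "," + ts.getOrElse("<no timestamp>"))
}

The getDateOption call reflects my assumption that the connector maps Cassandra timestamps to java.util.Date; the getLongOption variant from the snippet above should behave the same way if you prefer the raw epoch value.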