Spark SQL - How to select on dates stored as UTC millis from the epoch?

Asked: 2014-10-29 19:59:23

Tags: sql date apache-spark apache-spark-sql

I have been searching and have not found a solution for how to query on dates stored as UTC milliseconds from the epoch using Spark SQL. The schema I pull in from a NoSQL data source (JSON exported from MongoDB) has the target dates as:

 |-- dateCreated: struct (nullable = true)
 |    |-- $date: long (nullable = true)

The complete schema is as follows:

scala> accEvt.printSchema
root
 |-- _id: struct (nullable = true)
 |    |-- $oid: string (nullable = true)
 |-- appId: integer (nullable = true)
 |-- cId: long (nullable = true)
 |-- data: struct (nullable = true)
 |    |-- expires: struct (nullable = true)
 |    |    |-- $date: long (nullable = true)
 |    |-- metadata: struct (nullable = true)
 |    |    |-- another key: string (nullable = true)
 |    |    |-- class: string (nullable = true)
 |    |    |-- field: string (nullable = true)
 |    |    |-- flavors: string (nullable = true)
 |    |    |-- foo: string (nullable = true)
 |    |    |-- location1: string (nullable = true)
 |    |    |-- location2: string (nullable = true)
 |    |    |-- test: string (nullable = true)
 |    |    |-- testKey: string (nullable = true)
 |    |    |-- testKey2: string (nullable = true)
 |-- dateCreated: struct (nullable = true)
 |    |-- $date: long (nullable = true)
 |-- id: integer (nullable = true)
 |-- originationDate: struct (nullable = true)
 |    |-- $date: long (nullable = true)
 |-- processedDate: struct (nullable = true)
 |    |-- $date: long (nullable = true)
 |-- receivedDate: struct (nullable = true)
 |    |-- $date: long (nullable = true)

My goal is to write queries along the lines of:

SELECT COUNT(*) FROM myTable WHERE dateCreated BETWEEN [dateStoredAsLong0] AND [dateStoredAsLong1]

My process so far has been:

scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@29200d25

scala> val accEvt = sqlContext.jsonFile("/home/bkarels/mongoexport/accomplishment_event.json")

...
14/10/29 15:03:38 INFO SparkContext: Job finished: reduce at JsonRDD.scala:46, took 4.668981083 s
accEvt: org.apache.spark.sql.SchemaRDD = 
SchemaRDD[6] at RDD at SchemaRDD.scala:103

scala> accEvt.registerAsTable("accomplishmentEvent")

(At this point the following baseline query executes successfully)

scala> sqlContext.sql("select count(*) from accomplishmentEvent").collect.foreach(println)
...
[74475]

Now, the voodoo that I cannot get right is how to form my select statement to reason about the dates. For example, the following executes without error, but returns zero rather than the count of all records (74475).

scala> sqlContext.sql("select count(*) from accomplishmentEvent where processedDate >= '1970-01-01'").collect.foreach(println)
...
[0]

I have also tried some ugliness like:

scala> val now = new java.util.Date()
now: java.util.Date = Wed Oct 29 15:05:15 CDT 2014

scala> val today = now.getTime
today: Long = 1414613115743

scala> val thirtydaysago = today - (30 * 24 * 60 * 60 * 1000)
thirtydaysago: Long = 1416316083039


scala> sqlContext.sql("select count(*) from accomplishmentEvent where processedDate <= %s and processedDate >= %s".format(today,thirtydaysago)).collect.foreach(println)

Per suggestion, I have selected on the specified field to make sure it is valid. So:

scala> sqlContext.sql("select receivedDate from accomplishmentEvent limit 10").collect.foreach(println)

returns:

[[1376318850033]]
[[1376319429590]]
[[1376320804289]]
[[1376320832835]]
[[1376320832960]]
[[1376320835554]]
[[1376320914480]]
[[1376321041899]]
[[1376321109341]]
[[1376321121469]]

Then, extending to try to get some kind of date comparison working, I tried:

scala> sqlContext.sql("select cId from accomplishmentEvent where receivedDate.date > '1970-01-01' limit 5").collect.foreach(println)

which results in the error:

java.lang.RuntimeException: No such field date in StructType(ArrayBuffer(StructField($date,LongType,true)))
...

Prefixing our field name with $ yields a different kind of error:

scala> sqlContext.sql("select cId from accomplishmentEvent where receivedDate.$date > '1970-01-01' limit 5").collect.foreach(println)
java.lang.RuntimeException: [1.69] failure: ``UNION'' expected but ErrorToken(illegal character) found

select cId from accomplishmentEvent where receivedDate.$date > '1970-01-01' limit 5

Clearly I do not understand how to select on dates stored in this way; can anyone help me fill in this gap?

I am relatively new to both Scala and Spark, so forgive me if this is an elementary question, but my searches have turned up empty on the forums and in the Spark documentation.

Thank you.

1 Answer:

Answer 0 (score: 1):

Your JSON is not flat, so fields below the top level need to be addressed with qualified names, such as dateCreated.$date. Your specific date fields are all of type long, so you will need to do numerical comparisons on them, and it looks like you were on the right track there.
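For example, to get bounds for such a comparison you can convert calendar dates to epoch milliseconds on the Scala side first (a minimal sketch; the dates and variable names here are just for illustration):

import java.text.SimpleDateFormat
import java.util.TimeZone

// parse calendar dates into UTC epoch milliseconds for numeric comparisons
val fmt = new SimpleDateFormat("yyyy-MM-dd")
fmt.setTimeZone(TimeZone.getTimeZone("UTC"))
val lowerBound = fmt.parse("2013-08-01").getTime  // Long: millis since the epoch
val upperBound = fmt.parse("2013-09-01").getTime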

An additional problem is that your field names contain "$" characters, and Spark SQL will not let you query on them. One solution is that instead of reading the JSON directly as a SchemaRDD (as you have done), you first read it as an RDD[String], use the map method to perform the Scala string manipulations of your choice, and then use the SQLContext's jsonRDD method to create the SchemaRDD.

val lines = sc.textFile(...)
// you may want something less naive than global replacement of all "$" chars
val linesFixed = lines.map(s => s.replaceAllLiterally("$", ""))
val accEvt = sqlContext.jsonRDD(linesFixed)
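From there, assuming the naive replacement above has turned $date into date, you can register the table and query the nested fields with qualified names and numeric bounds, for example:

accEvt.registerAsTable("accomplishmentEvent")

// bounds are epoch milliseconds; the values here are hypothetical
val lower = 1376318000000L
val upper = 1376322000000L
sqlContext.sql(
  "select count(*) from accomplishmentEvent where dateCreated.date >= %s and dateCreated.date <= %s"
    .format(lower, upper)
).collect.foreach(println)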

I have tested this with Spark 1.1.0.

For reference, the lack of quoting capability in Spark SQL has been noted in this bug report and others, and it appears that a fix was recently checked in, but it will take some time to make it into a release.
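Once quoting is available, something along these lines ought to work against the original field names (an untested sketch; backtick quoting of the identifier is an assumption here):

sqlContext.sql("select cId from accomplishmentEvent where receivedDate.`$date` > 0 limit 5").collect.foreach(println)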