Spark - Non-time-based windows are not supported on streaming DataFrames/Datasets

Date: 2018-11-14 07:09:42

Tags: java apache-spark apache-spark-sql spark-streaming

I need to write a Spark SQL query with an inner select and a partition by clause. The problem is that I get an AnalysisException. I have already spent hours on this, without success using other approaches.

Exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets;;
Window [sum(cast(_w0#41 as bigint)) windowspecdefinition(deviceId#28, timestamp#30 ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS grp#34L], [deviceId#28], [timestamp#30 ASC NULLS FIRST]
+- Project [currentTemperature#27, deviceId#28, status#29, timestamp#30, wantedTemperature#31, CASE WHEN (status#29 = cast(false as boolean)) THEN 1 ELSE 0 END AS _w0#41]

I suspect the query is too complex to be implemented this way, but I don't know how to fix it.

    SparkSession spark = SparkUtils.getSparkSession("RawModel");

    // Streaming Dataset read from Kafka
    Dataset<RawModel> datasetMap = readFromKafka(spark);

    // Register the stream as a temp view so it can be queried with SQL
    // (createOrReplaceTempView replaces the deprecated registerTempTable)
    datasetMap.createOrReplaceTempView("test");

    Dataset<Row> res = spark.sql(
            " select deviceId, grp, avg(currentTemperature) as averageT, " +
            "        min(timestamp) as minTime, max(timestamp) as maxTime, " +
            "        count(*) as countFrame " +
            " from (select test.*, " +
            "              sum(case when status = 'false' then 1 else 0 end) " +
            "                over (partition by deviceId order by timestamp) as grp " +
            "       from test) test " +
            " group by deviceId, grp ");

Any suggestions would be greatly appreciated. Thanks.

1 Answer:

Answer 0 (score: 0)

I believe the problem is in the windowing specification:

over (partition by deviceId order by timestamp) 

The partitioning must be on a time-based column, which in your case is timestamp. The following should work:

over (partition by timestamp order by timestamp) 

That, of course, does not address the intent of your query. You could try the following, but it is unclear whether Spark will support it:

over (partition by timestamp, deviceId order by timestamp) 

And even if Spark does support it, it would still change the semantics of the query.
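As for what streaming does support: a time-based window aggregation via groupBy(window(...)), typically combined with a watermark. Below is a minimal sketch in the Dataset API; the 10-minute window and watermark durations are illustrative assumptions, and it does not reproduce your grp session logic, only the window form the error message asks for:

    import static org.apache.spark.sql.functions.*;

    // Time-based window aggregation, which Structured Streaming accepts.
    // Assumes "timestamp" is a TimestampType column; the 10-minute durations
    // are illustrative choices, not taken from the question.
    Dataset<Row> supported = datasetMap
            .withWatermark("timestamp", "10 minutes")       // bounds state for append mode
            .groupBy(
                    window(col("timestamp"), "10 minutes"), // time-based window
                    col("deviceId"))
            .agg(
                    avg("currentTemperature").alias("averageT"),
                    min("timestamp").alias("minTime"),
                    max("timestamp").alias("maxTime"),
                    count(lit(1)).alias("countFrame"));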

Update

Here is an authoritative source, from Tathagata Das, a key/core committer on Spark Streaming: http://apache-spark-user-list.1001560.n3.nabble.com/Does-partition-by-and-order-by-works-only-in-stateful-case-td31816.html
