Exception: Complete output mode not supported

Date: 2021-02-23 13:45:54

Tags: java apache-spark-sql spark-structured-streaming

I created a Spark Structured Streaming simulation for my tutorial. When I run it with outputMode("complete"), I get the following error.

Error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Complete output mode not supported when there are no streaming aggregations on streaming DataFrames/Datasets;

A sample row from my dataset:

2006-04-01 00:00:00.000 +0200,Partly Cloudy,rain,9.472222222222221,7.3888888888888875,0.89,14.1197,251.0,15.826300000000002,0.0,1015.13,Partly cloudy throughout the day.

Code of the first process (partitions the data by Summary):

System.setProperty("hadoop.home.dir","C:\\hadoop-common-2.2.0-bin-master");
SparkSession sparkSession = SparkSession.builder()
                .appName("SparkStreamingMessageListener")
                .master("local")
                .getOrCreate();
StructType schema = new StructType()
                .add("Formatted Date", "String")
                .add("Summary","String")
                .add("Precip Type", "String")
                .add("Temperature", "Double")
                .add("Apparent Temperature", "Double")
                .add("Humidity","Double")
                .add("Wind Speed (km/h)","Double")
                .add("Wind Bearing (degrees)","Double")
                .add("Visibility (km)","Double")
                .add("Loud Cover","Double")
                .add("Pressure(milibars)","Double")
                .add("Dailiy Summary","String");
Dataset<Row> formatted_date = sparkSession.read().schema(schema).option("header", true).csv("C:\\Users\\Kaan\\Desktop\\Kaan Proje\\SparkStreamingListener\\archivecsv\\weatherHistory.csv");
Dataset<Row> avg = formatted_date.groupBy("Summary", "Precip Type").avg("Temperature").sort(functions.desc("avg(Temperature)"));
formatted_date.write().partitionBy("Summary").csv("C:\\Users\\Kaan\\Desktop\\Kaan Proje\\SparkStreamingListener\\archivecsv\\weatherHistoryFile\\");

Code of the second (listener) process:

SparkSession sparkSession = SparkSession.builder()
                .appName("SparkStreamingMessageListener1")
                .master("local")
                .getOrCreate();
StructType schema1 = new StructType()
                .add("Formatted Date", "String")
                .add("Precip Type", "String")
                .add("Temperature", "Double")
                .add("Apparent Temperature", "Double")
                .add("Humidity","Double")
                .add("Wind Speed (km/h)","Double")
                .add("Wind Bearing (degrees)","Double")
                .add("Visibility (km)","Double")
                .add("Loud Cover","Double")
                .add("Pressure(milibars)","Double")
                .add("Dailiy Summary","String");
Dataset<Row> rawData = sparkSession.readStream().schema(schema1).option("sep", ",").csv("C:\\Users\\Kaan\\Desktop\\Kaan Proje\\sparkStreamingWheather\\*");
Dataset<Row> heatData = rawData.select("Temperature", "Precip Type").where("Temperature>10");
StreamingQuery start = heatData.writeStream().outputMode("complete").format("console").start();
start.awaitTermination();

I simulate the stream by copying the partitioned files into the listener's source directory. I would be glad if you could help. Thanks.

1 Answer:

Answer 0 (score: 1)

The error is quite specific about the actual problem: output mode "complete" is not supported for your type of query, because it contains no streaming aggregation.

As stated in the Structured Streaming programming guide on Output Modes:


“不支持完全模式,因为在结果表中保留所有未聚合的数据是不可行的。”

Switching to append mode solves the problem:

StreamingQuery start = heatData.writeStream().outputMode(cappend").format("console").start()