Checkpointing many streaming sources

Posted: 2019-02-26 15:10:55

Tags: scala apache-spark apache-spark-sql apache-zeppelin spark-structured-streaming

I am working with Zeppelin, and I read many files from many sources in Spark Structured Streaming like this:

    val var1 = spark
      .readStream
      .schema(var1_raw)
      .option("sep", ",")
      .option("mode", "PERMISSIVE")
      .option("maxFilesPerTrigger", 100)
      .option("treatEmptyValuesAsNulls", "true")
      .option("newFilesOnly", "true")
      .csv(path_var1)

    val checkpoint_var1 = var1
      .writeStream
      .format("csv")
      .option("checkpointLocation", path_checkpoint_var1)
      .option("path", path_checkpoint)
      .option("header", true)
      .outputMode("append")
      .queryName("var1_backup")
      .start().awaitTermination()


    val var2 = spark
      .readStream
      .schema(var2_raw)
      .option("sep", ",")
      .option("mode", "PERMISSIVE")
      .option("maxFilesPerTrigger", 100)
      .option("treatEmptyValuesAsNulls", "true")
      .option("newFilesOnly", "true")
      .csv(path_var2)

    val checkpoint_var2 = var2
      .writeStream
      .format("csv")
      .option("checkpointLocation", path_checkpoint_var2)
      .option("path", path_checkpoint_2)
      .option("header", true)
      .outputMode("append")
      .queryName("var2_backup")
      .start().awaitTermination()

When I rerun the job, I get the following message: java.lang.IllegalArgumentException: Cannot start query with name var1_backup as a query with that name is already active
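For context, this error tends to appear because `.start().awaitTermination()` on the first query blocks the paragraph, and on a re-run in Zeppelin the previous query is still active under the same name. A minimal sketch (reusing the variables from the code above; this is not the asker's posted solution) of stopping leftover queries, starting both sinks without blocking, and waiting only once:

```scala
// Sketch only: assumes the `spark` session and the `var1`/`var2` streaming
// DataFrames and `path_*` variables defined in the question.

// Stop any queries still running from a previous Zeppelin run, so the
// names "var1_backup"/"var2_backup" can be reused.
spark.streams.active.foreach(_.stop())

val q1 = var1.writeStream
  .format("csv")
  .option("checkpointLocation", path_checkpoint_var1)
  .option("path", path_checkpoint)   // output dir; should differ from the checkpoint dir
  .option("header", true)
  .outputMode("append")
  .queryName("var1_backup")
  .start()                           // returns immediately, does not block

val q2 = var2.writeStream
  .format("csv")
  .option("checkpointLocation", path_checkpoint_var2)
  .option("path", path_checkpoint_2)
  .option("header", true)
  .outputMode("append")
  .queryName("var2_backup")
  .start()

// Block once, after both queries have been started.
spark.streams.awaitAnyTermination()
```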

***************** SOLUTION *******************

    val spark = SparkSession
      .builder
      .appName("test")
      .master("local[*]")
      .getOrCreate()
    spark.sparkContext.setCheckpointDir(path_checkpoint)

Then call the checkpoint function on the DataFrame.

1 answer:

Answer 0 (score: 0)

***************** SOLUTION *******************

    val spark = SparkSession
      .builder
      .appName("test")
      .master("local[*]")
      .getOrCreate()
    spark.sparkContext.setCheckpointDir(path_checkpoint)

Then call the checkpoint function on the DataFrame.
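A self-contained sketch of what "call the checkpoint function on the DataFrame" looks like, assuming Spark is on the classpath (the `/tmp/checkpoints` directory and the `df` DataFrame here are placeholders, not from the question):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("test")
  .master("local[*]")
  .getOrCreate()

// Directory where checkpoint files are materialized (placeholder path).
spark.sparkContext.setCheckpointDir("/tmp/checkpoints")

val df = spark.range(100).toDF("id")   // placeholder DataFrame

// checkpoint() persists the data to the checkpoint directory and
// returns a new DataFrame with a truncated logical plan.
val checkpointed = df.checkpoint()
```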