I'm currently working on a Spark Structured Streaming job, and I seem to get the following warning on every batch interval:
WARN HDFSBackedStateStoreProvider: The state for version N doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
N increments with each batch.
I see this both in local mode (with checkpointing disabled) and in the logs when running on YARN (EMR).
The question is: can this safely be ignored? With debug logging turned on, HDFSBackedStateStoreProvider reports spending some time reading the snapshot and delta files, so I'm somewhat concerned.
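For reference, this is roughly how I enabled the debug logging (a minimal sketch using the log4j 1.x API that Spark 2.x ships with; the logger name is the provider's fully qualified class name):

    import org.apache.log4j.{Level, Logger}

    // Raise the state store provider's log level to DEBUG so the
    // snapshot/delta read timings show up in the driver/executor logs.
    Logger
      .getLogger("org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider")
      .setLevel(Level.DEBUG)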
Here is my seemingly minimal SparkConf:
import org.apache.spark.{SparkConf, SparkContext}

val sparkConf: SparkConf = {
  val conf = new SparkConf()
    .setAppName("Structured Streaming")
    .set("spark.sql.autoBroadcastJoinThreshold", "-1")
    .set("spark.speculation", "false")
  if (App.isLocal)
    // Local development: in-process master, relaxed Cassandra consistency
    conf
      .set("spark.cassandra.output.consistency.level", "LOCAL_ONE")
      .setMaster("local[*]")
  else
    // Cluster (YARN/EMR): Cassandra connection/auth, executor log rolling, metrics
    conf
      .set("spark.cassandra.connection.host", PropertyLoader.getProperty("cassandra.contactPoints"))
      .set("spark.cassandra.connection.local_dc", PropertyLoader.getProperty("cassandra.localDC"))
      .set("spark.cassandra.connection.ssl.enabled", "true")
      .set("spark.cassandra.connection.ssl.trustStore.path", PropertyLoader.truststorePath)
      .set("spark.cassandra.connection.ssl.trustStore.password", PropertyLoader.getProperty("cassandra.truststorePassword"))
      .set("spark.cassandra.auth.username", PropertyLoader.getProperty("cassandra.username"))
      .set("spark.cassandra.auth.password", PropertyLoader.getProperty("cassandra.password"))
      .set("spark.executor.logs.rolling.maxRetainedFiles", "20")
      .set("spark.executor.logs.rolling.maxSize", "524288000")
      .set("spark.executor.logs.rolling.strategy", "size")
      .set("spark.cleaner.referenceTracking.cleanCheckpoints", "true")
      .set("spark.sql.streaming.metricsEnabled", "true")
      .setJars(Array[String](SparkContext.jarOfClass(getClass).get))
}
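For context, the conf feeds into the session and a stateful query roughly like this (a hedged sketch, not my actual job: the rate source, console sink, and the checkpoint path /tmp/checkpoints are placeholders):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.config(sparkConf).getOrCreate()

    // Any stateful query (aggregation, dedup, etc.) goes through the
    // state store, which is what emits the warning above.
    val counts = spark.readStream
      .format("rate")   // placeholder source
      .load()
      .groupBy("value")
      .count()

    counts.writeStream
      .format("console")  // placeholder sink
      .outputMode("complete")
      .option("checkpointLocation", "/tmp/checkpoints")  // placeholder path
      .start()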