Resolving the following error in Spark Streaming

Time: 2018-09-17 10:08:55

Tags: apache-spark spark-streaming

This error appears when I try to write data to HDFS. The job otherwise runs fine, and then this error shows up, so it clearly looks like there is a problem with the data.

Does this mean there is nothing in my output DStream? Below is the error I get from the code that writes the DStream to HDFS:

18/09/15 04:13:43 ERROR JobScheduler: Error running job streaming job 1536977640000 ms.0
java.util.NoSuchElementException: None.get
    at scala.None$.get(Option.scala:347)
    at scala.None$.get(Option.scala:345)
    at org.apache.spark.sql.execution.command.DataWritingCommand$class.metrics(DataWritingCommand.scala:49)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.metrics$lzycompute(InsertIntoHadoopFsRelationCommand.scala:46)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.metrics(InsertIntoHadoopFsRelationCommand.scala:46)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.metrics$lzycompute(commands.scala:100)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.metrics(commands.scala:100)
    at org.apache.spark.sql.execution.SparkPlanInfo$.fromSparkPlan(SparkPlanInfo.scala:58)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
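
For reference, the top of the trace shows the None.get being thrown while DataWritingCommand lazily builds its write metrics (metrics$lzycompute), triggered by a DataFrameWriter.save call issued from inside the streaming job; in the Spark 2.3 line that metrics initialization appears to look up the currently active SparkContext, so the None.get suggests no context was registered as active at that point (for example after recovering from a checkpoint). A minimal sketch of the kind of foreachRDD write that reaches this code path is shown below, following the idiom from the Spark Streaming programming guide of obtaining the SparkSession from the RDD's SparkContext; the names Record, stream, and hdfsPath are illustrative assumptions, not the original code.

import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext, Time}
import org.apache.spark.streaming.dstream.DStream

// Hypothetical record type standing in for the real payload.
case class Record(key: String, value: String)

object DStreamToHdfsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("dstream-to-hdfs")
    val ssc  = new StreamingContext(conf, Seconds(60))

    // Assumed example source; the real job may read from Kafka or elsewhere.
    val lines = ssc.socketTextStream("localhost", 9999)
    val stream: DStream[Record] = lines.map(line => Record(line.hashCode.toString, line))

    val hdfsPath = "hdfs:///tmp/stream-output" // assumed output location

    stream.foreachRDD { (rdd: RDD[Record], batchTime: Time) =>
      // Get the SparkSession from the RDD's SparkContext inside foreachRDD
      // (the pattern from the Spark Streaming programming guide) rather than
      // capturing a session created before the StreamingContext; a stale or
      // missing active context is one way the metrics lookup can hit None.get.
      val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
      import spark.implicits._

      if (!rdd.isEmpty()) {
        rdd.toDF()
          .write
          .mode("append")
          .parquet(s"$hdfsPath/batch-${batchTime.milliseconds}")
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

Whether this matches the failing job cannot be told from the post alone, but a file write reached through DataFrameWriter.save inside foreachRDD like this is consistent with the InsertIntoHadoopFsRelationCommand frames in the trace above.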

0 Answers:

No answers yet