Spark streaming job aborted due to stage failure when reading from a Kafka topic

Asked: 2017-06-08 10:21:02

Tags: scala apache-spark streaming apache-kafka spark-streaming

I am new to Spark and Kafka, and I am using Spark Streaming to process data from a Kafka topic. For now, I just want to print the records to the console. I have a mini cluster with Spark on two nodes (Scala 2.12.2 and Spark 2.1.1) and one node running Kafka (kafka_2.11-0.10.2.0). However, when I submit my code, I get this error:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 1.3.64.64, executor 1): java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
    at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.<init>(KafkaRDD.scala:193)
    at org.apache.spark.streaming.kafka010.KafkaRDD.compute(KafkaRDD.scala:185)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Is this related to the versions? Or is my code perhaps incorrect?

Here is my code:

import java.util.UUID
import org.apache.kafka.clients.consumer.ConsumerRecord
import runtime.ScalaRunTime.stringOf
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object followProduction {

  def main(args: Array[String]) = {

    val sparkConf = new SparkConf().setMaster("spark://<real adress here : 10. ...>:7077").setAppName("followProcess")
    val streamContext = new StreamingContext(sparkConf, Seconds(2))

    streamContext.checkpoint("checkpoint")

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "1.3.64.66:9094",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> s"${UUID.randomUUID().toString}",
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val topics = Array("test")
    val stream = KafkaUtils.createDirectStream[String, String](
      streamContext,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )

    stream.print()

    //stream.map(record => (record.key, record.value)).count().print()

    streamContext.start()
    streamContext.awaitTermination()
  }
}

Here is my build.sbt:

name := "test"
version := "1.0"
scalaVersion := "2.12.2"

libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "2.1.1" % "provided"
libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "2.1.1" % "provided"
libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-10_2.10" % "2.0.0"

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

Any help would be appreciated. Thank you for your time.

2 Answers:

Answer 0 (score: 2)

Spark 2.1.x is compiled against Scala 2.11, not 2.12.

Try:

scalaVersion := "2.11.11"

Any 2.11.x version will work.

Also, your Spark Kafka streaming dependency refers to Scala 2.10 when you need 2.11:

libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.1.1"
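
Putting the two fixes together, a corrected build.sbt would look roughly like the sketch below; the key points are the _2.11 artifact suffixes and a consistent 2.1.1 version, while the sbt-assembly settings are unchanged from your question:

name := "test"
version := "1.0"
scalaVersion := "2.11.11"

// All Spark artifacts must share the _2.11 suffix and the same Spark version.
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.1.1" % "provided"
libraryDependencies += "org.apache.spark" % "spark-streaming_2.11" % "2.1.1" % "provided"
libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.1.1"

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

Equivalently, you can use sbt's %% operator (e.g. "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.1") so that sbt appends the Scala binary suffix matching scalaVersion for you.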

Answer 1 (score: 0)

Besides your version mismatch, since you are running against a Spark cluster, you also need to ship all the JARs (libraries) your application uses to the Spark worker nodes and the driver.

You can submit them via SparkConf using the .setJars(libs) method.

Something like this:

lazy val conf: SparkConf = new SparkConf()
    .setMaster(sparkMaster)
    .setAppName(sparkAppName)
    .set("spark.app.id", sparkAppId)
    .set("spark.submit.deployMode", "cluster")
    .setJars(libs) //setting jars for sparkContext

Note: libs: Seq[String], i.e. a sequence of library paths.
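
As an illustration only (the local "lib" directory below is an assumption, not something from your setup), libs could be built by listing the jars in a dependency folder:

import java.io.File

// Sketch: collect the absolute paths of every jar in a local "lib" folder
// (hypothetical path) and pass the resulting Seq[String] to setJars.
val libDir = new File("lib")
val libs: Seq[String] =
  Option(libDir.listFiles()).getOrElse(Array.empty[File])
    .filter(_.getName.endsWith(".jar"))
    .map(_.getAbsolutePath)
    .toSeq

Alternatively, if you build a single fat jar with sbt-assembly, passing just that one jar (or handing it to spark-submit, which distributes the application jar to the executors) achieves the same thing.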