Missing or invalid Spark-Kafka dependency detected

Date: 2018-09-16 14:27:58

Tags: scala apache-spark intellij-idea apache-kafka sbt

I have some basic Spark-Kafka code that I am trying to run.

I am using the IntelliJ IDE, and I created the Scala project with sbt (the build.sbt file is shown after the code). The code is as follows:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel

import java.util.regex.Pattern
import java.util.regex.Matcher
import org.apache.spark.streaming.kafka._
import kafka.serializer.StringDecoder
import Utilities._
object WordCount {
  def main(args: Array[String]): Unit = {

    val ssc = new StreamingContext("local[*]", "KafkaExample", Seconds(1))

    setupLogging()

    // Construct a regular expression (regex) to extract fields from raw Apache log lines
    val pattern = apacheLogPattern()

    // hostname:port for Kafka brokers, not Zookeeper
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    // List of topics you want to listen for from Kafka
    val topics = List("testLogs").toSet
    // Create our Kafka stream, which will contain (topic,message) pairs. We tack a
    // map(_._2) at the end in order to only get the messages, which contain individual
    // lines of data.
    val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics).map(_._2)

    // Extract the request field from each log line
    val requests = lines.map(x => {val matcher:Matcher = pattern.matcher(x); if (matcher.matches()) matcher.group(5)})

    // Extract the URL from the request
    val urls = requests.map(x => {val arr = x.toString().split(" "); if (arr.size == 3) arr(1) else "[error]"})

    // Reduce by URL over a 5-minute window sliding every second
    val urlCounts = urls.map(x => (x, 1)).reduceByKeyAndWindow(_ + _, _ - _, Seconds(300), Seconds(1))

    // Sort and print the results
    val sortedResults = urlCounts.transform(rdd => rdd.sortBy(x => x._2, false))
    sortedResults.print()

    // Kick it off
    ssc.checkpoint("/home/")
    ssc.start()
    ssc.awaitTermination()

  }


}

The details of the build.sbt file are as follows:

name := "Sample"
version := "1.0"
organization := "com.sundogsoftware"
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.2.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "1.4.1",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.4.1",
  "org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"
)

However, when I try to build the code, it produces the following errors:

Error: scalac: missing or invalid dependency detected while loading class file 'StreamingContext.class'. Could not access type Logging in package org.apache.spark, because it (or its dependencies) are missing. Check your build definition for missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.) A full rebuild may help if 'StreamingContext.class' was compiled against an incompatible version of org.apache.spark.

Error: scalac: missing or invalid dependency detected while loading class file 'DStream.class'. Could not access type Logging in package org.apache.spark, because it (or its dependencies) are missing. Check your build definition for missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.) A full rebuild may help if 'DStream.class' was compiled against an incompatible version of org.apache.spark.

1 Answer:

Answer 0 (score: 1):

When using different Spark libraries together, the versions of all of them should always match.

In addition, the Kafka version you target also matters, so the integration artifact should be, for example, spark-streaming-kafka-0-10_2.11 (with sbt's %% operator you write just spark-streaming-kafka-0-10, since the Scala suffix is appended automatically); the 0-10 integration also uses a different consumer API, as sketched at the end of this answer:

...
scalaVersion := "2.11.8"
val sparkVersion = "2.2.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-streaming-kafka-0-10_2.11" % sparkVersion,
  "org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"

Here is a useful site if you need to check the exact dependencies you should use: https://search.maven.org/
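
Note that switching from the old spark-streaming-kafka artifact to the 0-10 integration also changes the consumer API: KafkaUtils.createDirectStream no longer takes StringDecoder type parameters and a "metadata.broker.list" map, but a LocationStrategy and a ConsumerStrategy. A rough sketch of what the stream creation in the question could look like with spark-streaming-kafka-0-10 (the group.id value here is only a placeholder; the broker address, the testLogs topic, and ssc come from the question's code):

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

// Consumer configuration replaces the old "metadata.broker.list" map
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "wordcount-example",   // placeholder consumer group
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

// The stream now yields ConsumerRecord objects, so .value replaces map(_._2)
val lines = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](List("testLogs"), kafkaParams)
).map(_.value)

The rest of the processing (regex matching, windowed counts, sorting) can stay the same as in the original code.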