KafkaUtils class not found in Spark Streaming

Asked: 2014-12-30 18:49:36

Tags: sbt apache-spark apache-kafka

I have just begun with Spark Streaming and I am trying to build a sample application that counts words from a Kafka stream. Although it compiles with sbt package, when I run it I get a NoClassDefFoundError. This post seems to have the same problem, but the solution there is for Maven and I have not been able to reproduce it with sbt.

KafkaApp.scala

import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka._

object KafkaApp {
  def main(args: Array[String]) {

    val conf = new SparkConf().setAppName("kafkaApp").setMaster("local[*]")
    val ssc = new StreamingContext(conf, Seconds(1))
    val kafkaParams = Map(
        "zookeeper.connect" -> "localhost:2181",
        "zookeeper.connection.timeout.ms" -> "10000",
        "group.id" -> "sparkGroup"
    )

    val topics = Map(
        "test" -> 1
    )

    // stream of (topic, ImpressionLog)
    val messages = KafkaUtils.createStream(ssc, kafkaParams, topics, storage.StorageLevel.MEMORY_AND_DISK)
    println(s"Number of words: %{messages.count()}")
  }
}

build.sbt

name := "Simple Project"

version := "1.1"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % "1.1.1",
    "org.apache.spark" %% "spark-streaming" % "1.1.1",
    "org.apache.spark" %% "spark-streaming-kafka" % "1.1.1"
)

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"

And I submit it with:

bin/spark-submit \
  --class "KafkaApp" \
  --master local[4] \
  target/scala-2.10/simple-project_2.10-1.1.jar

The error:

14/12/30 19:44:57 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@192.168.5.252:65077/user/HeartbeatReceiver
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/kafka/KafkaUtils$
    at KafkaApp$.main(KafkaApp.scala:28)
    at KafkaApp.main(KafkaApp.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:329)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.streaming.kafka.KafkaUtils$
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)

9 Answers:

Answer 0 (score: 17):

spark-submit does not automatically put in the package containing KafkaUtils. You need to have it in your project JAR. For that you need to build an all-inclusive über jar with sbt assembly. Here is an example build.sbt:

https://github.com/tdas/spark-streaming-external-projects/blob/master/kafka/build.sbt

Obviously you also need to add the assembly plugin to sbt:

https://github.com/tdas/spark-streaming-external-projects/tree/master/kafka/project
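In outline, the idea is to mark the Spark artifacts that spark-submit already supplies at runtime as "provided", keep spark-streaming-kafka as a bundled dependency, and build with sbt assembly. A minimal sketch for this question's Spark 1.1.1 build, assuming sbt-assembly 0.14.x (the linked repo may differ in details):

name := "Simple Project"

version := "1.1"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // provided: spark-submit supplies these at runtime, so keep them out of the fat jar
  "org.apache.spark" %% "spark-core"            % "1.1.1" % "provided",
  "org.apache.spark" %% "spark-streaming"       % "1.1.1" % "provided",
  // bundled: this is the artifact that contains KafkaUtils and must go into the fat jar
  "org.apache.spark" %% "spark-streaming-kafka" % "1.1.1"
)

and in project/plugins.sbt:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.1")

Then run sbt assembly and pass the resulting fat jar from target/scala-2.10/ to spark-submit instead of the plain package jar. Duplicate files across the merged jars may also require merge strategies; see Answer 2 below.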

Answer 1 (score: 7):

Try including all the dependency jars when submitting the application:

./spark-submit --name "SampleApp" --deploy-mode client --master spark://host:7077 \
  --class com.stackexchange.SampleApp \
  --jars $SPARK_INSTALL_DIR/spark-streaming-kafka_2.10-1.3.0.jar,$KAFKA_INSTALL_DIR/libs/kafka_2.10-0.8.2.0.jar,$KAFKA_INSTALL_DIR/libs/metrics-core-2.2.0.jar,$KAFKA_INSTALL_DIR/libs/zkclient-0.3.jar \
  spark-example-1.0-SNAPSHOT.jar

Answer 2 (score: 2):

The following build.sbt worked for me. It also requires you to put the sbt-assembly plugin in a file under the project/ directory.

build.sbt

name := "NetworkStreaming" // https://github.com/sbt/sbt-assembly/blob/master/Migration.md#upgrading-with-bare-buildsbt

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-streaming_2.10" % "1.4.1",
  "org.apache.spark" % "spark-streaming-kafka_2.10" % "1.4.1",         // kafka
  "org.apache.hbase" % "hbase" % "0.92.1",
  "org.apache.hadoop" % "hadoop-core" % "1.0.2",
  "org.apache.spark" % "spark-mllib_2.10" % "1.3.0"
)

assemblyMergeStrategy in assembly := {
  case m if m.toLowerCase.endsWith("manifest.mf")          => MergeStrategy.discard
  case m if m.toLowerCase.matches("meta-inf.*\\.sf$")      => MergeStrategy.discard
  case "log4j.properties"                                  => MergeStrategy.discard
  case m if m.toLowerCase.startsWith("meta-inf/services/") => MergeStrategy.filterDistinctLines
  case "reference.conf"                                    => MergeStrategy.concat
  case _                                                   => MergeStrategy.first
}

project/plugins.sbt

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.1")

Answer 3 (score: 0):

I ran into the same problem, and I solved it by building the jar with dependencies.

Add the following code to pom.xml:

<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <testSourceDirectory>src/test/java</testSourceDirectory>
    <plugins>
      <!--
                   Bind the maven-assembly-plugin to the package phase
        this will create a jar file without the storm dependencies
        suitable for deployment to a cluster.
       -->
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
          <archive>
            <manifest>
              <mainClass></mainClass>
            </manifest>
          </archive>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
</build>    

Then run mvn package and submit the resulting "example-jar-with-dependencies.jar".
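
For example (the class name is taken from the question; the jar name depends on the artifactId in your pom.xml):

mvn package
bin/spark-submit \
  --class "KafkaApp" \
  --master local[4] \
  target/example-jar-with-dependencies.jar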

Answer 4 (score: 0):

I added the dependencies externally: project -> Properties -> Java Build Path -> Libraries -> Add External JARs, and added the required jars.

This solved my problem.

Answer 5 (score: 0):

Using Spark 1.6 did the job for me, without the hassle of handling so many external jars... managing them all can get really complicated...

Answer 6 (score: 0):

Instead of banging your head trying to get sbt and build.sbt to cooperate, you can also just download the jar file and put it in the Spark lib folder, since it does not ship with the Spark installation:

http://central.maven.org/maven2/org/apache/spark/spark-streaming-kafka-0-10_2.10/2.1.1/spark-streaming-kafka-0-10_2.10-2.1.1.jar

Copy it to:

/usr/local/spark/spark-2.1.0-bin-hadoop2.6/jars/
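
For example (paths and version as given above; make sure the Scala suffix of the jar matches your Spark build):

wget http://central.maven.org/maven2/org/apache/spark/spark-streaming-kafka-0-10_2.10/2.1.1/spark-streaming-kafka-0-10_2.10-2.1.1.jar
cp spark-streaming-kafka-0-10_2.10-2.1.1.jar /usr/local/spark/spark-2.1.0-bin-hadoop2.6/jars/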

Answer 7 (score: 0):

import org.apache.spark.streaming.kafka.KafkaUtils

and use the following in build.sbt:


name := "kafka"

version := "0.1"

scalaVersion := "2.11.12"

retrieveManaged := true

fork := true

//libraryDependencies += "org.apache.spark" % "spark-streaming_2.11" % "2.2.0"
//libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % "2.1.0"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0"

//libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.0"

libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.2.0"

// https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-8
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.2.0" % "provided"

// https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-8-assembly
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-8-assembly" % "2.2.0"

This should resolve the issue.

Answer 8 (score: 0):

Use the --packages argument with spark-submit; it accepts Maven packages in the comma-separated format group:artifact:version,...
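
For example, matching the 0-8 connector used in Answer 7 (the application jar name is illustrative):

bin/spark-submit \
  --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 \
  --class "KafkaApp" \
  --master local[4] \
  target/scala-2.11/kafka_2.11-0.1.jar

Spark then resolves the connector and its transitive dependencies from Maven Central at submit time, so they do not need to be bundled into the application jar.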