Zeppelin 6.5 + Apache Kafka connector for Structured Streaming 2.0.2

Date: 2017-01-06 16:03:40

Tags: streaming apache-zeppelin apache-spark-2.0 apache-kafka-connect databricks

I am trying to run a Zeppelin notebook containing Spark's Structured Streaming example with the Kafka connector.

>Kafka is up and running on localhost port 9092

>from the Zeppelin notebook, sc.version returns String = 2.0.2

Here is my environment:

kafka: kafka_2.10-0.10.1.0

zeppelin: zeppelin-0.6.2-bin-all

spark: spark-2.0.2-bin-hadoop2.7

Here is the code in my Zeppelin notebook:

import org.apache.spark.sql.functions.{explode, split}


// Setup connection to Kafka
val kafka = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // comma-separated list of host:port
  .option("subscribe", "twitter")                      // comma-separated list of topics
  .option("startingOffsets", "latest")                 // read data from the end of the stream
  .load()
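For context, a `readStream` source by itself does nothing until a streaming query is started against it. The following is a minimal sketch of how this snippet could be completed, assuming a Spark 2.x session named `spark` with the Kafka data source on the classpath and a broker at `localhost:9092`; the word-split transformation and the `console` sink are illustrative choices, not from the original post:

```scala
import org.apache.spark.sql.functions.{explode, split}
import spark.implicits._ // enables the $"colName" column syntax

// Read from Kafka; the payload arrives as bytes, so cast it to text
val lines = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "twitter")
  .option("startingOffsets", "latest")
  .load()
  .selectExpr("CAST(value AS STRING) AS value")

// Split each message into words, matching the explode/split imports above
val words = lines.select(explode(split($"value", " ")).as("word"))

// Print each micro-batch to the notebook output
val query = words.writeStream
  .format("console")
  .start()
```

Note that `query.awaitTermination()` would block the notebook paragraph indefinitely, so in Zeppelin it is usually omitted and the query stopped explicitly with `query.stop()`.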

Here is the error I get when running the notebook:


import org.apache.spark.sql.functions.{explode, split}
java.lang.ClassNotFoundException: Failed to find data source: kafka. Please find packages at https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects
  at org.apache.spark.sql.execution.datasources.DataSource.lookupDataSource(DataSource.scala:148)
  at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:79)
  at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:79)
  at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:218)
  at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:80)
  at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:80)
  at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:30)
  at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:124)
  ... 86 elided
Caused by: java.lang.ClassNotFoundException: kafka.DefaultSource
  at scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5$$anonfun$apply$1.apply(DataSource.scala:132)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5$$anonfun$apply$1.apply(DataSource.scala:132)
  at scala.util.Try$.apply(Try.scala:192)

Any help or suggestions would be greatly appreciated.

Thnx

1 Answer:

Answer 0 (score: 1):

You may have already figured this out, but for the benefit of others: you have to add the following to zeppelin-env.sh.j2

export SPARK_SUBMIT_OPTIONS="--packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.0"
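To make the change concrete, here is a hedged sketch of the relevant fragment of the rendered `zeppelin-env.sh`; the `conf/` location is the usual one but may differ per install, and the Zeppelin Spark interpreter must be restarted for the setting to take effect. Note that for the Structured Streaming `kafka` source in the question, the package that provides `format("kafka")` is `spark-sql-kafka-0-10` (it appears in the longer dependency list below):

```shell
# In $ZEPPELIN_HOME/conf/zeppelin-env.sh — restart the Spark interpreter afterwards.
# Quoting matters: the value contains a space between the flag and its argument.
export SPARK_SUBMIT_OPTIONS="--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0"
```

With this in place, spark-submit (which Zeppelin invokes under the hood) resolves the connector from Maven Central at interpreter startup.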

If you are using a Kafka client, you may also need to include additional dependencies:

--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0,org.apache.spark:spark-sql_2.11:2.1.0,org.apache.kafka:kafka_2.11:0.10.0.1,org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.0,org.apache.kafka:kafka-clients:0.10.0.1