Why can't I connect to Kafka with PySpark? Kafka_2.12-2.3.0 and Spark_2.4.4 (or 2.3.0 or 2.3.4)

Asked: 2019-10-23 09:24:58

Tags: apache-spark pyspark apache-kafka pyspark-sql spark-structured-streaming

I cannot connect from Spark 2.4.4 Structured Streaming to Kafka 2.12-2.3.0 with the Python code below. My Scala version is 2.11.12 and my OpenJDK is 1.8.0_222.

from pyspark.sql import SparkSession

# Create (or reuse) the Spark session
spark = SparkSession\
 .builder\
 .appName("kafka-spark-structured-stream")\
 .getOrCreate()

# Subscribe to the "test" topic, reading from the earliest offset
dsraw = spark\
 .readStream\
 .format("kafka")\
 .option("kafka.bootstrap.servers", "**kafka-broker-ID**:9092")\
 .option("subscribe", "test")\
 .option("startingOffsets", "earliest")\
 .load()
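
(For reference, once load() succeeds the stream would be started with a sink roughly like this — a minimal sketch, using a console sink just for testing:)

# Minimal sketch of consuming the stream once load() works
# (console sink is for testing only; it needs no checkpoint location):
query = dsraw\
 .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")\
 .writeStream\
 .format("console")\
 .option("truncate", "false")\
 .start()
query.awaitTermination()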

Here are the spark-submit invocations I tried repeatedly while changing versions (for example, 2.11 to 2.12), all of which still failed:

$ spark-submit --jars /opt/hadoop/spark/jars/spark-sql-kafka-0-10_2.11-2.4.4.jar,/opt/hadoop/spark/jars/kafka-clients-0.10.1.0.jar --master yarn --deploy-mode client /opt/hadoop/spark/spark-application/main/kafka-spark-structured-stream.py

$ spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4 --master yarn --deploy-mode client /opt/hadoop/spark/spark-application/main/kafka-spark-structured-stream.py

No matter which variation I try with spark-submit, I keep getting the following error:

2019-10-23 15:40:37,096 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@cf7aac8{/SQL/execution,null,AVAILABLE,@Spark}
2019-10-23 15:40:37,096 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution/json.
2019-10-23 15:40:37,097 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5c593907{/SQL/execution/json,null,AVAILABLE,@Spark}
2019-10-23 15:40:37,118 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /static/sql.
2019-10-23 15:40:37,120 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@38634422{/static/sql,null,AVAILABLE,@Spark}
2019-10-23 15:40:40,573 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
check_1======check_1======check_1======check_1======check_1======check_1======check_1======check_1======check_1======check_1======
Traceback (most recent call last):
  File "/opt/hadoop/spark/spark-application/main/test.py", line 15, in <module>
    .option("startingOffsets", "earliest").load()
  File "/opt/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 172, in load
  File "/opt/hadoop/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/opt/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/opt/hadoop/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o36.load.
: java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.kafka010.KafkaSourceProvider could not be instantiated
        at java.util.ServiceLoader.fail(ServiceLoader.java:232)
        at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
        at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
        at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
        at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
        at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
        at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
        at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:630)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: org.apache.spark.internal.Logging.$init$(Lorg/apache/spark/internal/Logging;)V
        at org.apache.spark.sql.kafka010.KafkaSourceProvider.<init>(KafkaSourceProvider.scala:44)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at java.lang.Class.newInstance(Class.java:442)
        at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
        ... 24 more


Running spark-submit --version gives the following:

(base) [hadoop@master ~]$ spark-submit --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.4
      /_/

Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_222
Branch
Compiled by user  on 2019-08-27T21:21:38Z
Revision
Url
Type --help for more information.

2 Answers:

Answer 0 (score: 1)

I eventually solved it by downgrading to the specific Spark version 2.4.0. These are the versions I used:

spark=2.4.0
kafka=2.12-2.3.0
scala=2.11.12
openJDK=1.8.0_222

And here is the spark-submit command that worked:

spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0,org.apache.kafka:kafka-clients:2.3.0 --master yarn --deploy-mode client /opt/hadoop/spark/spark-application/main/kafka-spark-structured-stream.py
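
A note for anyone hitting this later: the _2.11 suffix and the 2.4.0 version in the package coordinate both have to match the Spark build exactly. One way to avoid hard-coding the wrong value is to derive the coordinate from the runtime (a sketch; it assumes your pip-installed PySpark matches the cluster's Spark build and that the build uses Scala 2.11):

import pyspark

# Build the --packages coordinate from the runtime version so the
# connector always matches the Spark release (assumes a Scala 2.11 build).
scala_version = "2.11"
coordinate = "org.apache.spark:spark-sql-kafka-0-10_{}:{}".format(
    scala_version, pyspark.__version__)
print(coordinate)  # e.g. org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0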

Answer 1 (score: 0)

This is probably caused by your application's dependencies; I suspect an incompatibility between the Kafka client and the Spark version you are using...

I ran into the same error with Scala and solved it by downgrading to Spark 2.3 instead of 2.4.
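
If you want to confirm which Scala build your Spark runtime was compiled against before choosing the _2.11 or _2.12 connector artifact, you can ask the JVM directly (a sketch; _jvm is an internal PySpark handle, so treat this as a debugging aid only):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Print the Scala version the running Spark JVM was built with,
# e.g. "version 2.11.12" -> use spark-sql-kafka-0-10_2.11
print(spark.sparkContext._jvm.scala.util.Properties.versionString())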