Streaming data with Spark from a specific partition of a Kafka topic

Date: 2018-06-07 06:19:18

Tags: apache-spark apache-kafka apache-spark-sql spark-streaming kafka-consumer-api

I have seen a similar question (click here), but I still want to know: is it really not possible to stream data from a specific partition? I am using the Kafka consumer strategy Subscribe method in Spark Streaming:

ConsumerStrategies.Subscribe[String, String](topics, kafkaParams, offsets)

Here is the code snippet where I try to subscribe to the topic and partition:

val topics = Array("cdc-classic")
val topic = "cdc-classic"
val partition = 2
val offsets =
  Map(new TopicPartition(topic, partition) -> 2L) // I am not clear about this line (I tried to set the topic and partition number to 2)
val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams, offsets))

But when I run this code, I get the following exception:

     Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 0.0 failed 1 times, most recent failure: Lost task 5.0 in stage 0.0 (TID 5, localhost, executor driver): org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {cdc-classic-2=2}
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:878)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1110)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
    at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:99)
    at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:70)
Caused by: org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {cdc-classic-2=2}
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:878)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:525)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1110)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
    at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:99)

P.S.: cdc-classic is the name of the topic, and it contains 17 partitions.

2 answers:

Answer 0 (score: 3):

A Kafka partition is Spark's unit of parallelism. So even if it is technically possible to some extent, it makes little sense, because all the data would be processed by a single executor. Instead of using Spark, you can simply start your process with a plain KafkaConsumer:
 // "consumer" is an already-constructed KafkaConsumer<String, String>;
 // assign() pins it to explicit partitions, with no consumer-group rebalancing
 String topic = "foo";
 TopicPartition partition0 = new TopicPartition(topic, 0);
 TopicPartition partition1 = new TopicPartition(topic, 1);
 consumer.assign(Arrays.asList(partition0, partition1));

https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
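For completeness, here is a self-contained sketch of that approach in Scala (the broker address, group id, and the seek-to-beginning choice are illustrative assumptions, not part of the original answer):

import java.util.{Arrays, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import scala.collection.JavaConverters._

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // assumed broker address
props.put("group.id", "cdc-classic-reader")      // hypothetical group id
props.put("key.deserializer", classOf[StringDeserializer].getName)
props.put("value.deserializer", classOf[StringDeserializer].getName)

val consumer = new KafkaConsumer[String, String](props)

// Read only partition 2 of the topic, from its first available offset
val tp = new TopicPartition("cdc-classic", 2)
consumer.assign(Arrays.asList(tp))
consumer.seekToBeginning(Arrays.asList(tp))

while (true) {
  for (record <- consumer.poll(1000L).asScala)
    println(s"offset=${record.offset} value=${record.value}")
}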

If you want to benefit from Spark's automatic retries, you can simply build a Docker image with the process and launch it with Kubernetes using an appropriate retry configuration.

As for Spark, if you really want to use it, you should check the offset of the partition you read from. You probably provided an incorrect one, which gives you the "offsets out of range" message (maybe start from 0?).
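A minimal sketch of that offset check, using the plain consumer API (beginningOffsets/endOffsets exist since Kafka 0.10.1; the consumer is assumed to be configured as in the snippet above):

import java.util.Arrays
import org.apache.kafka.common.TopicPartition

val tp = new TopicPartition("cdc-classic", 2)

// Report the valid offset range without moving the consumer position
val earliest = consumer.beginningOffsets(Arrays.asList(tp)).get(tp)
val latest = consumer.endOffsets(Arrays.asList(tp)).get(tp)
println(s"valid offsets for $tp: [$earliest, $latest)")

Any starting offset outside that range triggers OffsetOutOfRangeException unless "auto.offset.reset" (e.g. "earliest") is set in the consumer configuration / kafkaParams.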

Answer 1 (score: 1):

Specify the partition number and the starting offset for that partition in this line to stream its data:

Map(new TopicPartition(topic, partition) -> 2L)

where,

  • partition is the partition number

  • 2L is the starting offset for the partition.

Then we can stream the data from the selected partition.
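Note also that spark-streaming-kafka-0-10 provides ConsumerStrategies.Assign, which takes an explicit partition list instead of a topic subscription, so it is the more direct fit when only selected partitions should be read. A minimal sketch, reusing the ssc and kafkaParams from the question:

import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Assign

// Pin the stream to one explicit partition; the starting offset must
// lie inside the partition's valid range (see the check in answer 0)
val tp = new TopicPartition("cdc-classic", 2)
val fromOffsets = Map(tp -> 2L)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Assign[String, String](Seq(tp), kafkaParams, fromOffsets))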