Hi, I'm new to Spark and Kafka. I'm writing sample code to consume messages from a Kafka topic with Spark:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object Init {
  def main(args: Array[String]): Unit = {
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "ip-10-0-1-10.ec2.internal:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "12",
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val topics = Array("TestLogs")
    val stream = KafkaUtils.createDirectStream[String, String](
      SparkConfig.streamContext,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )
    stream.print()
    SparkConfig.streamContext.start()
    SparkConfig.streamContext.awaitTermination()
  }
}
When I run the above code on the cluster with

spark2-submit --jars spark-streaming-kafka-0-10_2.11-2.3.0.jar --class Init --master local kafkademo_2.11-0.1.jar

the consumer goes into an infinite loop and prints no messages; I have to kill the process explicitly with Ctrl+C.
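(Side note: --jars ships only the jars listed explicitly, so the kafka-clients dependency of spark-streaming-kafka-0-10 has to reach the classpath some other way. An alternative submit that resolves dependencies transitively from Maven would be, for example:

spark2-submit --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.0 --class Init --master local kafkademo_2.11-0.1.jar
)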
INFO consumer.ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [ip-10-0-1-10.ec2.internal:9092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id = consumer-1
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = 12
retry.backoff.ms = 100
ssl.secure.random.implementation = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
18/09/11 07:03:05 INFO utils.AppInfoParser: Kafka version: 0.10.0-kafka-2.1.0
18/09/11 07:03:05 INFO utils.AppInfoParser: Kafka commitId: unknown
If I test with the console consumer, it does show the messages:
kafka-console-consumer --zookeeper ip-10-0-2-11.ec2.internal:2181 --topic TestLogs --from-beginning
INFO consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1536650065051]
Added fetcher for partitions ArrayBuffer([TestLogs-0, initOffset -1 to broker
BrokerEndPoint(177,ip-10-0-1-10.ec2.internal,9092)] )
Welcome to Kafka APIS
Please help me resolve this issue.
Answer 0 (score: 1)
I am not able to reproduce the issue you are facing due to dependency problems, but I can provide working sample code with which you can listen to any topic:
import kafka.serializer.{DefaultDecoder, StringDecoder}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark._
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaStreamingConsumer {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("KafkaStreaming").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("ERROR")
    val ssc = new StreamingContext(sc, Seconds(10))
    val kafkaConf = Map(
      "metadata.broker.list" -> "ip-10-0-1-10.ec2.internal:9092",
      "zookeeper.connect" -> "localhost:2181",
      "group.id" -> "kafkaSparkStreaming",
      "zookeeper.connection.timeout.ms" -> "1000"
    )
    // Receiver-based stream: one receiver thread reading the "TestLogs" topic
    val message = KafkaUtils.createStream[Array[Byte], String, DefaultDecoder, StringDecoder](
      ssc,
      kafkaConf,
      Map("TestLogs" -> 1),
      StorageLevel.MEMORY_ONLY
    )
    // Drop the keys and keep only the message payloads
    val lines = message.map(_._2)
    lines.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
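Note that this example uses the older receiver-based API (org.apache.spark.streaming.kafka.KafkaUtils), which ships in a different artifact than the 0-10 direct API from your question. Assuming sbt and Spark 2.3.0, the dependency would look something like:

// build.sbt sketch (assumed Spark version; adjust to your cluster)
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.3.0"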
I am using the Spark Streaming library to stream the data from the Kafka topic. If you face any issue while using the above code, please let me know.
Answer 1 (score: 1)
First make sure that the topic TestLogs actually contains data. Then note that if you have already consumed messages with group id 12, you will only receive new messages that have not yet been committed for that particular group id. In that case you can replay the topic either by resetting the Kafka offsets for that group, or simply by changing the group id (e.g. to 13).
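To illustrate the second option, here is a minimal sketch that reuses the kafkaParams from the question with a fresh, hypothetical group id, so that the "earliest" reset policy takes effect:

import org.apache.kafka.common.serialization.StringDeserializer

// Same settings as in the question; only group.id changes. Since the new group
// has no committed offsets yet, "earliest" makes the consumer replay the topic.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "ip-10-0-1-10.ec2.internal:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "13", // hypothetical, previously unused group id
  "auto.offset.reset" -> "earliest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)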