My Spark Streaming application reads data from Kafka using the DStream API, and I am trying to get each 10-second batch to process 60,000 messages.
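For context, a minimal sketch of this kind of DStream setup (the broker address, group id, and topic name below are hypothetical placeholders; this is a sketch, not the application's actual code):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

    // 10-second batch interval, as described above
    val conf = new SparkConf().setAppName("KafkaCallDEV")
    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker1:9092",               // hypothetical broker
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "call-dev-group",                      // hypothetical group id
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent,
      Subscribe[String, String](Array("my-topic"), kafkaParams)) // hypothetical topic

    // Log how many records actually land in each 10-second batch
    stream.foreachRDD(rdd => println(s"records in this batch: ${rdd.count()}"))

    ssc.start()
    ssc.awaitTermination()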
What I did:
Now, how do I test that it is working?
I have a producer that sends 60,000 messages at a time to the topic. When I check the Spark UI, I see the batch times spaced 10 seconds apart, but the records are split across multiple batches instead of the single batch of 60,000 records I expected. Is there some other parameter I need to set? From my reading of the current settings, I should be able to get 10 * 60,000 * 3 = 1,800,000 records at a time. Here is my Spark configuration:
spark.app.id = application_1551747423133_0677
spark.app.name = KafkaCallDEV
spark.driver.cores = 2
spark.driver.extraJavaOptions = -XX:+UseG1GC -XX:ConcGCThreads=2 -XX:InitiatingHeapOccupancyPercent=35 -Dlog4j.configuration=log4j.properties -verbose:gc
spark.driver.memory = 3g
spark.driver.port = 33917
spark.executor.cores = 2
spark.executor.extraJavaOptions = -XX:+UseG1GC -XX:ConcGCThreads=2 -XX:InitiatingHeapOccupancyPercent=35 -Dlog4j.configuration=log4j.properties -verbose:gc
spark.executor.id = driver
spark.executor.instances = 2
spark.executor.memory = 2g
spark.master = yarn
spark.scheduler.mode = FIFO
spark.streaming.backpressure.enabled = true
spark.streaming.kafka.maxRatePerPartition = 60000
spark.submit.deployMode = cluster
spark.ui.filters = org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
spark.ui.port = 0
spark.yarn.app.container.log.dir = /data0/yarn/container-logs/application_1551747423133_0677/container_1551747423133_0677_01_000002
Below is what I printed using logger.info(sparkSession.sparkContext.getConf.getAll.mkString("\n")).
I removed some of the unneeded entries, such as the server address, the app name, etc.
(spark.executor.extraJavaOptions,-XX:+UseG1GC -XX:ConcGCThreads=2 -XX:InitiatingHeapOccupancyPercent=35 -Dlog4j.configuration=log4j.properties -verbose:gc)
(spark.yarn.app.id,application_1551747423133_0681)
(spark.submit.deployMode,cluster)
(spark.streaming.backpressure.enabled,true)
(spark.yarn.credentials.renewalTime,1562764821939ms)
(spark.ui.filters,org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter)
(spark.executor.memory,2g)
(spark.yarn.credentials.updateTime,1562769141873ms)
(spark.driver.cores,2)
(spark.executor.id,driver)
(spark.executor.cores,2)
(spark.master,yarn)
(spark.driver.memory,3g)
(spark.sql.warehouse.dir,/user/hive/warehouse)
(spark.ui.port,0)
(spark.driver.extraJavaOptions,-XX:+UseG1GC -XX:ConcGCThreads=2 -XX:InitiatingHeapOccupancyPercent=35 -Dlog4j.configuration=log4j.properties -verbose:gc)
(spark.executor.instances,2)
(spark.driver.port,37375)
I also have some Kafka configuration being printed, so I will post that below as well.
org.apache.kafka.clients.consumer.ConsumerConfig:178 - ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
max.poll.records = 60000
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
retry.backoff.ms = 100
ssl.secure.random.implementation = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
Answer 0 (score: 3)
spark.streaming.kafka.maxRatePerPartition = 60000 sets the maximum rate (in messages per second) at which each Kafka partition will be read by the direct API, and it works together with the backpressure enabled by the property spark.streaming.backpressure.enabled = true.
17610 + 32790 + 9600 = 60000, so all of your records did arrive; they were just spread across batches.
See this.
Your 3 Kafka partitions (with 60k messages in total) are read by Spark as blocks/Spark partitions, in your case 3 Spark partitions. But the original number of messages across the 3 Kafka partitions is 60,000 (17,610 + 32,790 + 9,600). Even if a high-message-rate input stream comes in, backpressure will maintain a uniform rate of messages using the RateLimiter and the PIDRateEstimator.
So you are done here....
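As an illustration of that feedback loop, a minimal sketch of these rate-control knobs set explicitly on the SparkConf (spark.streaming.backpressure.initialRate is taken from the Spark 2.x configuration docs; treat it as an assumption if your version differs):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      // PIDRateEstimator adjusts the per-batch rate from previous-batch feedback
      .set("spark.streaming.backpressure.enabled", "true")
      // hard ceiling: messages per second per Kafka partition
      .set("spark.streaming.kafka.maxRatePerPartition", "60000")
      // rate for the very first batch, before any feedback exists
      .set("spark.streaming.backpressure.initialRate", "60000")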
For a better understanding, see also my post: Short note on Spark Streaming Back Pressure.
Conclusion: if you enable backpressure, then no matter at what rate you send messages, it will allow a constant rate of messages through.
Like this illustrative example: the backpressure property is like a flow-inlet control, a pressure-regulating screw that maintains a uniform rate of message flow.
Answer 1 (score: 0)
So I found the reason Spark was splitting the records I sent into multiple batches: I had spark.streaming.backpressure.enabled = true. This uses a feedback loop from previous batches to control the receiving rate, which is capped at the per-partition maximum rate I set in spark.streaming.kafka.maxRatePerPartition. So Spark was tuning the receiving rate for me.
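If the goal really is a single 60,000-record batch, disabling backpressure should achieve it, since the only remaining bound is maxRatePerPartition. A minimal sketch of that variant (an inference from the behavior described above, not something verified in the original post):

    import org.apache.spark.SparkConf

    // With backpressure off, the direct stream pulls everything available,
    // bounded only by maxRatePerPartition * partitions * batch interval,
    // i.e. 60000 * 3 * 10 = 1,800,000 records per batch at most.
    val conf = new SparkConf()
      .set("spark.streaming.backpressure.enabled", "false")
      .set("spark.streaming.kafka.maxRatePerPartition", "60000")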