Poor performance with Spark Streaming, Kafka, and multiple topics

Date: 2017-10-15 21:39:15

Tags: apache-spark apache-kafka spark-streaming

Spark 2.1 + Kafka 0.10 + Spark Streaming.

The batch duration is 30 seconds.

I have 13 nodes and 2 brokers, and I use 1 core per executor for each topic/partition. The LocationStrategy is PreferConsistent. When consuming a single topic there is no problem: each executor always processes the same topic/partition (tested up to 24 partitions). As soon as I add another topic, some of the executors processing a given topic/partition change from one batch to the next.
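Roughly, the setup is the following minimal sketch (the application name, group id, topic names and the other consumer settings are placeholders, not my exact job; the broker hostnames are the ones visible in the logs below):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010._
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val conf = new SparkConf().setAppName("multi-topic-streaming")
    val ssc = new StreamingContext(conf, Seconds(30)) // 30-second batches

    // Placeholder consumer parameters; group id and offset settings are assumptions
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker001.domain.loc:9092,broker002.domain.loc:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "my-consumer-group",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val topics = Array("topic1", "topic2") // one or more topics

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,                              // LocationStrategy described above
      Subscribe[String, String](topics, kafkaParams) // ConsumerStrategy
    )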

When an executor comes back to the same topic/partition (for example 3 batches later, i.e. 1:30 after its previous processing), my KafkaConsumer gets disconnected from the broker because of a request timeout (the request.timeout.ms parameter), and new fetch requests to Kafka are then blocked for 40 seconds (again the request.timeout.ms parameter).

2017-10-09 16:51:30.336 DEBUG    [Executor task launch worker for task 315]:org.apache.spark.internal.Logging$class - Seeking to topic2-7 136136613
2017-10-09 16:51:30.336 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.KafkaConsumer - Seeking to offset 136136613 for partition topic2-7
2017-10-09 16:51:30.337 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.NetworkClient - Disconnecting from node 1005 due to request timeout.
2017-10-09 16:51:30.337 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler - Cancelled FETCH request ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@30ea3352, request=RequestSend(header={api_key=1,api_version=2,correlation_id=25,client_id=consumer-1}, body={replica_id=-1,max_wait_time=500,min_bytes=1,topics=[{topic=topic2,partitions=[{partition=7,fetch_offset=136125064,max_bytes=1048576}]}]}), createdTimeMs=1507557031875, sendTimeMs=1507557031875) with correlation id 25 due to node 1005 being disconnected
2017-10-09 16:51:30.338 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.Fetcher$1 - Fetch failed org.apache.kafka.common.errors.DisconnectException
2017-10-09 16:51:30.341 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater - Initialize connection to node 1006 for sending metadata request
2017-10-09 16:51:30.341 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.NetworkClient - Initiating connection to node 1006 at broker001.domain.loc:9092.
2017-10-09 16:51:30.342 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1006.bytes-sent
2017-10-09 16:51:30.342 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1006.bytes-received
2017-10-09 16:51:30.342 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.common.metrics.Metrics - Added sensor with name node-1006.latency
2017-10-09 16:51:30.343 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.NetworkClient - Completed connection to node 1006
2017-10-09 16:51:30.343 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler - Cancelled FETCH request ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@7d9e82c8, request=RequestSend(header={api_key=1,api_version=2,correlation_id=26,client_id=consumer-1}, body={replica_id=-1,max_wait_time=500,min_bytes=1,topics=[{topic=topic2,partitions=[{partition=7,fetch_offset=136136613,max_bytes=1048576}]}]}), createdTimeMs=1507557090341, sendTimeMs=0) with correlation id 26 due to node 1005 being disconnected
2017-10-09 16:51:30.343 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.Fetcher$1 - Fetch failed org.apache.kafka.common.errors.DisconnectException
2017-10-09 16:51:30.343 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater - Sending metadata request {topics=[topic2]} to node 1006
2017-10-09 16:51:30.344 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler - Cancelled FETCH request ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@4512b012, request=RequestSend(header={api_key=1,api_version=2,correlation_id=27,client_id=consumer-1}, body={replica_id=-1,max_wait_time=500,min_bytes=1,topics=[{topic=topic2,partitions=[{partition=7,fetch_offset=136136613,max_bytes=1048576}]}]}), createdTimeMs=1507557090343, sendTimeMs=0) with correlation id 27 due to node 1005 being disconnected
2017-10-09 16:51:30.344 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.Fetcher$1 - Fetch failed org.apache.kafka.common.errors.DisconnectException
2017-10-09 16:51:30.344 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.Metadata - Updated cluster metadata version 3 to Cluster(nodes = [broker002.domain.loc:9092 (id: 1005 rack: null), broker001.domain.loc:9092 (id: 1006 rack: null)], partitions = [Partition(topic = topic2, partition = 14, leader = 1006, replicas = [1005,1006,], isr = [1006,1005,], Partition(topic = topic2, partition = 13, leader = 1005, replicas = [1005,1006,], isr = [1005,1006,], Partition(topic = topic2, partition = 12, leader = 1006, replicas = [1005,1006,], isr = [1006,1005,], Partition(topic = topic2, partition = 11, leader = 1005, replicas = [1005,1006,], isr = [1005,1006,], Partition(topic = topic2, partition = 10, leader = 1006, replicas = [1005,1006,], isr = [1006,1005,], Partition(topic = topic2, partition = 9, leader = 1005, replicas = [1005,1006,], isr = [1005,1006,], Partition(topic = topic2, partition = 8, leader = 1006, replicas = [1005,1006,], isr = [1006,1005,], Partition(topic = topic2, partition = 7, leader = 1005, replicas = [1005,1006,], isr = [1005,1006,], Partition(topic = topic2, partition = 6, leader = 1006, replicas = [1005,1006,], isr = [1006,1005,], Partition(topic = topic2, partition = 5, leader = 1005, replicas = [1005,1006,], isr = [1005,1006,], Partition(topic = topic2, partition = 4, leader = 1006, replicas = [1005,1006,], isr = [1006,1005,], Partition(topic = topic2, partition = 3, leader = 1005, replicas = [1005,1006,], isr = [1005,1006,], Partition(topic = topic2, partition = 2, leader = 1006, replicas = [1005,1006,], isr = [1006,1005,], Partition(topic = topic2, partition = 1, leader = 1005, replicas = [1005,1006,], isr = [1005,1006,], Partition(topic = topic2, partition = 0, leader = 1006, replicas = [1005,1006,], isr = [1006,1005,]])
2017-10-09 16:51:30.345 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler - Cancelled FETCH request ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@4214186f, request=RequestSend(header={api_key=1,api_version=2,correlation_id=29,client_id=consumer-1}, body={replica_id=-1,max_wait_time=500,min_bytes=1,topics=[{topic=topic2,partitions=[{partition=7,fetch_offset=136136613,max_bytes=1048576}]}]}), createdTimeMs=1507557090344, sendTimeMs=0) with correlation id 29 due to node 1005 being disconnected
2017-10-09 16:51:30.345 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.Fetcher$1 - Fetch failed org.apache.kafka.common.errors.DisconnectException
2017-10-09 16:51:42.942 DEBUG    [LeaseRenewer:hdfs_user@master001.domain.loc:8020]:org.apache.hadoop.hdfs.LeaseRenewer - Lease renewer daemon for [] with renew id 1 executed
2017-10-09 16:52:00.293 DEBUG    [IPC Client (1926664485) connection to master001.domain.loc/10.0.10.1:8020 from hdfs_user]:org.apache.hadoop.ipc.Client$Connection - IPC Client (1926664485) connection to master001.domain.loc/10.0.10.1:8020 from hdfs_user: closed
2017-10-09 16:52:00.293 DEBUG    [IPC Client (1926664485) connection to master001.domain.loc/10.0.10.1:8020 from hdfs_user]:org.apache.hadoop.ipc.Client$Connection - IPC Client (1926664485) connection to master001.domain.loc/10.0.10.1:8020 from hdfs_user: stopped, remaining connections 0
2017-10-09 16:52:10.388 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler - Cancelled FETCH request ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@4b954a27, request=RequestSend(header={api_key=1,api_version=2,correlation_id=30,client_id=consumer-1}, body={replica_id=-1,max_wait_time=500,min_bytes=1,topics=[{topic=topic2,partitions=[{partition=7,fetch_offset=136136613,max_bytes=1048576}]}]}), createdTimeMs=1507557090345, sendTimeMs=0) with correlation id 30 due to node 1005 being disconnected
2017-10-09 16:52:10.389 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.consumer.internals.Fetcher$1 - Fetch failed org.apache.kafka.common.errors.DisconnectException
2017-10-09 16:52:10.389 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.NetworkClient - Initiating connection to node 1005 at broker002.domain.loc:9092.
2017-10-09 16:52:10.390 DEBUG    [Executor task launch worker for task 315]:org.apache.kafka.clients.NetworkClient - Completed connection to node 1005
2017-10-09 16:52:10.397 DEBUG    [Executor task launch worker for task 315]:org.apache.spark.internal.Logging$class - Polled [topic2-7]  2603
2017-10-09 16:52:10.398 DEBUG    [Executor task launch worker for task 315]:org.apache.spark.internal.Logging$class - Getting local block broadcast_13
2017-10-09 16:52:10.398 DEBUG    [Executor task launch worker for task 315]:org.apache.spark.internal.Logging$class - Level for block broadcast_13 is StorageLevel(disk, memory, deserialized, 1 replicas)

What can I do to overcome this kind of problem? Increasing the request.timeout.ms parameter does not seem like a good solution to me.
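For completeness, if one did want to experiment with it anyway, the timeout is just another entry in the consumer parameters (the value below is purely illustrative):

    // Illustrative only: as said above, raising request.timeout.ms only postpones
    // the disconnect of the idle cached consumer rather than fixing it.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker001.domain.loc:9092,broker002.domain.loc:9092",
      "request.timeout.ms" -> "120000" // arbitrary value for the example
      // ... remaining consumer settings as in the sketch above
    )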

I have seen that there is a parameter to disable the Kafka consumer cache, which might solve this problem, but it is only available starting from Spark 2.2, and I cannot move to Spark 2.2.
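For reference only (not usable here, as noted), the setting in question is presumably spark.streaming.kafka.consumer.cache.enabled, which on Spark 2.2+ could be set like this:

    import org.apache.spark.SparkConf

    // Spark 2.2+ only, shown for reference: disabling the cache makes each task
    // create a fresh KafkaConsumer instead of reusing a cached (possibly timed-out) one.
    val conf = new SparkConf()
      .setAppName("multi-topic-streaming")
      .set("spark.streaming.kafka.consumer.cache.enabled", "false")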

The only solution I can see for now would be to go back to processing a single topic...

Thanks for your help!

2017/10/18: update on this issue
The switching of the executor that processes a given topic/partition is due to a data locality problem. For some topic/partitions, the executor needed to process the data locally (locality level PROCESS_LOCAL) is not available, so another executor is scheduled instead (locality level RACK_LOCAL), and that executor can differ from one batch to the next.

My configuration was 1 core per executor. I changed it to allow 2 cores per executor and the problem disappeared: all tasks are processed locally. If I want to process 3 topics, I have to change my configuration to 3 cores per executor (the topics are uneven; for a 3-topic example: 15 partitions for topic1, 3 for topic2 and 6 for topic3).
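A sketch of how this per-executor sizing can be expressed, using the 3-topic case (8 executors × 3 cores, see the list below); the settings are shown on SparkConf for illustration, though in practice they are usually passed to spark-submit, and spark.executor.instances assumes YARN without dynamic allocation:

    import org.apache.spark.SparkConf

    // 3-topic case: 8 executors x 3 cores = 24 cores in total,
    // i.e. one core per topic/partition across the 24 partitions.
    val conf = new SparkConf()
      .setAppName("multi-topic-streaming")
      .set("spark.executor.instances", "8") // assumption: YARN, dynamic allocation off
      .set("spark.executor.cores", "3")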

1 topic, 24 topic/partitions, 24 executors, 1 core per executor: OK
2 topics, 24 topic/partitions, 12 executors, 2 cores per executor: OK
3 topics, 24 topic/partitions, 8 executors, 3 cores per executor: OK
4 topics, 24 topic/partitions, 6 executors, 4 cores per executor: OK
6 topics, 24 topic/partitions, 4 executors, 6 cores per executor: KO

With 6 topics I run into data locality problems again. How can I scale Spark Streaming processing with the number of topics?

1 Answer:

Answer 0 (score: 0):

Perform a repartition on your RDD; it will trigger a shuffle and ensure that every executor has a roughly equal amount of local data (in memory) to process.
For your 6-topic example, try 12 executors with 2 cores each and .repartition(48).
Call repartition before any transformation/action you perform on the RDD obtained from the Kafka consumer.

Note that repartitioning can have a performance impact.
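A minimal sketch of what this answer suggests, assuming the direct stream `stream` of ConsumerRecord[String, String] from the sketch in the question; the per-record processing is a placeholder:

    stream.foreachRDD { rdd =>
      // Extract the values before shuffling (ConsumerRecord itself is not serializable),
      // then spread the records across 48 partitions as suggested in the answer.
      val balanced = rdd.map(_.value()).repartition(48)

      // Placeholder standing in for the actual per-record processing
      balanced.foreach(value => println(value))
    }

Note the map to the record value before the shuffle: repartitioning the ConsumerRecord objects directly tends to fail because they are not serializable.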