Below is my producer configuration. As you can see, the compression type is gzip. Even though I have set the compression type, why is the message not published, and why does it fail?
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, edi856KafkaConfig.getBootstrapServersConfig());
props.put(ProducerConfig.RETRIES_CONFIG, edi856KafkaConfig.getRetriesConfig());
props.put(ProducerConfig.BATCH_SIZE_CONFIG, edi856KafkaConfig.getBatchSizeConfig());
props.put(ProducerConfig.LINGER_MS_CONFIG, edi856KafkaConfig.getIntegerMsConfig());
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, edi856KafkaConfig.getBufferMemoryConfig());
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.IntegerSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(Edi856KafkaProducerConstants.SSL_PROTOCOL, edi856KafkaConfig.getSslProtocol());
props.put(Edi856KafkaProducerConstants.SECURITY_PROTOCOL, edi856KafkaConfig.getSecurityProtocol());
props.put(Edi856KafkaProducerConstants.SSL_KEYSTORE_LOCATION, edi856KafkaConfig.getSslKeystoreLocation());
props.put(Edi856KafkaProducerConstants.SSL_KEYSTORE_PASSWORD, edi856KafkaConfig.getSslKeystorePassword());
props.put(Edi856KafkaProducerConstants.SSL_TRUSTSTORE_LOCATION, edi856KafkaConfig.getSslTruststoreLocation());
props.put(Edi856KafkaProducerConstants.SSL_TRUSTSTORE_PASSWORD, edi856KafkaConfig.getSslTruststorePassword());
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
And the error I get is:
org.apache.kafka.common.errors.RecordTooLargeException: The message is 1170632 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
2017-12-07_12:34:10.037 [http-nio-8080-exec-1] ERROR c.tgt.trans.producer.Edi856Producer - Exception while writing mesage to topic= '{}'
org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.RecordTooLargeException: The message is 1170632 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
Also, what consumer configuration do we need? I want the string representation of the Kafka message on the consumer side.
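(As an aside on that second question: a minimal consumer-side sketch that reads the value as a plain String might look like the following. The group id is a placeholder, the key deserializer mirrors the IntegerSerializer used by the producer above, and this is an illustration, not a definitive setup.)

Properties consumerProps = new Properties();
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, edi856KafkaConfig.getBootstrapServersConfig());
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "edi856-consumer-group"); // placeholder group id
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.IntegerDeserializer");
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
// The SSL settings would mirror the producer's (ssl.keystore.location, ssl.truststore.location, etc.).
KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps);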
Answer 0 (score: 4)
Unfortunately, you have hit a rather odd quirk of the new producer implementation in Kafka.
While the message-size limit that Kafka applies at the broker level applies to a single compressed record set (potentially multiple messages), the new producer currently applies the max.request.size limit to the record prior to any compression.
This has been captured in https://issues.apache.org/jira/browse/KAFKA-4169 (created 14/Sep/16 and, at the time of writing, still unresolved).
If you are certain that the compressed size of your messages (plus any record-set overhead) will be smaller than the broker's configured max.message.bytes, then you may be able to get away with increasing the value of the max.request.size property on the producer without having to change any configuration on the broker. This would allow the producer code to accept the pre-compressed payload, which it would then compress and send to the broker.
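As a rough sketch of that producer-side change (the 2 MB value here is just an example sized to fit the ~1.17 MB record from the error above, not a recommendation):

// Raise the producer-side cap on a single request; the default is 1048576 bytes (1 MB).
props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2097152); // 2 MB, example value only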
However, it is important to note that if the producer attempts to send a request that is too large for the broker's configuration, the broker will reject the message, and it is up to your application to handle this correctly.
Answer 1 (score: 0)
Just read the error message :)
The message is 1170632 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration
The message is larger than 1 MByte, which is the default maximum allowed by Apache Kafka. To allow larger messages, check the answers to How can I send large messages with Kafka (over 15MB)?
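For reference, the knobs that linked answer walks through boil down to something like this for the new producer/consumer (the 15 MB figure is purely illustrative):

// Producer: allow requests larger than the 1 MB default.
props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 15728640); // ~15 MB, example only
// Broker (server.properties): message.max.bytes=15728640 and replica.fetch.max.bytes=15728640
// Consumer: make sure a single fetch can hold a full message.
consumerProps.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 15728640); // example only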