Kafka: The message when serialized is larger than the maximum request size you have configured with the max.request.size configuration

Getting the following error (Kafka 2.1.0):

2018-12-03 21:22:37.873 ERROR 37645 --- [nio-8080-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{82, 73, 70, 70, 36, 96, 19, 0, 87, 65, 86, 69, 102, 109, 116, 32, 16, 0, 0, 0, 1, 0, 1, 0, 68, -84,...' to topic recieved_sound: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1269892 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

I tried all the suggestions in various SO posts.

My Producer.properties:

max.request.size=41943040
message.max.bytes=41943040
replica.fetch.max.bytes=41943040
fetch.message.max.bytes=41943040

Server.properties:

socket.request.max.bytes=104857600
message.max.bytes=41943040
max.request.size=41943040
replica.fetch.max.bytes=41943040
fetch.message.max.bytes=41943040

ProducerConfig (Spring Boot):

configProps.put("message.max.bytes", "41943040");
configProps.put("max.request.size", "41943040");
configProps.put("replica.fetch.max.bytes", "41943040");
configProps.put("fetch.message.max.bytes", "41943040");

ConsumerConfig (SpringBoot):

props.put("fetch.message.max.bytes", "41943040");
props.put("message.max.bytes", "41943040");
props.put("max.request.size", "41943040");
props.put("replica.fetch.max.bytes", "41943040");
props.put("fetch.message.max.bytes", "41943040");

I also changed the Strings to numbers in the last two files, restarted the brokers multiple times, and created new topics. Initially I was getting org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept, which these changes fixed, but I still have no luck with this new error.

Set a breakpoint in KafkaProducer.ensureValidRecordSize() to see what's going on.

With this app

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;

@SpringBootApplication
public class So53605262Application {

    public static void main(String[] args) {
        SpringApplication.run(So53605262Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        // created on startup by Spring Kafka's KafkaAdmin
        return new NewTopic("so53605262", 1, (short) 1);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        // send a 2 MB payload, deliberately bigger than the 1 MB default max.request.size
        return args -> template.send("so53605262", new String(new byte[1024 * 1024 * 2]));
    }

}

I get

The message is 2097240 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

as expected; when I add

spring.kafka.producer.properties.max.request.size=3000000

(which is the equivalent of your config but using Spring Boot properties), I get

The request included a message larger than the max message size the server will accept.

If debugging doesn't help, perhaps you can post a complete small app that exhibits the behavior you see.
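
If the broker-side error ("The request included a message larger than the max message size the server will accept") is what remains, one option is to raise the limit per topic rather than editing server.properties. This is only a sketch, not part of the original answer (it reuses the topic bean from the app above and the 41943040 value from the question); max.message.bytes is the topic-level counterpart of the broker's message.max.bytes:

@Bean
public NewTopic topic() {
    // max.message.bytes is the per-topic override of the broker's message.max.bytes
    NewTopic topic = new NewTopic("so53605262", 1, (short) 1);
    topic.configs(java.util.Collections.singletonMap("max.message.bytes", "41943040"));
    return topic;
}

The producer's max.request.size still has to be raised separately, and the consumer's max.partition.fetch.bytes should be at least as large so it can read what was written.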

Solved: RecordTooLargeException on large messages in Kafka. "The message is 16777327 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration." I was looking at the source code of kafka-rest but don't see any property there that I could change.

You can change the message size in the Kafka properties files on the server.

In the default server.properties file (under /usr/local/kafka/config), uncomment and raise:

message.max.bytes=26214400

In producer.properties:

# the maximum size of a request in bytes
max.request.size=26214400

and do the same for the consumer.

unable to send a large message · Issue #208 · confluentinc/kafka. "Hi, I'm using kafka-rest in Confluent 3.0.0 on Ubuntu ... the maximum size of a request in bytes: max.request.size=15728640". Producer side: max.request.size is the limit that has to be raised to send a larger message. Consumer side: increase max.partition.fetch.bytes, the maximum number of bytes per partition returned by the server; it should be larger than the broker's message.max.bytes so the consumer can read the largest message the broker will accept.
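
In producer code that advice looks roughly like this (a sketch only; the constants come from org.apache.kafka.clients.producer.ProducerConfig and org.apache.kafka.common.serialization, the broker address is an assumption, and the value mirrors the question). Note that message.max.bytes, replica.fetch.max.bytes and fetch.message.max.bytes are broker or consumer settings, so a producer ignores them and typically just logs a "supplied but isn't a known config" warning:

Map<String, Object> configProps = new HashMap<>();
configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
configProps.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 41943040); // max.request.size, the client-side limit
// message.max.bytes and replica.fetch.max.bytes belong in the broker's server.properties, not here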

You should set the config on the consumer (FETCH_MAX_BYTES_CONFIG is a consumer property) like this:

props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, "41943040");
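
A slightly fuller consumer-side sketch (again only a sketch; both constants are from org.apache.kafka.clients.consumer.ConsumerConfig, the broker address is an assumption, and the value mirrors the question):

Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
// total bytes per fetch response and per partition; both should be at least as large
// as the biggest record the broker will accept
props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 41943040);
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 41943040);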

Solution: Kafka queue processor size issue (max.request.size). "The message is 10812412 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration." A Spark readStream for Kafka fails with the same org.apache.kafka.common.errors.RecordTooLargeException (there, the message is 1166569 bytes when serialized).

RecordTooLargeException Error in the Filler component – Instana. 2018-06-25 13:08:19,328 [filled-metric-kafka-downstream-0] ERROR ... RecordTooLargeException: The message is 1287883 bytes when serialized, which is larger than the configured max.request.size; the fix is to raise max.request.size in the filler config /opt/instana/filler/etc/config.yaml. A related report (Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1513657 bytes when serialized ...) notes that changing max.request.size on the producer also requires broker-side changes and a broker restart.

Oracle Kafka Replicat Abends OGG-15051 Larger Than The ... An Oracle GoldenGate Kafka Replicat abended; the Replicat report file shows RecordTooLargeException: The message is 28425503 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration. The same error also shows up from Logstash's kafka output: [2019-03-05T17:19:58,758][WARN ][logstash.outputs.kafka ] KafkaProducer.send() failed: org.apache.kafka.common.errors.RecordTooLargeException: The message is 56190054 bytes when serialized ...

org.apache.kafka.common.errors.RecordTooLargeException (from the Kafka source): the producer throws this exception when a single message, once serialized, is larger than max.request.size, or when it is larger than the total memory buffer configured with buffer.memory. On the consumer/fetch side it indicates a single message larger than the per-partition limit.
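
When the producer raises it, the exception is delivered through the send future and the callback rather than being thrown from the sending thread, which is why it appears via Spring's LoggingProducerListener in the question. A small sketch of detecting it directly, assuming an already configured KafkaProducer<String, byte[]> named producer (the topic name and payload size are just placeholders):

byte[] payload = new byte[2 * 1024 * 1024]; // deliberately larger than the 1 MB default
producer.send(new ProducerRecord<>("recieved_sound", payload), (metadata, exception) -> {
    if (exception instanceof RecordTooLargeException) {
        // raise max.request.size / max.message.bytes, or split or compress the payload
        System.err.println("Record rejected: " + exception.getMessage());
    }
});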

Comments
  • Your configurations look fine. I am guessing maybe you did not deploy the changes to all brokers? Can you check the broker config using bin/kafka-configs.sh to make sure your configuration is correct on all brokers? (A programmatic way to do the same check is sketched after these comments.)
  • also add max.partition.fetch.bytes
  • max.partition.fetch.bytes is a soft limit. from documentation: If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.
  • You might need to add kafka.max.partition.fetch.bytes instead of max.partition.fetch.bytes to the client properties
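
Following up on the first comment, here is a minimal sketch of checking the effective message.max.bytes on a broker with the Java AdminClient instead of bin/kafka-configs.sh. It is not from the thread; the broker id "0" and localhost:9092 are assumptions, and the check would be repeated for every broker id in the cluster:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckBrokerConfig {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                    .all().get().get(broker);
            System.out.println("message.max.bytes = " + config.get("message.max.bytes").value());
        }
    }
}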