Why is my Kafka producer showing the error kafka.conn:DNS lookup failed for <container id>:9092?

I exposed port 9092 and then ran the Kafka broker inside Docker. But when I run the Python script I get this error:

ERROR:kafka.conn:DNS lookup failed for b5c5b06f6761:9092 (AddressFamily.AF_UNSPEC)

I tried the Docker IP and the machine IP instead of localhost, but I get the same error.

Here is my code.

from json import dumps
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=['localhost:9092'],
                         value_serializer=lambda x:
                         dumps(x).encode('utf-8'))

producer.send('vtintel', value={'id': 123})

Docker only handles DNS within its own network, not from your host.

You need Kafka to advertise itself externally (on localhost), which is different from just forwarding a port.

And as far as I can tell, -p 9092:9092 is not even a port exposed by the container image you're using.
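
For reference, a sketch of what that can look like with plain docker run, assuming the wurstmeister/kafka image (whose KAFKA_* environment variables map onto server.properties entries; other images use different variable names):

    # Sketch: broker listens on all interfaces but advertises localhost:9092,
    # so host-side clients get back an address they can actually resolve.
    # Image and variable names assume wurstmeister/kafka; adjust for your image.
    docker run -d --name zookeeper -p 2181:2181 zookeeper
    docker run -d --name kafka -p 9092:9092 --link zookeeper \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
      wurstmeister/kafka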

Though it's a late reply, it might help someone later. I had the same problem you were facing; I was using a dockerized implementation of edX. To fix the issue, just add the following line to the /etc/hosts file of your Docker container: first your IP address, then the name for which the lookup is failing. In your case the lookup fails for b5c5b06f6761, so:

173.16.18.22     b5c5b06f6761

Note: I am using a dummy IP address here.
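
If editing /etc/hosts inside the container by hand is awkward, Docker's --add-host flag writes the same mapping at container start. A sketch reusing the dummy address above ('my-client-image' stands in for whatever image runs your Python client):

    # Sketch: adds "173.16.18.22  b5c5b06f6761" to the container's /etc/hosts.
    # Replace the dummy IP with the address the broker is actually reachable on.
    docker run --add-host b5c5b06f6761:173.16.18.22 my-client-image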

Same issue with bitnami/kafka. But then I realized that I needed to enable access to Kafka for external clients in docker-compose.yml. For more info, see the 'Accessing Kafka with internal and external clients' section at https://hub.docker.com/r/bitnami/kafka/.

To do so, add the following environment variables to your docker-compose.yml:

    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
+     - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
+     - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
+     - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092

And expose the extra port:

    ports:
      - '9092:9092'
+     - '29092:29092'

Then access it via 'localhost:29092', not 'localhost:9092', in your kafka-python code.
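
For example, a minimal kafka-python producer against the external listener (a sketch; the topic name 'vtintel' is taken from the question):

    from json import dumps
    from kafka import KafkaProducer

    # Connect through the externally advertised listener (PLAINTEXT_HOST above),
    # not the internal 'kafka:9092' one, which only resolves inside the compose network.
    producer = KafkaProducer(bootstrap_servers=['localhost:29092'],
                             value_serializer=lambda x: dumps(x).encode('utf-8'))

    producer.send('vtintel', value={'id': 123})
    producer.flush()  # block until the send actually completes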

I had a similar problem earlier with the latest Kafka versions. Try specifying the local address as '127.0.0.1' instead of 'localhost'; this might help.
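
In kafka-python terms that would be something like this sketch:

    from kafka import KafkaProducer

    # Use the loopback address directly so no 'localhost' name lookup is involved.
    producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'])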

Comments
  • What's the command you used for starting the docker image?
  • I suspect you need to add something in your hosts file (if you use Windows)
  • docker run -p 8081-8110:8081-8110 -p 8200:8200 -p 9095:9095 -p 9097:9097 -p 9999:9999 -p 9092:9092 -d --name imply imply/imply
  • @Gremi64 That's a hack, please never do that and just fix the actual problem... rmoff.net/2018/08/02/kafka-listeners-explained
  • you'll find your answer in @cricket_007 link, or in this thread stackoverflow.com/questions/52438822/… or this one stackoverflow.com/questions/51630260/…