Cannot connect to kafka docker container from logstash docker container

I am trying to connect to a Kafka Docker container from a Logstash Docker container, but I always get the following message:

 Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

My docker-compose.yml file is

version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    networks:
      - elk
    depends_on:
      - kafka

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    links:
      - kafka
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  zookeeper:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    container_name: zookeeper
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    networks:
      - elk
    environment:
      LOG_DIR: /tmp/logs

  kafka:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    networks:
      - elk
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:

and my logstash.conf file is

input {
    kafka{
        bootstrap_servers => "kafka:9092"
        topics => ["logs"]
    }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
    }
}

All my containers are running normally, and I can send messages to Kafka topics from outside the containers.

You need to define your listener based on the hostname at which it can be resolved by the client. If the advertised listener is localhost, then the client (Logstash) will try to resolve it as localhost from within its own container, hence the error.
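To see why, note that a Kafka client first contacts the bootstrap address and then re-connects to whatever host the broker advertises in its metadata. A quick diagnostic (not part of the fix) from a shell inside the Logstash container, e.g. via `docker-compose exec logstash sh`:

```shell
# "localhost" resolves to this container's own loopback interface,
# never to another container:
getent hosts localhost

# The Kafka container is only reachable through Docker's DNS service name
# (this works only from inside the compose network):
# getent hosts kafka
```

So an advertised listener of localhost:9092 sends the Logstash client back to its own loopback interface, where no broker is listening.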

I've written about this in detail here, but in essence you need this:

KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_HOST:PLAINTEXT,LISTENER_DOCKER:PLAINTEXT
KAFKA_LISTENERS: LISTENER_HOST://0.0.0.0:9092,LISTENER_DOCKER://0.0.0.0:29092
KAFKA_ADVERTISED_LISTENERS: LISTENER_HOST://localhost:9092,LISTENER_DOCKER://kafka:29092
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER

(Listener names must be unique, so each listener gets its own name mapped to the PLAINTEXT protocol. With the strimzi image above, you also need matching --override listener.security.protocol.map and --override inter.broker.listener.name flags in the kafka service's command.)

Then any container on the Docker network uses kafka:29092 to reach the broker, so the Logstash config becomes

bootstrap_servers => "kafka:29092"

Any client on the host machine itself continues to use localhost:9092.

You can see this in action with Docker Compose here: https://github.com/confluentinc/demo-scene/blob/master/build-a-streaming-pipeline/docker-compose.yml#L40
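Applied to the compose file above, the kafka service might look like the following sketch (the listener names LISTENER_HOST/LISTENER_DOCKER are illustrative; only the host-facing port is published, since containers reach 29092 directly over the elk network):

```yaml
  kafka:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override listener.security.protocol.map=$${KAFKA_LISTENER_SECURITY_PROTOCOL_MAP} --override inter.broker.listener.name=$${KAFKA_INTER_BROKER_LISTENER_NAME} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"   # host clients -> localhost:9092; 29092 stays internal
    networks:
      - elk
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "LISTENER_HOST:PLAINTEXT,LISTENER_DOCKER:PLAINTEXT"
      KAFKA_LISTENERS: "LISTENER_HOST://0.0.0.0:9092,LISTENER_DOCKER://0.0.0.0:29092"
      KAFKA_ADVERTISED_LISTENERS: "LISTENER_HOST://localhost:9092,LISTENER_DOCKER://kafka:29092"
      KAFKA_INTER_BROKER_LISTENER_NAME: "LISTENER_DOCKER"
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
```

The `$$` doubling follows the existing compose file's convention for passing a literal `$` through to the container's shell.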

The Kafka advertised listeners should be defined like this:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
KAFKA_LISTENERS: PLAINTEXT://kafka:9092
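Note that with kafka:9092 as the only advertised listener, clients on the host can no longer bootstrap via localhost:9092. One common workaround (my addition, not part of this answer) is to make the kafka name resolvable on the host as well:

```
# /etc/hosts on the host machine
127.0.0.1   kafka
```

Host clients can then also use kafka:9092, at the cost of editing the host's hosts file.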

You can use the host machine's IP address for the Kafka advertised listeners; that way both your Docker services and services running outside your Docker network can access it.

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://$HOST_IP:9092
KAFKA_LISTENERS: PLAINTEXT://$HOST_IP:9092

For reference, you can go through this article: https://rmoff.net/2018/08/02/kafka-listeners-explained/
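$HOST_IP is not set by Compose automatically; one way to populate it before bringing the stack up (a sketch assuming Linux; on macOS something like `ipconfig getifaddr en0` would be needed instead):

```shell
# Export the host's primary IPv4 address so docker-compose can substitute it
# into KAFKA_ADVERTISED_LISTENERS / KAFKA_LISTENERS:
export HOST_IP=$(hostname -I | awk '{print $1}')
echo "HOST_IP=$HOST_IP"

# docker-compose up -d   # Compose substitutes $HOST_IP from the environment
```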

Comments
  • Setting the Kafka advertised listeners this way prevents a non-dockerized application I have from producing messages to Kafka via localhost. Isn't there any way to access it from both inside Docker and outside as well?
  • In that case, replace kafka with the FQDN of the host machine on which it is deployed.