Hostname not working with Docker swarm mode


I am using Docker 18.06.1-ce and Compose 1.22.0.

According to the Docker documentation, services should be reachable by their service names. This works for me with Docker Compose outside swarm mode, but in swarm mode it does not. I have even tried setting network aliases in my compose file, with no result.

Below is my docker-compose.yml:

version: "3"

networks:
  my_network:
    external:
      name: new_network


services:
  config-service:
    image: com.test/config-service:0.0.1
    deploy:
      placement:
        constraints: [node.role == manager]
      resources:
        limits:
          memory: 1024M
        reservations:
          memory: 768M
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://config-service:8888/health"]
      interval: 5s
      timeout: 3s
      retries: 5
    ports:
      - 8888:8888
    networks:
      my_network:
        aliases:
          - config-service

  eureka-service:
    image: com.test/eureka-service:0.0.1
    deploy:
      placement:
        constraints: [node.role == manager]
      resources:
        limits:
          memory: 1536M
        reservations:
          memory: 1024M
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-I", "http://eureka-service:8761/health"]
      interval: 5s
      timeout: 3s
      retries: 5
    ports:
      - 8761:8761
    depends_on:
      - config-service
    networks:
      my_network:
        aliases:
          - eureka-service

When I inspect the network, I see:

[
    {
        "Name": "new_network",
        "Id": "s2m7yq7tz4996w7eg229l59nf",
        "Created": "2018-08-30T13:58:59.75070753Z",
        "Scope": "swarm",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "355efe27067ee20868455dabbedd859b354d50fb957dcef4262eac6f25d10686": {
                "Name": "test_eureka-service.1.a4pjb3ntez9ly5zhu020h0tva",
                "EndpointID": "50998abdb4cd2cd2f747fadd82be495150919531b81a3d6fb07251a940ef2749",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "5cdb398c598c1cea6b9032d4c696fd1581e88f0644896edd958ef59895b698a4": {
                "Name": "test_config-service.1.se8ajr73ajnjhvxt3rq31xzlm",
                "EndpointID": "5b3c41a8df0054e1c115d93c32ca52220e2934b6f763f588452c38e60c067054",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Now, if I open a terminal inside a container, I can ping the long name test_config-service.1.se8ajr73ajnjhvxt3rq31xzlm, but not config-service.

I believe the issue you are experiencing is that you are using a swarm-scoped bridge network instead of an overlay network. I'm not sure that configuration is supported. In swarm mode, the DNS entry for a service is created at the service level, not for the individual containers. From my testing, that DNS entry, along with the code to set up a VIP, only appears to work with overlay networks. You may want to follow this issue if you really need your network to be a bridge: https://github.com/moby/moby/issues/37672

Otherwise, the easy fix is to replace your network with an overlay network. You can remove the network aliases, since they are redundant: each service is already resolvable by its service name. If you have other containers on the host, outside of swarm mode, that also need to be on this network, be sure to create the overlay network as attachable. If other applications are currently attached to the network, you can move them to a new network, or, if you need to keep the same network name, swap it out in two phases:

# create a temporary network to free up the new_network name
docker network create -d overlay --attachable temp_network
docker network connect temp_network $container_id  # repeat for each container
# finish the above step for all containers before continuing
docker network disconnect new_network $container_id  # repeat for each container
# remove the old bridge network
docker network rm new_network

# now create new_network as an overlay
docker network create -d overlay --attachable new_network
docker network connect new_network $container_id  # repeat for each container
# finish the above step for all containers before continuing
docker network disconnect temp_network $container_id  # repeat for each container
# clean up the temporary network
docker network rm temp_network

If everything is running in swarm mode, there's no need for --attachable. After that, you should be able to start your swarm mode stack.
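Alternatively, instead of referencing a pre-created external network, you can let the stack create the overlay network itself. A minimal sketch, assuming a compose file format of 3.3 or newer and that a stack-managed network name (prefixed with the stack name) is acceptable:

```yaml
# Sketch: declare the network as an overlay directly in the stack file.
# On "docker stack deploy", Docker creates it as <stackname>_my_network.
# "attachable: true" is only needed if non-swarm containers must join it.
version: "3.3"

networks:
  my_network:
    driver: overlay
    attachable: true

services:
  config-service:
    image: com.test/config-service:0.0.1
    networks:
      - my_network
```

With this layout, swarm handles the network lifecycle for you and removes it when the stack is removed.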


Try listing your services with docker service ls. If you deploy a stack and give it a name, the service name will be prefixed with the stack name: nameofstack_config-service.

And I see test_eureka-service.1.xxxxxx in your network inspect, so the service name should be test_eureka-service.
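The naming rule can be sketched in shell; the stack name test and service config-service here match the question, and the docker commands in comments are illustrative only:

```shell
# When a stack is deployed with "docker stack deploy -c docker-compose.yml test",
# each service's DNS name is prefixed with the stack name:
STACK=test
SERVICE=config-service
echo "${STACK}_${SERVICE}"    # -> test_config-service

# From inside a task container, you would then resolve it as, e.g.:
#   ping -c1 test_config-service
```

So if pinging config-service fails, try the stack-qualified name first to rule out a naming mismatch.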


This is a known issue with version 18.06:

https://github.com/docker/for-win/issues/2327

https://github.com/docker/for-linux/issues/375

Try 18.03.





Comments
  • With the overlay network it works, although I had to add a hostname to the compose file; it wasn't taking the service name as the hostname automatically.
  • @rohit the hostname and the DNS entries are independent of each other. Note that there is no DNS entry for the hostname.
  • docker service ls shows the service name as test_eureka-service, but if I try to ping test_eureka-service from inside the container, it fails.
  • I have downgraded Docker to 18.03.1-ce, but the issue still exists.