Docker swarm overlay network is not working for containers on different hosts
We have a networking problem with Docker swarm, described below:
- we have a virtualized environment on VMware (vSphere 6.02)
- our servers are VMware VMs, say server1 and server2
- we have a Docker Compose file defining a couple of services
- the Compose file contains an overlay network definition for Docker swarm
- when we deploy the system with Docker swarm, the deployment finishes successfully and all containers get IPs from the overlay network range
- but if two containers (say cnt1 and cnt2) are deployed to different servers, they cannot ping each other
- I checked with tcpdump and saw that the ARP exchange is successful, so each container learns the other's MAC address correctly
- but when you ping one container from the other, ICMP Echo Request messages are sent yet never delivered to the second machine
Where should I check? Any advice?
server-1:~$ docker version
Client:
 Version:      17.03.0-ce
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   3a232c8
 Built:        Tue Feb 28 08:01:32 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.0-ce
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   3a232c8
 Built:        Tue Feb 28 08:01:32 2017
 OS/Arch:      linux/amd64
 Experimental: true
ps: I checked this post, but I have the latest version of Docker / Docker swarm, so that issue should already be fixed.
ps-2: a similar problem: https://github.com/docker/swarm/issues/2687
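Before looking at hypervisor-level drops, it is worth confirming that the swarm data-plane ports are open between the two hosts. A minimal sketch using netcat, assuming server2 is reachable at 10.0.0.2 (a placeholder address):

```shell
# Run from server1; replace 10.0.0.2 with server2's real address.
# Swarm needs 2377/tcp (cluster management), 7946/tcp+udp (node gossip)
# and 4789/udp (VXLAN overlay data plane).
nc -zv  10.0.0.2 2377   # manager API, TCP
nc -zv  10.0.0.2 7946   # gossip, TCP
nc -zvu 10.0.0.2 7946   # gossip, UDP
nc -zvu 10.0.0.2 4789   # VXLAN VTEP, UDP
```

Note that the UDP checks are only indicative: nc reports a UDP port as open unless an ICMP port-unreachable comes back, so a "succeeded" on 4789/udp does not prove the VXLAN path works.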
Out of curiosity, in your VMware environment, do you have NSX deployed? I may have an answer, but it only applies if NSX is deployed in the environment.
ESXi will apparently drop OUTBOUND packets from VMs if the destination port is the same as the port configured for the VXLAN VTEP communication.
NSX utilizes port 4789/udp for VTEP communication for VXLAN (by default, as of 6.2.3; prior to that, it was 8472/udp). (If the VMs are on the same host, then traffic is not dropped, because, while it may be OUTBOUND traffic, it does not egress the host, and does not get to the same stage within the VMKernel to be dropped.)
The wording in KB2079386 is a little off. It states:
VXLAN port 8472 is reserved or restricted for VMware use, any virtual machine cannot use this port for other purpose or for any other application.
But, it should read:
VTEP Port is reserved or restricted for VMware use, any virtual machine cannot use this port for other purpose or for any other application.
If you are using NSX, you could try changing the port used for the VXLAN VTEPs, but port 4789/udp is required if you are going to leverage hardware VTEPs at all.
(I can't take full credit for this. I stumbled across this blog post talking about similar behavior when troubleshooting a similar issue.)
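One way to confirm this kind of drop is to capture the encapsulated VXLAN traffic on both Docker hosts at the same time; a sketch, assuming the uplink interface is eth0 (check yours with `ip link`):

```shell
# On the sending Docker host: a ping from cnt1 to cnt2 should show up
# here as UDP datagrams to port 4789 (the VXLAN encapsulation).
tcpdump -ni eth0 udp port 4789

# Run the same capture on the receiving Docker host. If the sender
# shows outgoing VXLAN packets but nothing arrives here, the frames
# are being dropped in transit — consistent with the ESXi VTEP filter.
tcpdump -ni eth0 udp port 4789
```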
If your nodes are not on the same subnet (e.g. they all have public IPs), then make sure you use the --advertise-addr option, specifying the IP address that the other nodes can reach, when each node (managers AND workers) joins the swarm.
Otherwise the overlay network will not route correctly between hosts even though stack deployment & node registration etc appear to be working fine.
See the detailed explanation for my case in the same GitHub issue --> https://github.com/docker/swarm/issues/2687
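For example, each node should advertise the address the other nodes can actually reach (the IPs below are placeholders; the worker token comes from `docker swarm join-token worker` on the manager):

```shell
# On the first manager, advertise its externally reachable address:
docker swarm init --advertise-addr 203.0.113.10

# On each worker, advertise its own reachable address and join:
docker swarm join --advertise-addr 203.0.113.11 \
    --token <worker-token> 203.0.113.10:2377
```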
"VTEP Port is reserved or restricted for VMware use, any virtual machine cannot use this port for other purpose or for any other application."
But you can change the Docker swarm data-path port (4789/udp by default) to another value. Note that the --data-path-port flag requires Docker 19.03 or later and must be set when the swarm is created:
docker swarm init --data-path-port=7789
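After recreating the swarm this way, you can confirm on the wire that overlay traffic really uses the new port; a sketch, assuming the uplink interface is eth0:

```shell
# Cross-host container traffic should now appear as VXLAN on 7789/udp,
# and nothing overlay-related should remain on 4789/udp.
tcpdump -ni eth0 udp port 7789
```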
- I checked all required ports but couldn't find anything. @bmitch
- Sounds like it's time to use netshoot.