Mount directory in Container and share with Host

I thought I understood the docs, but maybe I didn't. I was under the impression that the -v /HOST/PATH:/CONTAINER/PATH flag is bi-directional: if we have files or directories in the container, they would be mirrored on the host, giving us a way to retain those directories and files even after removing the container.

In the official MySQL Docker images, this works. /var/lib/mysql can be bound to the host and survives restarts and replacement of the container while maintaining the data on the host.
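
For reference, the pattern I mean looks like this (the host path is just an example, in the style of the official image's documentation):

docker run --name some-mysql \
    -e MYSQL_ROOT_PASSWORD=my-secret-pw \
    -v /my/own/datadir:/var/lib/mysql \
    -d mysql

Any databases created inside the container land in /my/own/datadir on the host and survive removing the container.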

I wrote a Dockerfile for sphinxsearch-2.2.9, just as practice and for the sake of learning and understanding. Here it is:

FROM debian

ENV SPHINX_VERSION=2.2.9-release

RUN apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get install -yqq \
    build-essential \
    wget \
    curl \
    mysql-client \
    libmysql++-dev \
    libmysqlclient15-dev \
    checkinstall

RUN wget http://sphinxsearch.com/files/sphinx-${SPHINX_VERSION}.tar.gz && tar xzvf sphinx-${SPHINX_VERSION}.tar.gz && rm sphinx-${SPHINX_VERSION}.tar.gz

RUN cd sphinx-${SPHINX_VERSION} && ./configure --prefix=/usr/local/sphinx

EXPOSE 9306 9312

RUN cd sphinx-${SPHINX_VERSION} && make

RUN cd sphinx-${SPHINX_VERSION} && make install

RUN rm -rf sphinx-${SPHINX_VERSION}

VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var

Very simple and easy to get your head wrapped around while learning. I am declaring the etc and var directories from the sphinx build with the VOLUME instruction, thinking that it would allow me to do something like -v ~/dev/sphinx/etc:/usr/local/sphinx/etc -v ~/dev/sphinx/var:/usr/local/sphinx/var. But it doesn't; instead, the bind mounts overwrite the directories inside the container and leave them blank. When I remove the -v flags and create the container, the directories have the expected files and are not overwritten.
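
To make the difference visible, compare these two runs of the image built below (the host path is just an example); the bind mount shadows whatever the image put at that path:

docker run --rm sphinxsearch ls /usr/local/sphinx/etc
# lists the files baked into the image

docker run --rm -v ~/dev/sphinx/etc:/usr/local/sphinx/etc sphinxsearch ls /usr/local/sphinx/etc
# lists the (empty) host directory instead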

This is what I run to build the image after navigating to the directory that the Dockerfile is in: docker build -t sphinxsearch .

And once I have that created, I do the following to create a container based on that image: docker run -it --hostname some-sphinx --name some-sphinx --volume ~/dev/docker/some-sphinx/etc:/usr/local/sphinx/etc -d sphinxsearch

I would really appreciate any help and insight on how to get this to work. I looked at the MySQL images and don't see anything magical they did to make the directory bindable; they used VOLUME as well.

Thank you in advance.


After countless hours of research, I decided to extend my image with the following Dockerfile:

FROM sphinxsearch

VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var

RUN mkdir -p /sphinx && cd /sphinx && cp -avr /usr/local/sphinx/etc . && cp -avr /usr/local/sphinx/var .

ADD docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh

ENTRYPOINT ["/docker-entrypoint.sh"]

Extending it benefited me in that I didn't have to rebuild the entire image from scratch while testing, only the parts that were relevant.

I created an ENTRYPOINT to execute a shell script that copies the files back to the required destination for sphinx to run properly. Here is that code:

#!/bin/sh
set -e

target=/usr/local/sphinx/etc

# check if the directory exists
if [ -d "$target" ]; then
    # check whether it already contains files
    if find "$target" -mindepth 1 -print -quit | grep -q .; then
        # not empty: leave it alone;
        # we may use this branch for something else later
        echo "not empty, don't do anything..."
    else
        # empty: copy the saved files from /sphinx/etc and
        # /sphinx/var to the locations sphinx expects
        cp -avr /sphinx/etc/* /usr/local/sphinx/etc && cp -avr /sphinx/var/* /usr/local/sphinx/var
    fi
else
    # directory doesn't exist, we will have to do something here
    echo "need to create the directory..."
fi

exec "$@"

Having access to the etc and var directories on the host allows me to adjust the files while keeping them preserved between restarts and so forth. The data saved on the host should also survive replacing the container.
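
For completeness, building and running the extended image looks like this (the tag and host paths are just examples):

docker build -t sphinxsearch:ext .
docker run -it --hostname some-sphinx --name some-sphinx \
    -v ~/dev/docker/some-sphinx/etc:/usr/local/sphinx/etc \
    -v ~/dev/docker/some-sphinx/var:/usr/local/sphinx/var \
    -d sphinxsearch:ext

On the first run the bind-mounted directories are empty, so the entrypoint copies the saved files into them (and therefore onto the host); on later runs they already have content and are left untouched.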

I know data containers vs. storing on the host is a debated topic; at this moment I am leaning towards storing on the host, but I will try the other method later. If anyone has any tips, advice, etc. to improve what I have, or a better way, please share.

Thank you @h3nrik for suggestions and for offering help!

Mounting container directories to the host is against Docker's concepts. It would break the process/resource encapsulation principle.

The other way around - mounting a host folder into a container - is possible. But I would rather suggest using volume containers instead.
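
For example, a minimal sketch of that pattern using the image from the question (container names are illustrative):

# a container whose only purpose is to own the volumes
docker create -v /usr/local/sphinx/etc -v /usr/local/sphinx/var \
    --name sphinx-data sphinxsearch /bin/true

# the actual service container re-uses those volumes
docker run -d --name some-sphinx --volumes-from sphinx-data sphinxsearch

The data then lives in Docker-managed volumes that persist as long as sphinx-data exists, without exposing any host paths.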

This is because MySQL performs its initialization after the volume is mapped; before the mapping there is no data at /var/lib/mysql, so there is nothing for the mount to hide.

So if you have data at that path before starting the container, the -v bind mount will override (shadow) your data.

See the official image's entrypoint.sh.
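
A simplified sketch of the relevant logic in that entrypoint (not the actual script, just the pattern):

#!/bin/sh
DATADIR=/var/lib/mysql

# initialize only when the (possibly bind-mounted) data directory is still empty
if [ ! -d "$DATADIR/mysql" ]; then
    echo 'Initializing database'
    mysql_install_db --user=mysql --datadir="$DATADIR"
fi

This is why an empty host directory mounted over /var/lib/mysql still ends up populated: the data is created after the mount, not baked into the image.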
