Remove sensitive information from environment variables in a Postgres Docker container


I want to build a Postgres database image but don't want to expose the username and password, which are stored as environment variables when the container is created from a docker-compose.yml file. Basically, I don't want anyone to be able to exec into the container and find out the variables.

One way is to use Docker secrets, but I don't want to use Docker swarm because my containers will be running on a single host.

My docker-compose file:

    version: "3"
    services:
      db:
        image: postgres:10.0-alpine
        environment:
          POSTGRES_USER: 'user'
          POSTGRES_PASSWORD: 'pass'
          POSTGRES_DB: 'db'

Things I have tried -

1) Unset the environment variables at the end of the entrypoint script (docker-entrypoint.sh):

        for f in /docker-entrypoint-initdb.d/*; do
            case "$f" in
            *.sh)     echo "$0: running $f"; . "$f" ;;
            *.sql)    echo "$0: running $f"; "${psql[@]}" -f "$f"; echo ;;
            *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${psql[@]}"; echo ;;
            *)        echo "$0: ignoring $f" ;;
            esac
            echo
        done
        unset POSTGRES_USER

nothing happened though. :(
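This fails for a structural reason: `unset` only affects the entrypoint shell itself. The postgres server process already inherited the variables when it started, and `docker exec` sessions get their environment from the container's configuration, not from the entrypoint shell. A plain-shell demonstration of the first point (no Docker needed; assumes Linux's `/proc`):

```shell
export POSTGRES_USER=user
sleep 5 &                    # child process inherits the exported variable
pid=$!
unset POSTGRES_USER          # removes it from this shell only
# The child's environment is unchanged:
tr '\0' '\n' < "/proc/$pid/environ" | grep '^POSTGRES_USER='
# prints: POSTGRES_USER=user
kill "$pid"
```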

2) An init.sql inside docker-entrypoint-initdb.d to create the db, user and password without using env variables. I shared the volume as:

```
volumes:
  - ./docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
```

and, on my host, inside docker-entrypoint-initdb.d, I saved an init.sql as -

    CREATE DATABASE docker_db;
    CREATE USER docker_user WITH ENCRYPTED PASSWORD 'pass';
    GRANT ALL PRIVILEGES ON DATABASE docker_db TO docker_user;

I checked inside the running container and the file was there, but no user or database had been created as specified in the file.

I have been stuck on this for past two days, any help is much appreciated.


Use build args without default values in your Dockerfile:

    ARG PASSWORD

and build it using

    export PASSWORD="MYPASS" && docker build ...

This way the ARG is not present in the environment when the container runs.

here is a complete example:

Dockerfile:

    FROM postgres:10.0-alpine

    ARG my_user
    ARG my_pass

Compose:

    version: "3"
    services:
      db:
        build:
          context: .
          args:
            - my_user
            - my_pass
        environment:
          - POSTGRES_USER=${my_user}
          - POSTGRES_PASSWORD=${my_pass}
          - POSTGRES_DB=db

run it:

    export my_user=test && export my_pass=test1cd && docker-compose up -d --build

Now, if you log in to the container and run `echo $my_pass`, you get an empty string.

Result:

    docker exec -ti 3b631d907153 bash

    bash-4.3# psql -U test db
    psql (10.0)
    Type "help" for help.

    db=#



> I don't want anyone to exec into the container and find out the variables.

This isn't realistic and I wouldn't worry about it.

The fundamental problem here is that Docker doesn't have any access controls over which docker commands someone can run. If you can run any docker command at all, you have unrestricted root-level access to the host. For instance, you can:

    docker run --rm -it -v /:/host busybox vi /host/etc/sudoers
    docker exec -it myapp_db env
    docker inspect myapp_dbadmin
    docker exec -it myapp_app cat ./db_config.yml
    docker run --rm -it -v /:/host busybox cat /host/$PWD/db_config.yml

The easiest thing to do here would be to not let any users you don't trust have access to the system at all. (If you're in a cloud environment, two cloud instances of half the size generally have the same cost as one bigger instance, so partitioning users this way could be straightforward.)

In theory you could limit things by making sure docker access is restricted (for instance, behind sudo), passing credentials only in files, and making sure the corresponding host files also have appropriate file permissions so they can't be read. That generally involves moving this configuration out of the docker-compose.yml file. It's not a "usual" Docker configuration but it does address the problem that environment variables aren't really that secure.
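As a sketch of that file-based approach (the temp directory here is a stand-in for a root-owned host path such as /etc/myapp, and the filename is illustrative):

```shell
# Store the credential in an owner-only file instead of docker-compose.yml.
secret_dir=$(mktemp -d)                  # stand-in for a root-owned directory
umask 077                                # new files: owner read/write only
echo 'pass' > "$secret_dir/db_password"
stat -c '%a' "$secret_dir/db_password"   # prints: 600
```

The container would then read the credential through a bind mount and `POSTGRES_PASSWORD_FILE` rather than an environment value.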



For any other env variable, you can check @LinPy's answer, which smartly uses Docker image build-time variables to override the values. In my case, though, I was unable to benefit from it, maybe because these are "special" Postgres variables that I was not able to override (any explanation is welcome in the comment section).

So, now coming to the solution -

Problem - Don't want postgres' username/password to be visible as environment variable.

Solution - Don't specify them in compose's environment variable section.

Instead, create the user/database using a script. Postgres' image runs an entrypoint script, which in turn looks for any .sh/.sql file in the docker-entrypoint-initdb.d directory and runs it.

Example:

Directory structure:

    docker-compose.yml
    docker-entrypoint-initdb.d/
        init.sql

My docker-compose file:

    version: "3"

    services:
      db:
        image: postgres:10.0-alpine
        ports:
          - 8765:5432
        volumes:
          - ./docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d

init.sql file:

    CREATE DATABASE docker_db;
    CREATE USER docker_user WITH ENCRYPTED PASSWORD 'pass';
    GRANT ALL PRIVILEGES ON DATABASE docker_db TO docker_user;

Run and verify:

    docker-compose up
    docker exec -it container-id bash
    psql -U docker_user docker_db
    psql (10.0)
    Type "help" for help.

    docker_db=>

Now, delete init.sql from your host; since it is a shared volume, it will also be deleted from your container.



Update

Before using Docker secrets in Compose, take into consideration this SO question and that GitHub answer.


You can use docker secrets in Compose.

As stated in the relevant section of the Docker postgres image docs:

> As an alternative to passing sensitive information via environment variables, _FILE may be appended to some of the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files.

Therefore your compose file can read:

    version: "3"
    services:
      db:
        image: postgres:10.0-alpine
        environment:
          POSTGRES_USER_FILE: /run/secrets/user
          POSTGRES_PASSWORD_FILE: /run/secrets/pass
          POSTGRES_DB_FILE: /run/secrets/db
        secrets:
          - user
          - pass
          - db

    secrets:
      user:
        file: user.txt
      pass:
        file: pass.txt
      db:
        file: db.txt
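The three files named in the secrets section must exist next to docker-compose.yml. One way to create them (the values are placeholders; printf is used because some consumers are sensitive to a trailing newline in a credential file):

```shell
umask 077                        # keep the secret files owner-readable only
printf 'docker_user' > user.txt
printf 'secret-pass' > pass.txt
printf 'docker_db'   > db.txt
```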
