Docker container logs taking all my disk space

I am running a container on a VM. By default, the container writes its logs to /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log until the disk is full.

Currently, I have to delete this file manually to keep the disk from filling up. I read that Docker 1.8 will add a parameter to rotate the logs. What would you recommend as a workaround in the meantime?

Docker 1.8 has been released with a log rotation option. Adding:

--log-opt max-size=50m 

when the container is launched does the trick. You can learn more at:


CAUTION: This is for docker-compose version 2 only


version: '2'
services:
  db:
    container_name: db
    image: mysql:5.7
    ports:
      - 3306:3306
    logging:
      options:
        max-size: 50m
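For compose file format version 3 the syntax is similar. A sketch (the service name and image here are placeholders, and the options values are quoted because compose expects strings):

```yaml
version: '3'
services:
  db:
    image: mysql:5.7
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"
```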


Caution: this post relates to docker versions < 1.8 (which don't have the --log-opt option)

Why don't you use logrotate (which also supports compression)?

/var/lib/docker/containers/*/*-json.log {
    rotate 48
    copytruncate
}

Configure it either directly on your CoreOS node or deploy a container which mounts /var/lib/docker to rotate the logs.
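A fuller logrotate stanza could look like the following sketch (the filename /etc/logrotate.d/docker-containers and the daily/compress choices are assumptions; adjust to taste):

```
# /etc/logrotate.d/docker-containers (hypothetical filename)
/var/lib/docker/containers/*/*-json.log {
    rotate 48
    daily
    compress
    missingok
    # copytruncate: the docker daemon keeps the file open, so truncate
    # it in place instead of moving it out from under the daemon
    copytruncate
}
```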


Pass log options while running a container. An example is as follows:

sudo docker run -ti --name visruth-cv-container  --log-opt max-size=5m --log-opt max-file=10 ubuntu /bin/bash

where --log-opt max-size=5m sets the maximum log file size to 5 MB and --log-opt max-file=10 sets the maximum number of files kept in rotation.
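The sizes accept k, m, and g suffixes. As a rough illustration of how those strings map to byte counts (this helper is hypothetical, not Docker's own code):

```python
# Hypothetical helper (not part of Docker) mirroring how size strings
# like those passed to --log-opt max-size map to bytes.
def parse_size(value: str) -> int:
    """Convert a size string like '5m' or '512k' to bytes."""
    units = {"k": 1024, "m": 1024**2, "g": 1024**3}
    value = value.strip().lower()
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # plain byte count, no suffix

print(parse_size("5m"))    # 5242880
print(parse_size("512k"))  # 524288
```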


[This answer covers current versions of docker for those coming across the question long after it was asked.]

To set the default log limits for all newly created containers, you can add the following in /etc/docker/daemon.json:

  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}

Then reload docker with systemctl reload docker if you are using systemd (otherwise use the appropriate restart command for your install).

You can also switch to the local logging driver with a similar file:

  "log-driver": "local",
  "log-opts": {"max-size": "10m", "max-file": "3"}

The local logging driver stores the log contents in an internal format (protobufs, I believe), so you get more log content in the same size of logfile (or the same logs in less space). The downside of the local driver is that external tools, like log forwarders, may not be able to parse the raw logs. Be aware that docker logs only works when the log driver is set to json-file, local, or journald.

The max-size is a limit on each docker log file, so it includes the json or local formatting overhead. The max-file is the number of log files docker will keep. Once the size limit is reached on one file, the logs are rotated, and the oldest file is deleted when you exceed max-file.
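A quick back-of-the-envelope check (plain arithmetic, not a Docker API) of how much disk one container's logs can consume under these limits:

```python
# Hypothetical helper: worst-case disk usage for one container's logs,
# given max-size (in MiB) and max-file. At most max-file files of at
# most max-size each exist before the oldest is deleted.
def worst_case_log_bytes(max_size_mib: int, max_file: int) -> int:
    return max_size_mib * 1024**2 * max_file

# With max-size=10m and max-file=3, each container is capped at 30 MiB:
print(worst_case_log_bytes(10, 3))  # 31457280
```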

For more details, docker has documentation on all the drivers at:

I also have a presentation covering this topic. Use P to see the presenter notes:


  • As a current workaround, you can turn logging off completely if the logs aren't important to you. Do this by starting the docker daemon with --log-driver=none. To disable logs only for specific containers, pass --log-driver=none in the docker run command. Another option is to mount external storage, such as an NFS share with more capacity than the host, at /var/lib/docker.
  • Or use the journald log driver, and have journald worry about log rotation.
  • @Dharmit where is it located on CoreOs?
  • @larsks How can I do that on CoreOS? It seems that journald is installed and generating logs in /var/log/journal, but I also have logs in /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log
  • @poiuytrez where is what located? If you're willing to start the Docker daemon with the suggested option, /usr/lib/systemd/system/docker.service might be the file. I am not sure on CoreOS; on CentOS, that's the location. As far as the other question is concerned, you need to change the Docker daemon's options to use journald as the logging driver. Then it'll log containers through journald and not to /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log. @larsks correct me if I am missing something.
  • Just to note, this seems to be only available for JSON and fluentd logs.
  • Just a quick note that the versioning scheme changed after Docker 1.13. If you have a version number like 17.03.0-ce that means you are on the new post-1.13 versioning scheme.
  • Restarting my service with this logging section works, but it seems to have no effect; the json log file just keeps growing like before...
  • For version 3 see
  • I don't think this is a good solution. You need to bounce whatever daemon is running to stop writing to the old log and start writing to the new one. Otherwise, the Linux kernel will continue to reference the old log file in memory (as opposed to on disk). Logrotate can handle this with normal daemons, but bouncing Docker or the container causes downtime.
  • Good point, I agree. This answer was provided in the early days of Docker; meanwhile the built-in features (like those mentioned in the other answer) should do the job.
  • This has some issues: it rotates the logs, but disk consumption still shows as similar. I was using v5.0.2 and had to upgrade to the latest version by script to use the --log-opt option with docker create or run.