What is the SIZE of a Docker container?

I was recently asked whether it is possible to tell the size of a container and, speaking of disk space, what the costs are when running multiple instances of a container.

Let’s take the IBM Domino server from my previous post as an example.

You can get the SIZE of a container with the following command:

# docker ps -as -f "name=901FP9"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
5f37c4d6a826 eknori/domino:domino_9_0_1_FP_9 "/docker-entrypoint.s" 2 hours ago Exited (137) 6 seconds ago 901FP9 0 B (virtual 3.296 GB)

We get a SIZE of 0 B (virtual 3.296 GB) as a result. Virtual size? What is that?

Let me try to explain:
When starting a container, the image that the container is started from is mounted read-only. On top of that, a writable layer is mounted, to which any changes made in the container are written.
The read-only layers of an image can be shared between all containers that are started from the same image, whereas the writable layer is unique per container (after all, you don’t want changes made in container “a” to show up in container “b”).
Back to the docker ps -s output:

  • The “size” information shows the amount of data (on disk) that is used for the writable layer of each container
  • The “virtual size” is the total amount of disk space used for the read-only image data used by the container plus the container’s writable layer

So, with a container size of 0 B, it makes next to no difference whether we start 1 or 100 containers from this image.
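If you want to see the writable layer grow, here is a quick experiment (a sketch; the container name “sizetest” and the alpine image are only placeholders, any small image will do). Start a throw-away container and check its size:

# docker run -d --name sizetest alpine sleep 600
# docker ps -s -f "name=sizetest"

The size should be close to 0 B. Now write 50 MB inside the container and check again:

# docker exec sizetest dd if=/dev/zero of=/tmp/ballast bs=1M count=50
# docker ps -s -f "name=sizetest"
# docker rm -f sizetest

Both the size and the virtual size should have grown by roughly 50 MB, because the file ends up in the container’s writable layer.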

Be aware that the size shown does not include all disk space used for a container. Things that are not included currently are:

  1. volumes used by the container
  2. disk space used for the container’s configuration files (hostconfig.json, config.v2.json, hosts, hostname, resolv.conf) – although these files are small
  3. memory written to disk (if swapping is enabled)
  4. checkpoints (if you’re using the experimental checkpoint/restore feature)
  5. disk space used for log files (if you use the json-file logging driver), which can be quite a bit if your container generates a lot of logs and log rotation (max-file / max-size logging options) is not configured; the sketch right after this list shows how to locate the log file
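For item 5, the path of the json-file log is stored in the container’s metadata, so you can locate it and measure its size like this (a sketch, reusing the 901FP9 container from above):

# docker inspect --format '{{ .LogPath }}' 901FP9
# du -h $(docker inspect --format '{{ .LogPath }}' 901FP9)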

So, let’s see what we have to add to the 0 B to get the overall size of our container.

We are using a volume “domino_data” for our Domino server. To get some information about this volume (1), type:

# docker volume inspect domino_data
[
    {
        "Name": "domino_data",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/domino_data/_data",
        "Labels": {},
        "Scope": "local"
    }
]

This gives us the physical location of that volume. Now we can get the size of the volume by summing up the sizes of all files in it.

# du -hs /var/lib/docker/volumes/domino_data/_data
1.1G /var/lib/docker/volumes/domino_data/_data
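By the way, the two steps can be combined into a one-liner; docker volume inspect accepts a --format flag, so the mountpoint can be fed straight into du (a sketch, run as root like the commands above):

# du -hs $(docker volume inspect --format '{{ .Mountpoint }}' domino_data)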

To get the size of the container’s configuration files (2), we need to find the directory that belongs to our container.

# ls /var/lib/docker/containers/
5f37c4d6a8267246bbaff668b3437f121b0fe375d8319364bf7eb10f50d72c69

Now we have the full ID that corresponds to the short CONTAINER ID from docker ps. Next, from within /var/lib/docker/containers/, type:

# du -hs 5f37c4d6a8267246bbaff668b3437f121b0fe375d8319364bf7eb10f50d72c69/
160K 5f37c4d6a8267246bbaff668b3437f121b0fe375d8319364bf7eb10f50d72c69/
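This, too, can be shortened; docker inspect can print the full ID directly, so there is no need to list the containers directory first (a sketch):

# du -hs /var/lib/docker/containers/$(docker inspect --format '{{ .Id }}' 901FP9)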

Now do the math yourself: x = (0 B + 1.1 GB + 160 kB) * n, assuming that each container gets its own data volume.
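If you would rather let the shell do the math, a quick back-of-the-envelope calculation (a sketch; values in MB, taken from the numbers above, with the 160 kB of configuration files rounded up to 1 MB) could look like this:

# n=100; volume=1100; config=1
# echo "$(( n * (volume + config) )) MB on top of the image, for $n containers"

For 100 containers that works out to roughly 110 GB of per-container data.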

I leave it up to you to find out the other sizes (3 and 4).

Sizes may vary and will change during runtime, but I assume you got the idea. The important thing to know is that all containers started from the same image share this (read-only) image, so there is only one copy of it on disk; likewise, images built FROM the same base image in a Dockerfile share that base image’s layers.
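As a final tip: on Docker 1.13 and later, docker system df summarizes the disk space used by images, containers, and local volumes, and the -v flag breaks the numbers down per item, which makes it a handy cross-check for the values collected above.

# docker system df -v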