Docker

Docker is a utility to pack, ship and run any application as a lightweight container.

Installation

Install the docker package or, for the development version, the docker-gitAUR package. Next, start and enable docker.service:
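
# systemctl enable --now docker.service

Then verify operation: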

# docker info

Note that starting the docker service may fail if you have an active VPN connection due to IP conflicts between the VPN and Docker's bridge and overlay networks. If this is the case, try disconnecting the VPN before starting the docker service. You may reconnect the VPN immediately afterwards. You can also try to deconflict the networks (see solutions [1] or [2]).

Next, verify that you can run containers. The following command downloads the latest Arch Linux image and uses it to run a Hello World program within a container:

# docker run -it --rm archlinux bash -c "echo hello world"

If you want to be able to run the docker CLI command as a non-root user, add your user to the docker user group, re-login, and restart docker.service.
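
For example, using usermod (replace your_username accordingly):

# usermod -aG docker your_username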

Warning: Anyone added to the docker group is root equivalent because they can use the docker run --privileged command to start containers with root privileges. For more information see [3] and [4].

Usage

Docker consists of multiple parts:

  • The Docker daemon (sometimes also called the Docker Engine), which is a process which runs as docker.service. It serves the Docker API and manages Docker containers.
  • The docker CLI command, which allows users to interact with the Docker API via the command line and control the Docker daemon.
  • Docker containers, which are namespaced processes that are started and managed by the Docker daemon as requested through the Docker API.

Typically, users use Docker by running docker CLI commands, which in turn request the Docker daemon to perform actions which in turn result in management of Docker containers. Understanding the relationship between the client (docker), server (docker.service) and containers is important to successfully administering Docker.

Note that if the Docker daemon stops or restarts, all currently running Docker containers are also stopped or restarted.

Also note that it is possible to send requests to the Docker API and control the Docker daemon without the use of the docker CLI command. See the Docker API developer documentation for more information.

See the Docker Getting Started guide for more usage documentation.

Configuration

The Docker daemon can be configured either through a configuration file at /etc/docker/daemon.json or by adding command line flags to the docker.service systemd unit. According to the Docker official documentation, the configuration file approach is preferred. If you wish to use the command line flags instead, use systemd drop-in files to override the ExecStart directive in docker.service.

For more information about options in daemon.json see dockerd documentation.

Storage driver

The storage driver controls how images and containers are stored and managed on your Docker host. The default overlay2 driver has good performance and is a good choice for all modern Linux kernels and filesystems. There are a few legacy drivers such as devicemapper and aufs which were intended for compatibility with older Linux kernels, but these have no advantages over overlay2 on Arch Linux.

Users of btrfs or ZFS may use the btrfs or zfs drivers, each of which take advantage of the unique features of these filesystems. See the btrfs driver and zfs driver documentation for more information and step-by-step instructions.

Daemon socket

By default, the Docker daemon serves the Docker API using a Unix socket at /var/run/docker.sock. This is an appropriate option for most use cases.

It is possible to configure the Daemon to additionally listen on a TCP socket, which can allow remote Docker API access from other computers. This can be useful for allowing docker commands on a host machine to access the Docker daemon on a Linux virtual machine, such as an Arch virtual machine on a Windows or macOS system.

Warning: The Docker API is unencrypted and unauthenticated by default. Remote TCP access to the Docker daemon is equivalent to unsecured remote root access unless TLS encryption and authorization is also enabled, either with an authenticating HTTP reverse proxy or with the appropriate additional Docker configuration. In general, enabling Docker API TCP sockets should be considered a high security risk.

Note that the default docker.service file already sets the -H flag, and Docker will not start if an option is present both in the flags and in the /etc/docker/daemon.json file. Therefore, the simplest way to change the socket settings is with a drop-in file, such as the following which adds a TCP socket on port 4243:

/etc/systemd/system/docker.service.d/execstart.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243

Reload the systemd daemon and restart docker.service to apply changes.
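
For example:

# systemctl daemon-reload
# systemctl restart docker.service

A client on another machine can then reach the daemon over TCP, e.g. (substitute the host's actual address):

$ docker -H tcp://203.0.113.10:4243 info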

HTTP Proxies

There are two parts to configuring Docker to use an HTTP proxy: Configuring the Docker daemon and configuring Docker containers.

Docker daemon proxy configuration

See Docker documentation on configuring a systemd drop-in unit to configure HTTP proxies.

Docker container proxy configuration

See Docker documentation on configuring proxies for information on how to automatically configure proxies for all containers created using the docker CLI.
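
As an illustration of the documented mechanism, default proxies for containers can be set in the client configuration file ~/.docker/config.json; the proxy addresses below are placeholders:

~/.docker/config.json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}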

Configuring DNS

See Docker's DNS documentation for the documented behavior of DNS within Docker containers and information on customizing DNS configuration. In most cases, the resolvers configured on the host are also configured in the container.

Most DNS resolvers hosted on 127.0.0.0/8 are not supported due to conflicts between the container and host network namespaces. Such resolvers are removed from the container's /etc/resolv.conf. If this would result in an empty /etc/resolv.conf, Google DNS is used instead.

Additionally, a special case is handled if 127.0.0.53 is the only configured nameserver. In this case, Docker assumes the resolver is systemd-resolved and uses the upstream DNS resolvers from /run/systemd/resolve/resolv.conf.
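
To check which resolvers a container actually receives, you can inspect its /etc/resolv.conf, for example:

# docker run --rm alpine cat /etc/resolv.conf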

If you are using a service such as dnsmasq to provide a local resolver, consider adding a virtual interface with a link local IP address in the 169.254.0.0/16 block for dnsmasq to bind to instead of 127.0.0.1 to avoid the network namespace conflict.
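
A minimal sketch using iproute2 (the interface name and address are arbitrary choices):

# ip link add dns0 type dummy
# ip addr add 169.254.53.53/24 dev dns0
# ip link set dns0 up

Then point dnsmasq at that address, e.g. with listen-address=169.254.53.53 in its configuration.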

Images location

By default, docker images are located at /var/lib/docker. They can be moved to other partitions, e.g. if you wish to use a dedicated partition or disk for your images. In this example, we will move the images to /mnt/docker.

First, stop docker.service, which also stops all currently running containers and unmounts any mounted images. You may then copy the images from /var/lib/docker to the target destination, e.g. cp -r /var/lib/docker /mnt/docker.

Configure data-root in /etc/docker/daemon.json:

/etc/docker/daemon.json
{
  "data-root": "/mnt/docker"
}

Restart docker.service to apply changes.

Insecure registries

If you decide to use a self signed certificate for your private registries, Docker will refuse to use it until you declare that you trust it. For example, to allow images from a registry hosted at myregistry.example.com:8443, configure insecure-registries in the /etc/docker/daemon.json file:

/etc/docker/daemon.json
{
  "insecure-registries": [
    "my.registry.example.com:8443"
  ]
}

Restart docker.service to apply changes.

IPv6

In order to enable IPv6 support in Docker, you will need to do a few things. See [5] and [6] for details.

First, enable the ipv6 setting in /etc/docker/daemon.json and set a specific IPv6 subnet. In this example, we use the private fd00::/80 subnet. Make sure to use a subnet of at least 80 bits in size: this allows a container's IPv6 address to end with the container's MAC address, which mitigates NDP neighbor cache invalidation issues.

/etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}

Restart docker.service to apply changes.

Finally, to let containers access the host network, you need to resolve the routing issues arising from the use of a private IPv6 subnet. Add an IPv6 NAT rule so container traffic actually reaches the outside:

# ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE

Now Docker should be properly IPv6 enabled. To test it, you can run:

# docker run curlimages/curl curl -v -6 archlinux.org

If you use firewalld, you can add the rule like this:

# firewall-cmd --zone=public --add-rich-rule='rule family="ipv6" destination not address="fd00::1/80" source address="fd00::/80" masquerade'

If you use ufw, you need to first enable ipv6 forwarding following Uncomplicated Firewall#Forward policy. Next, you need to edit /etc/ufw/sysctl.conf and uncomment the following lines:

/etc/ufw/sysctl.conf
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1

Then you can add the iptables rule:

# ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE

Note that for Docker containers created with docker-compose, you may need to set enable_ipv6: true in the networks section for the corresponding network, and you may also need to configure the IPv6 subnet. See [7] for details, and the sketch below.
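
A minimal docker-compose.yml sketch, assuming the legacy 2.x compose file format (the subnet is an arbitrary ULA example):

docker-compose.yml
networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd01::/80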

User namespace isolation

By default, processes in Docker containers run within the same user namespace as the main dockerd daemon, i.e. containers are not isolated by the user_namespaces(7) feature. This allows the process within the container to access configured resources on the host according to Users and groups#Permissions and ownership. This maximizes compatibility, but poses a security risk if a container privilege escalation or breakout vulnerability is discovered that allows the container to access unintended resources on the host. (One such vulnerability was published and patched in February 2019.)

The impact of such a vulnerability can be reduced by enabling user namespace isolation. This runs each container in a separate user namespace and maps the UIDs and GIDs inside that user namespace to a different (typically unprivileged) UID/GID range on the host. Note that in the Docker implementation, user namespaces for all containers are mapped to the same UID/GID range on the host, otherwise sharing volumes between multiple containers would not be possible.

Note:
  • The main dockerd daemon still runs as root on the host. Running Docker in rootless mode is a different feature.
  • Processes in the container are started as the user defined in the USER directive in the Dockerfile used to build the image of the container.
  • Enabling user namespace isolation has several limitations. Also, Kubernetes currently does not work with this feature.
  • Enabling user namespace isolation effectively masks existing image and container layers, as well as other Docker objects in /var/lib/docker/, because Docker needs to adjust the ownership of these resources. The upstream documentation recommends to enable this feature on a new Docker installation rather than an existing one.

Configure userns-remap in /etc/docker/daemon.json. default is a special value that will automatically create a user and group named dockremap for use with remapping.

/etc/docker/daemon.json
{
  "userns-remap": "default"
}

Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group. This example allocates a range of 65536 UIDs and GIDs starting at 165536 to the dockremap user and group.

/etc/subuid
dockremap:165536:65536
/etc/subgid
dockremap:165536:65536

Restart docker.service to apply changes.

After applying this change, all containers will run in an isolated user namespace by default. The remapping may be partially disabled on specific containers by passing the --userns=host flag to the docker command. See [8] for details.
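
For example, to start a one-off container outside the remapped namespace:

# docker run --rm --userns=host archlinux id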

Docker rootless

Warning: Docker rootless relies on the unprivileged user namespace usage (CONFIG_USER_NS_UNPRIVILEGED) which has some serious security implications, see Security#Sandboxing applications for details.

Install the docker-rootless-extras-binAUR package to run docker in rootless mode (that is, as a regular user instead of as root).

Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group.

/etc/subuid
your_username:165536:65536
/etc/subgid
your_username:165536:65536

Enable the socket (this will result in docker being started using systemd's socket activation):

$ systemctl --user enable --now docker.socket

Finally set docker socket environment variable:

$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
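
To make this persistent, add the export line to your shell's startup file. You can then verify operation as your regular user, for example:

$ docker run -it --rm archlinux bash -c "echo hello world"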

Images

Arch Linux

The following command pulls the archlinux x86_64 image. This is a stripped-down version of Arch core, without networking and other extras.

# docker pull archlinux

See also README.md.

For a full Arch base, clone the repo from above and build your own image.

$ git clone https://gitlab.archlinux.org/archlinux/archlinux-docker.git

Make sure that the devtools, fakechroot and fakeroot packages are installed.

To build the base image:

$ make image-base

Alpine Linux

Alpine Linux is a popular choice for small container images, especially for software compiled as static binaries. The following command pulls the latest Alpine Linux image:

# docker pull alpine

Alpine Linux uses the musl libc implementation instead of the glibc libc implementation used by most Linux distributions. Because Arch Linux uses glibc, there are a number of functional differences between an Arch Linux host and an Alpine Linux container that can impact the performance and correctness of software. A list of these differences is documented here.

Note that dynamically linked software built on Arch Linux (or any other system using glibc) may have bugs and performance problems when run on Alpine Linux (or any other system using a different libc). See [9], [10] and [11] for examples.

CentOS

The following command pulls the latest centos image:

# docker pull centos

See the Docker Hub page for a full list of available tags for each CentOS release.

Debian

The following command pulls the latest debian image:

# docker pull debian

See the Docker Hub page for a full list of available tags, including both standard and slim versions for each Debian release.

Distroless

Google maintains distroless images for several popular programming languages such as Java, Python, Go, Node.js, .NET Core and Rust. These images contain only the programming language runtime without any OS related files, resulting in very small images for packaging software.

See the GitHub README for a list of images and instructions on their use.

Run GPU accelerated Docker containers with NVIDIA GPUs

Note: libnvidia-containerAUR has no support for cgroups v2 [12] [13]. You need to set the systemd.unified_cgroup_hierarchy=false kernel parameter and set no-cgroups = false in /etc/nvidia-container-runtime/config.toml if you are using systemd v248 or higher.

With NVIDIA Container Toolkit (recommended)

Starting from Docker version 19.03, NVIDIA GPUs are natively supported as Docker devices. NVIDIA Container Toolkit is the recommended way of running containers that leverage NVIDIA GPUs.

Install the nvidia-container-toolkitAUR package. Next, restart docker. You can now run containers that make use of NVIDIA GPUs using the --gpus option:

# docker run --gpus all nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi

Specify how many GPUs are enabled inside a container:

# docker run --gpus 2 nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi

Specify which GPUs to use:

# docker run --gpus '"device=1,2"' nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi

or

# docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi

Specify a capability (graphics, compute, ...) for the container (though this is rarely needed):

# docker run --gpus all,capabilities=utility nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi

For more information see README.md and Wiki.

With NVIDIA Container Runtime

Install the nvidia-container-runtimeAUR package. Next, register the NVIDIA runtime by editing /etc/docker/daemon.json

/etc/docker/daemon.json
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

and then restart docker.

The runtime can also be registered via a command line option to dockerd:

# /usr/bin/dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime

Afterwards GPU accelerated containers can be started with

# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi

or (requires Docker version 19.03 or higher)

# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

See also README.md.

With nvidia-docker (deprecated)

nvidia-docker is a wrapper around NVIDIA Container Runtime which registers the NVIDIA runtime by default and provides the nvidia-docker command.

To use nvidia-docker, install the nvidia-dockerAUR package and then restart docker. Containers with NVIDIA GPU support can then be run using any of the following methods:

# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
# nvidia-docker run nvidia/cuda:9.0-base nvidia-smi

or (requires Docker version 19.03 or higher)

# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

Note: nvidia-docker is a legacy method for running NVIDIA GPU accelerated containers used prior to Docker 19.03 and has been deprecated. If you are using Docker version 19.03 or higher, it is recommended to use NVIDIA Container Toolkit instead.

Arch Linux image with CUDA

You can use the following Dockerfile to build a custom Arch Linux image with CUDA. It uses the Dockerfile frontend syntax 1.2 to cache pacman packages on the host. The DOCKER_BUILDKIT=1 environment variable must be set on the client before building the Docker image.

Dockerfile
# syntax = docker/dockerfile:1.2

FROM archlinux

# use faster mirror to speed up the image build
RUN echo 'Server = https://mirror.pkgbuild.com/$repo/os/$arch' > /etc/pacman.d/mirrorlist

# install packages
RUN --mount=type=cache,sharing=locked,target=/var/cache/pacman \
    pacman -Suy --noconfirm --needed base base-devel cuda

# configure nvidia container runtime
# https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
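
To build the image (the tag archlinux-cuda is an arbitrary example):

$ DOCKER_BUILDKIT=1 docker build -t archlinux-cuda .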

Useful tips

To grab the IP address of a running container:

$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name OR id> 
172.17.0.37

For each running container, the name and corresponding IP address can be listed for use in /etc/hosts:

#!/usr/bin/env sh
# Print "IP name" pairs for all running containers, suitable for /etc/hosts.
for ID in $(docker ps -q); do
    IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$ID")
    NAME=$(docker inspect --format='{{.Name}}' "$ID" | sed 's|^/||')  # strip leading slash
    printf "%s %s\n" "$IP" "$NAME"
done

Using buildx for cross-compiling


Recent versions of Docker support cross-compiling images for foreign architectures.

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                  
  default default         running linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/s390x, linux/arm/v7, linux/arm/v6

It relies on binfmt-qemu-staticAUR and qemu-user-staticAUR; see QEMU#Chrooting_into_arm/arm64_environment_from_x86_64 for installation details.
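
For example, to build an image for 64-bit ARM on an x86_64 host (the image name is a placeholder):

$ docker buildx build --platform linux/arm64 -t example/app:arm64 .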

Remove Docker and images

In case you want to remove Docker entirely you can do this by following the steps below:

Note: Do not just copy paste those commands without making sure you know what you are doing.

Check for running containers:

# docker ps

List all containers on the host, including stopped ones, for deletion:

# docker ps -a

Stop a running container:

# docker stop <CONTAINER ID>

Kill a still-running container:

# docker kill <CONTAINER ID>

Delete containers listed by ID:

# docker rm <CONTAINER ID>

List all Docker images:

# docker images

Delete images by ID:

# docker rmi <IMAGE ID>

Delete all images, containers, volumes, and networks that are not associated with a container (dangling):

# docker system prune

To additionally remove any stopped containers and all unused images (not just dangling ones), add the -a flag to the command:

# docker system prune -a

Delete all Docker data (purge directory):

# rm -R /var/lib/docker

Troubleshooting

docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd

Docker attempts to enable IP forwarding globally, but by default systemd-networkd overrides the global sysctl setting for each defined network profile. Set IPForward=yes in the network profile. See Internet sharing#Enable packet forwarding for details.
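
For example, in the .network file that configures your uplink (adjust the file name and the Name= match to your setup):

/etc/systemd/network/20-wired.network
[Match]
Name=enp1s0

[Network]
IPForward=yes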

When systemd-networkd tries to manage the network interfaces created by Docker, this can lead to connectivity issues. Try disabling management of those interfaces, i.e. networkctl list should report unmanaged in the SETUP column for all networks created by Docker.
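
A minimal sketch that marks the default Docker bridge as unmanaged (a similar [Match] section can cover veth* devices):

/etc/systemd/network/docker0.network
[Match]
Name=docker0

[Link]
Unmanaged=yes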

Note:
  • You may need to restart docker.service each time you restart systemd-networkd.service or iptables.service.
  • Also be aware that nftables may block docker connections by default. Use nft list ruleset to check for blocking rules. nft flush chain inet filter forward removes all forwarding rules temporarily. Edit /etc/nftables.conf to make changes permanent. Remember to restart nftables.service to reload rules from the config file. See [14] for details about nftables support in Docker.

Default number of allowed processes/threads too low

If you run into error messages like

# e.g. Java
java.lang.OutOfMemoryError: unable to create new native thread
# e.g. C, bash, ...
fork failed: Resource temporarily unavailable

then you might need to adjust the number of processes allowed by systemd. The default is 500 (see system.conf), which is pretty small for running several docker containers. Edit the docker.service with the following snippet:

# systemctl edit docker.service
[Service]
TasksMax=infinity

Error initializing graphdriver: devmapper

If systemctl fails to start docker and provides an error:

Error starting daemon: error initializing graphdriver: devmapper: Device docker-8:2-915035-pool is not a thin pool

Then, try the following steps to resolve the error. Stop the service, back up /var/lib/docker/ (if desired), remove the contents of /var/lib/docker/, and try to start the service. See the open GitHub issue for details.

Failed to create some/path/to/file: No space left on device

If you are getting an error message like this:

ERROR: Failed to create some/path/to/file: No space left on device

when building or running a Docker image, even though you do have enough disk space available, make sure:

  • Tmpfs is disabled or has enough memory allocation. Docker might be trying to write files into /tmp but fails due to restrictions in memory usage and not disk space.
  • If you are using XFS, you might want to remove the noquota mount option from the relevant entries in /etc/fstab (usually where /tmp and/or /var/lib/docker reside). Refer to Disk quota for more information, especially if you plan on using and resizing overlay2 Docker storage driver.
  • XFS quota mount options (uquota, gquota, prjquota, etc.) fail during re-mount of the file system. To enable quota for root file system, the mount option must be passed to initramfs as a kernel parameter rootflags=. Subsequently, it should not be listed among mount options in /etc/fstab for the root (/) filesystem.
Note: XFS quota differs in some respects from standard Linux Disk quota; [15] may be worth reading.

Docker-machine fails to create virtual machines using the virtualbox driver

In case docker-machine fails to create the VMs using the virtualbox driver, with the following error:

VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory

Simply reload the VirtualBox kernel modules with vboxreload.

Starting Docker breaks KVM bridged networking

This is a known issue. You can use the following workaround:

/etc/docker/daemon.json
{
  "iptables": false
}

If there is already a network bridge configured for KVM, this may be fixable by telling docker about it. See [16] where docker configuration is modified as:

/etc/docker/daemon.json
{
  "bridge": "existing_bridge_name"
}

Be sure to replace existing_bridge_name with the actual name of your network bridge.

Image pulls from Docker Hub are rate limited

Beginning on November 1st 2020, rate limiting is enabled for downloads from Docker Hub from anonymous and free accounts. See the rate limit documentation for more information.

Unauthenticated rate limits are tracked by source IP. Authenticated rate limits are tracked by account.

If you need to exceed the rate limits, you can either sign up for a paid plan or mirror the images you need to a different image registry. You can host your own registry or use a cloud hosted registry such as Amazon ECR, Google Container Registry, Azure Container Registry or Quay Container Registry.

To mirror an image, use the pull, tag and push subcommands of the Docker CLI. For example, to mirror the 1.19.3 tag of the Nginx image to a registry hosted at cr.example.com:

$ docker pull nginx:1.19.3
$ docker tag nginx:1.19.3 cr.example.com/nginx:1.19.3
$ docker push cr.example.com/nginx:1.19.3

You can then pull or run the image from the mirror:

$ docker pull cr.example.com/nginx:1.19.3
$ docker run cr.example.com/nginx:1.19.3

iptables (legacy): unknown option "--dport"

If you see this error when running a container, install iptables-nft instead of iptables (legacy) and reboot [17]. Note, however, that Nftables#Working with Docker advises against using iptables-nft, so weigh that guidance before switching.

See also