distrobox - Man Page
distrobox assemble distrobox-assemble
Examples (TL;DR)
- View documentation for creating containers:
tldr distrobox-create
- View documentation for listing container's information:
tldr distrobox-list
- View documentation for entering the container:
tldr distrobox-enter
- View documentation for executing a command on the host from inside a container:
tldr distrobox-host-exec
- View documentation for exporting app/service/binary from the container to the host:
tldr distrobox-export
- View documentation for upgrading containers:
tldr distrobox-upgrade
- View documentation for stopping the containers:
tldr distrobox-stop
- View documentation for removing the containers:
tldr distrobox-rm
Description
distrobox-assemble takes care of creating or destroying containers in batches, based on a manifest file. The manifest file is ./distrobox.ini by default, but a different one can be specified using the --file flag.
Synopsis
distrobox assemble
--file: path or URL to the distrobox manifest/ini file
--name/-n: run against a single entry in the manifest/ini file
--replace/-R: replace already existing distroboxes with matching names
--dry-run/-d: only print the container manager command generated
--verbose/-v: show more verbosity
--version/-V: show version
Examples
This is an example manifest file to create two containers:
[ubuntu]
additional_packages="git vim tmux nodejs"
image=ubuntu:latest
init=false
nvidia=false
pull=true
root=false
replace=true
start_now=false
# You can add comments using this #
[arch] # also inline comments are supported
additional_packages="git vim tmux nodejs"
home=/tmp/home
image=archlinux:latest
init=false
start_now=true
init_hooks="touch /init-normal"
nvidia=true
pre_init_hooks="touch /pre-init"
pull=true
root=false
replace=false
volume="/tmp/test:/run/a /tmp/test:/run/b"
Create
We can bring them up simply using
distrobox assemble create
If the file is called distrobox.ini and is in the same directory you're launching the command from, no further arguments are needed. You can specify a custom path for the file using:
distrobox assemble create --file /my/custom/path.ini
Or even specify a remote file, by using a URL:
distrobox-assemble create --file https://raw.githubusercontent.com/89luca89/dotfiles/master/distrobox.ini
Replace
By default, distrobox assemble will replace a container only if replace=true is specified in the manifest file.
In the manifest example above, the ubuntu container will always be replaced when running distrobox assemble create, while the arch container will not.
To force a replace for all containers in a manifest, use the --replace flag:
distrobox assemble create --replace [--file my/custom/path.ini]
Remove
We can bring down all the containers in a manifest file by simply doing
distrobox assemble rm
Or using a custom path for the ini file
distrobox assemble rm --file my/custom/path.ini
Test
You can always test what distrobox would do by using the --dry-run flag. This will only print the commands distrobox would run, without actually executing them.
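For example, a dry run against the default manifest looks like this (the printed output will be the podman/docker/lilipod commands generated for your setup, so it is not reproduced here):
distrobox assemble create --dry-run --file ./distrobox.ini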
Available options
This is a list of available options with the corresponding type:
Types legend:
- bool: true or false
- string: a single string, for example home="/home/luca-linux/dbox"
- string_list: multiple strings, for example additional_packages="htop vim git". Note that string_list can be declared multiple times to be compounded:
[ubuntu]
image=ubuntu:latest
additional_packages="git vim tmux nodejs"
additional_packages="htop iftop iotop"
additional_packages="zsh fish"
Flag Name | Type | Description |
additional_flags | string_list | Additional flags to pass to the container manager |
additional_packages | string_list | Additional packages to install inside the container |
home | string | Which home directory should the container use |
image | string | Which image should the container use, look here for a list |
init_hooks | string_list | Commands to run inside the container, after the packages setup |
pre_init_hooks | string_list | Commands to run inside the container, before the packages setup |
volume | string_list | Additional volumes to mount inside the containers |
exported_apps | string_list | App names or desktopfile paths to export |
exported_bins | string_list | Binaries to export |
exported_bins_path | string | Optional path where to export binaries (default: $HOME/.local/bin) |
entry | bool | Generate an entry for the container in the app list (default: false) |
start_now | bool | Start the container immediately (default: false) |
init | bool | Specify if this is an initful container (default: false) |
nvidia | bool | Specify if you want to enable NVidia drivers integration (default: false) |
pull | bool | Specify if you want to pull the image every time (default: false) |
root | bool | Specify if the container is rootful (default: false) |
unshare_ipc | bool | Specify if the container should unshare the ipc namespace (default: false) |
unshare_netns | bool | Specify if the container should unshare the network namespace (default: false) |
unshare_process | bool | Specify if the container should unshare the process (pid) namespace (default: false) |
unshare_devsys | bool | Specify if the container should unshare /dev (default: false) |
unshare_all | bool | Specify if the container should unshare all the previous options (default: false) |
For further explanation of each of the options in the list, take a look at the distrobox create usage; each option corresponds to one of the create flags.
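As a rough illustration (the container name example and its values are placeholders), a manifest entry such as:
[example]
image=ubuntu:latest
additional_packages="git vim"
nvidia=true
volume="/tmp/test:/run/a"
is roughly equivalent to running:
distrobox create --name example --image ubuntu:latest --additional-packages "git vim" --nvidia --volume /tmp/test:/run/a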
Advanced example
[tumbleweed_distrobox]
image=registry.opensuse.org/opensuse/distrobox
pull=true
additional_packages="acpi bash-completion findutils iproute iputils sensors inotify-tools unzip"
additional_packages="net-tools nmap openssl procps psmisc rsync man tig tmux tree vim htop xclip yt-dlp"
additional_packages="git git-credential-libsecret"
additional_packages="patterns-devel-base-devel_basis"
additional_packages="ShellCheck ansible-lint clang clang-tools codespell ctags desktop-file-utils gcc golang jq python3"
additional_packages="python3-bashate python3-flake8 python3-mypy python3-pipx python3-pycodestyle python3-pyflakes python3-pylint python3-python-lsp-server python3-rstcheck python3-yapf python3-yamllint rustup shfmt"
additional_packages="kubernetes-client helm"
init_hooks=GOPATH="${HOME}/.local/share/system-go" GOBIN=/usr/local/bin go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest;
init_hooks=GOPATH="${HOME}/.local/share/system-go" GOBIN=/usr/local/bin go install github.com/onsi/ginkgo/v2/ginkgo@latest;
init_hooks=GOPATH="${HOME}/.local/share/system-go" GOBIN=/usr/local/bin go install golang.org/x/tools/cmd/goimports@latest;
init_hooks=GOPATH="${HOME}/.local/share/system-go" GOBIN=/usr/local/bin go install golang.org/x/tools/gopls@latest;
init_hooks=GOPATH="${HOME}/.local/share/system-go" GOBIN=/usr/local/bin go install sigs.k8s.io/kind@latest;
init_hooks=ln -sf /usr/bin/distrobox-host-exec /usr/local/bin/conmon;
init_hooks=ln -sf /usr/bin/distrobox-host-exec /usr/local/bin/crun;
init_hooks=ln -sf /usr/bin/distrobox-host-exec /usr/local/bin/docker;
init_hooks=ln -sf /usr/bin/distrobox-host-exec /usr/local/bin/docker-compose;
init_hooks=ln -sf /usr/bin/distrobox-host-exec /usr/local/bin/flatpak;
init_hooks=ln -sf /usr/bin/distrobox-host-exec /usr/local/bin/podman;
init_hooks=ln -sf /usr/bin/distrobox-host-exec /usr/local/bin/xdg-open;
exported_apps="htop"
exported_bins="/usr/bin/htop /usr/bin/git"
exported_bins_path="~/.local/bin"
Compatibility
This project does not need a dedicated image. It can use any OCI images from docker-hub, quay.io, or any registry of your choice.
Many cloud images are stripped down on purpose to save size and may not include commands such as which, mount, less, or vi. Additional packages can be installed once inside the container. We recommend using your preferred automation tool inside the container if you find yourself repeatedly creating new containers. Maintaining your own custom image is also an option.
The main concern is having basic Linux utilities (mount), basic user management utilities (usermod, passwd), and sudo correctly set up.
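A quick way to sanity-check a candidate image before committing to it is to look for those tools inside it. A minimal sketch, assuming podman and using the alpine image purely as an example:
podman run --rm docker.io/library/alpine:latest sh -c 'for c in mount usermod passwd sudo; do command -v "$c" >/dev/null || echo "missing: $c"; done'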
Supported Container Managers
distrobox can run on either podman, docker, or lilipod (https://github.com/89luca89/lilipod).
It depends on either podman configured in rootless mode, or on docker configured without sudo (follow these instructions: https://docs.docker.com/engine/install/linux-postinstall/).
- Minimum podman version: 2.1.0
- Minimum docker client version: 19.03.15
- Minimum lilipod version: v0.0.1
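You can verify the installed versions with the usual version commands (the exact lilipod invocation may vary by release):
podman --version
docker --version
lilipod version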
Follow the official installation guide here:
Containers Distros
Distrobox guests tested successfully with the following container images:
Distro | Version | Images |
AlmaLinux (Toolbox) | 8 9 | quay.io/toolbx-images/almalinux-toolbox:8 quay.io/toolbx-images/almalinux-toolbox:9 quay.io/toolbx-images/almalinux-toolbox:latest |
Alpine (Toolbox) | 3.16 3.17 3.18 3.19 3.20 edge | quay.io/toolbx-images/alpine-toolbox:3.16 quay.io/toolbx-images/alpine-toolbox:3.17 quay.io/toolbx-images/alpine-toolbox:3.18 quay.io/toolbx-images/alpine-toolbox:3.19 quay.io/toolbx-images/alpine-toolbox:3.20 quay.io/toolbx-images/alpine-toolbox:edge quay.io/toolbx-images/alpine-toolbox:latest |
AmazonLinux (Toolbox) | 2 2022 | quay.io/toolbx-images/amazonlinux-toolbox:2 quay.io/toolbx-images/amazonlinux-toolbox:2023 quay.io/toolbx-images/amazonlinux-toolbox:latest |
Archlinux (Toolbox) | | quay.io/toolbx/arch-toolbox:latest |
Bazzite Arch | | ghcr.io/ublue-os/bazzite-arch:latest ghcr.io/ublue-os/bazzite-arch-gnome:latest |
Centos (Toolbox) | stream8 stream9 | quay.io/toolbx-images/centos-toolbox:stream8 quay.io/toolbx-images/centos-toolbox:stream9 quay.io/toolbx-images/centos-toolbox:latest |
Debian (Toolbox) | 10 11 12 testing unstable | quay.io/toolbx-images/debian-toolbox:10 quay.io/toolbx-images/debian-toolbox:11 quay.io/toolbx-images/debian-toolbox:12 quay.io/toolbx-images/debian-toolbox:testing quay.io/toolbx-images/debian-toolbox:unstable quay.io/toolbx-images/debian-toolbox:latest |
Fedora (Toolbox) | 37 38 39 40 Rawhide | registry.fedoraproject.org/fedora-toolbox:37 registry.fedoraproject.org/fedora-toolbox:38 registry.fedoraproject.org/fedora-toolbox:39 registry.fedoraproject.org/fedora-toolbox:40 registry.fedoraproject.org/fedora-toolbox:rawhide |
openSUSE (Toolbox) | | registry.opensuse.org/opensuse/distrobox:latest |
RedHat (Toolbox) | 8 9 | registry.access.redhat.com/ubi8/toolbox registry.access.redhat.com/ubi9/toolbox |
Rocky Linux (Toolbox) | 8 9 | quay.io/toolbx-images/rockylinux-toolbox:8 quay.io/toolbx-images/rockylinux-toolbox:9 quay.io/toolbx-images/rockylinux-toolbox:latest |
Ubuntu (Toolbox) | 16.04 18.04 20.04 22.04 24.04 | quay.io/toolbx/ubuntu-toolbox:16.04 quay.io/toolbx/ubuntu-toolbox:18.04 quay.io/toolbx/ubuntu-toolbox:20.04 quay.io/toolbx/ubuntu-toolbox:22.04 quay.io/toolbx/ubuntu-toolbox:24.04 quay.io/toolbx/ubuntu-toolbox:latest |
Chainguard Wolfi (Toolbox) | | quay.io/toolbx-images/wolfi-toolbox:latest |
Ublue | bluefin-cli ubuntu-toolbox fedora-toolbox wolfi-toolbox archlinux-distrobox powershell-toolbox | ghcr.io/ublue-os/bluefin-cli ghcr.io/ublue-os/ubuntu-toolbox ghcr.io/ublue-os/fedora-toolbox ghcr.io/ublue-os/wolfi-toolbox ghcr.io/ublue-os/arch-distrobox ghcr.io/ublue-os/powershell-toolbox |
AlmaLinux | 8 8-minimal 9 9-minimal | docker.io/library/almalinux:8 docker.io/library/almalinux:9 |
Alpine Linux | 3.15 3.16 3.17 3.18 3.19 3.20 edge | docker.io/library/alpine:3.15 docker.io/library/alpine:3.16 docker.io/library/alpine:3.17 docker.io/library/alpine:3.18 docker.io/library/alpine:3.19 docker.io/library/alpine:3.20 docker.io/library/alpine:edge docker.io/library/alpine:latest |
AmazonLinux | 1 2 2023 | public.ecr.aws/amazonlinux/amazonlinux:1 public.ecr.aws/amazonlinux/amazonlinux:2 public.ecr.aws/amazonlinux/amazonlinux:2023 |
Archlinux | | docker.io/library/archlinux:latest |
Blackarch | | docker.io/blackarchlinux/blackarch:latest |
CentOS Stream | 8 9 | quay.io/centos/centos:stream8 quay.io/centos/centos:stream9 |
Chainguard Wolfi | | cgr.dev/chainguard/wolfi-base:latest |
ClearLinux | | docker.io/library/clearlinux:latest docker.io/library/clearlinux:base |
Crystal Linux | | registry.gitlab.com/crystal-linux/misc/docker:latest |
Debian | 7 8 9 10 11 12 | docker.io/debian/eol:wheezy docker.io/library/debian:buster docker.io/library/debian:bullseye-backports docker.io/library/debian:bookworm-backports docker.io/library/debian:stable-backports |
Debian | Testing | docker.io/library/debian:testing docker.io/library/debian:testing-backports |
Debian | Unstable | docker.io/library/debian:unstable |
deepin | 20 (apricot) 23 (beige) | docker.io/linuxdeepin/apricot |
Fedora | 36 37 38 39 40 Rawhide | quay.io/fedora/fedora:36 quay.io/fedora/fedora:37 quay.io/fedora/fedora:38 quay.io/fedora/fedora:39 quay.io/fedora/fedora:40 quay.io/fedora/fedora:rawhide |
Gentoo Linux | rolling | docker.io/gentoo/stage3:latest |
KDE neon | Latest | invent-registry.kde.org/neon/docker-images/plasma:latest |
Kali Linux | rolling | docker.io/kalilinux/kali-rolling:latest |
Mint | 21.1 | docker.io/linuxmintd/mint21.1-amd64 |
Neurodebian | nd100 | docker.io/library/neurodebian:nd100 |
openSUSE | Leap | registry.opensuse.org/opensuse/leap:latest |
openSUSE | Tumbleweed | registry.opensuse.org/opensuse/distrobox:latest registry.opensuse.org/opensuse/tumbleweed:latest registry.opensuse.org/opensuse/toolbox:latest |
Oracle Linux | 7 7-slim 8 8-slim 9 9-slim | container-registry.oracle.com/os/oraclelinux:7 container-registry.oracle.com/os/oraclelinux:7-slim container-registry.oracle.com/os/oraclelinux:8 container-registry.oracle.com/os/oraclelinux:8-slim container-registry.oracle.com/os/oraclelinux:9 container-registry.oracle.com/os/oraclelinux:9-slim |
RedHat (UBI) | 7 8 9 | registry.access.redhat.com/ubi7/ubi registry.access.redhat.com/ubi8/ubi registry.access.redhat.com/ubi8/ubi-init registry.access.redhat.com/ubi8/ubi-minimal registry.access.redhat.com/ubi9/ubi registry.access.redhat.com/ubi9/ubi-init registry.access.redhat.com/ubi9/ubi-minimal |
Rocky Linux | 8 8-minimal 9 | quay.io/rockylinux/rockylinux:8 quay.io/rockylinux/rockylinux:8-minimal quay.io/rockylinux/rockylinux:9 quay.io/rockylinux/rockylinux:latest |
Slackware | | docker.io/vbatts/slackware:current |
SteamOS | | ghcr.io/linuxserver/steamos:latest |
Ubuntu | 14.04 16.04 18.04 20.04 22.04 23.04 | docker.io/library/ubuntu:14.04 docker.io/library/ubuntu:16.04 docker.io/library/ubuntu:18.04 docker.io/library/ubuntu:20.04 docker.io/library/ubuntu:22.04 docker.io/library/ubuntu:23.04 |
Vanilla OS | VSO | ghcr.io/vanilla-os/vso:main |
Void Linux | glibc musl | ghcr.io/void-linux/void-glibc-full:latest ghcr.io/void-linux/void-musl-full:latest |
Images marked with Toolbox are tailored images made by the community effort in toolbx-images/images (https://github.com/toolbx-images/images), so they are better suited for desktop use, and first setup will take less time. Note, however, that if you use an image not preconfigured for toolbox, the first distrobox-enter you perform can take a while, as it will download and install the missing dependencies.
A small time tax to pay for the ability to use any type of image. This will not occur after the first time; subsequent enters will be much faster.
NixOS is not a supported container distro, and there are currently no plans to bring support to it. If you are looking for unprivileged NixOS environments, we suggest you look into nix-shell (https://nixos.org/manual/nix/unstable/command-ref/nix-shell.html) or nix portable (https://github.com/DavHau/nix-portable)
New Distro Support
If your distro of choice is not on the list, open an issue requesting support for it; we can work together to check whether it is possible to add support.
Or just try using it anyway: if it works, open an issue and it will be added to the list!
Older Distributions
For older distributions like CentOS 5, CentOS 6, Debian 6, Ubuntu 12.04, compatibility is not assured.
Their libc version is incompatible with kernel releases >=4.11. A workaround is to add the vsyscall=emulate flag to the kernel command line in the host's bootloader.
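On a GRUB-based host, one possible way to add that flag looks like the following sketch (the existing parameters shown are placeholders, and the config-regeneration command and paths vary by distribution):
# /etc/default/grub -- append vsyscall=emulate to your existing parameters
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vsyscall=emulate"
# then regenerate the GRUB config, e.g.:
sudo update-grub                              # Debian/Ubuntu
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora/openSUSE/RHEL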
Keep also in mind that mirrors could be down for such old releases, so you will need to build a custom distrobox image to ensure basic dependencies are met.
GPU Acceleration Support
For Intel and AMD GPUs, support is baked in, as the containers will install their latest available mesa/dri drivers.
For NVidia, you can use the --nvidia flag during create; see the distrobox-create documentation to learn how to use it.
Alternatively, you can use the nvidia-container-toolkit utility to set up the integration independently from the distrobox’s own flag.
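As a hedged sketch of that alternative route with podman's CDI support (the generate command and the nvidia.com/gpu=all device name follow the nvidia-container-toolkit's documented defaults, but check the docs for your version):
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
distrobox create --image ubuntu:22.04 --name ubuntu-nvidia-cdi --additional-flags "--device nvidia.com/gpu=all"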
Name
distrobox create distrobox-create
Description
distrobox-create takes care of creating the container with input name and image. The created container will be tightly integrated with the host, allowing sharing of the HOME directory of the user, external storage, external usb devices and graphical apps (X11/Wayland), and audio.
Synopsis
distrobox create
--image/-i: image to use for the container default: ${container_image_default}
--name/-n: name for the distrobox default: ${container_name_default}
--hostname: hostname for the distrobox default: <container-name>.$(uname -n)
--pull/-p: pull the image even if it exists locally (implies --yes)
--yes/-Y: non-interactive, pull images without asking
--root/-r: launch podman/docker/lilipod with root privileges. Note that if you need root this is the preferred way over "sudo distrobox" (note: if using a program other than 'sudo' for root privileges is necessary, specify it through the DBX_SUDO_PROGRAM env variable, or 'distrobox_sudo_program' config variable)
--clone/-c: name of the distrobox container to use as base for a new container; this is useful to either rename an existing distrobox or have multiple copies of the same environment
--home/-H: select a custom HOME directory for the container. Useful to avoid littering the host's home with temp files
--volume: additional volumes to add to the container
--additional-flags/-a: additional flags to pass to the container manager command
--additional-packages/-ap: additional packages to install during initial container setup
--init-hooks: additional commands to execute at the end of container initialization
--pre-init-hooks: additional commands to execute at the start of container initialization
--init/-I: use init system (like systemd) inside the container. This will make the host's processes not visible from within the container (assumes --unshare-process); may require additional packages depending on the container image: https://github.com/89luca89/distrobox/blob/main/docs/useful_tips.md#using-init-system-inside-a-distrobox
--nvidia: try to integrate host's nVidia drivers in the guest
--unshare-devsys: do not share host devices and sysfs dirs from host
--unshare-groups: do not forward user's additional groups into the container
--unshare-ipc: do not share ipc namespace with host
--unshare-netns: do not share the net namespace with host
--unshare-process: do not share process namespace with host
--unshare-all: activate all the unshare flags above
--compatibility/-C: show list of compatible images
--help/-h: show this message
--no-entry: do not generate a container entry in the application list
--dry-run/-d: only print the container manager command generated
--verbose/-v: show more verbosity
--version/-V: show version
--absolutely-disable-root-password-i-am-really-positively-sure: ⚠️ ⚠️ when setting up a rootful distrobox, this will skip user password setup, leaving it blank. ⚠️ ⚠️
Compatibility
For a list of compatible images and container managers, please consult the man pages:
man distrobox
man distrobox-compatibility
or consult the documentation page on: https://github.com/89luca89/distrobox/blob/main/docs/compatibility.md#containers-distros
Examples
Create a distrobox with the alpine image, called my-alpine-container
distrobox create --image alpine my-alpine-container
Create a distrobox from fedora-toolbox:35 image
distrobox create --image registry.fedoraproject.org/fedora-toolbox:35 --name fedora-toolbox-35
Clone an existing distrobox container
distrobox create --clone fedora-35 --name fedora-35-copy
Always pull for the new image when creating a distrobox
distrobox create --pull --image centos:stream9 --home ~/distrobox/centos9
Add additional environment variables to the container
distrobox create --image fedora:35 --name test --additional-flags "--env MY_VAR=value"
Add additional volumes to the container
distrobox create --image fedora:35 --name test --volume /opt/my-dir:/usr/local/my-dir:rw --additional-flags "--pids-limit -1"
Add additional packages to the container
distrobox create --image alpine:latest --name test2 --additional-packages "git tmux vim"
Use init-hooks to perform an action during container startup
distrobox create --image alpine:latest --name test --init-hooks "touch /var/tmp/test1 && touch /var/tmp/test2"
Use pre-init-hooks to perform an action at the beginning of the container startup (before any package manager starts)
distrobox create -i docker.io/almalinux/8-init --init --name test --pre-init-hooks "dnf config-manager --enable powertools && dnf -y install epel-release"
Use init to create a Systemd container (acts similar to an LXC):
distrobox create -i ubuntu:latest --name test --additional-packages "systemd libpam-systemd pipewire-audio-client-libraries" --init
Use init to create an OpenRC container (acts similar to an LXC):
distrobox create -i alpine:latest --name test --additional-packages "openrc" --init
Use host’s NVidia drivers integration
distrobox create --image ubuntu:22.04 --name ubuntu-nvidia --nvidia
Do not use host’s IP inside the container:
distrobox create --image ubuntu:latest --name test --unshare-netns
Create a more isolated container, where only $HOME, basic sockets, and the host's FS (in /run/host) are shared:
distrobox create --name unshared-test --unshare-all
Create a more isolated container with its own init system; this will act very similarly to a full LXC container:
distrobox create --name unshared-init-test --unshare-all --init --image fedora:latest
Use environment variables to specify container name, image and container manager:
DBX_CONTAINER_MANAGER="docker" DBX_NON_INTERACTIVE=1 DBX_CONTAINER_NAME=test-alpine DBX_CONTAINER_IMAGE=alpine distrobox-create
Environment Variables
DBX_CONTAINER_ALWAYS_PULL DBX_CONTAINER_CUSTOM_HOME DBX_CONTAINER_HOME_PREFIX DBX_CONTAINER_IMAGE DBX_CONTAINER_MANAGER DBX_CONTAINER_NAME DBX_CONTAINER_HOSTNAME DBX_NON_INTERACTIVE DBX_SUDO_PROGRAM
DBX_CONTAINER_HOME_PREFIX defines where containers’ home directories will be located. If you define it as ~/dbx then all future containers’ home directories will be ~/dbx/$container_name
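For example (the container name work and the ~/dbx prefix are just placeholders):
DBX_CONTAINER_HOME_PREFIX=~/dbx distrobox create --image fedora:40 --name work
# the container's home directory will be ~/dbx/work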
Extra
The --additional-flags or -a option is useful to modify defaults during container creation. For example:
distrobox create -i docker.io/library/archlinux -n dev-arch
podman container inspect dev-arch | jq '.[0].HostConfig.PidsLimit'
2048
distrobox rm -f dev-arch
distrobox create -i docker.io/library/archlinux -n dev-arch --volume $CBL_TC:/tc --additional-flags "--pids-limit -1"
podman container inspect dev-arch | jq '.[0].HostConfig.PidsLimit'
0
Additional volumes can be specified using the --volume flag. This flag follows the same standard as docker and podman to specify the mount point, so --volume SOURCE_PATH:DEST_PATH:MODE.
distrobox create --image docker.io/library/archlinux --name dev-arch --volume /usr/share/:/var/test:ro
During container creation, it is possible to specify (using --additional-flags) some environment variables that will persist in the container and be independent from your environment:
distrobox create --image fedora:35 --name test --additional-flags "--env MY_VAR=value"
The --init-hooks option is useful to add commands to the entrypoint (init) of the container. This can be useful to create containers with a set of programs already installed, or to add users and groups.
distrobox create --image fedora:35 --name test --init-hooks "dnf groupinstall -y \"C Development Tools and Libraries\""
The --init option is useful to create a container that will use its own separate init system within. For example, using:
distrobox create -i docker.io/almalinux/8-init --init --name test
distrobox create -i docker.io/library/debian --additional-packages "systemd" --init --name test-debian
Inside the container we will be able to use normal systemd units:
~$ distrobox enter test
user@test:~$ sudo systemctl enable --now sshd
user@test:~$ sudo systemctl status sshd
● sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2022-01-28 22:54:50 CET; 17s ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 291 (sshd)
Note that enabling --init will disable the host's process integration: from within the container you will not be able to see and manage the host's processes. This is needed because /sbin/init must be PID 1.
If you want to use an image that does not come pre-configured for init, you'll need to add the required packages yourself:
distrobox create -i alpine:latest --init --additional-packages "openrc" -n test
distrobox create -i debian:stable --init --additional-packages "systemd libpam-systemd pipewire-audio-client-libraries" -n test
distrobox create -i ubuntu:22.04 --init --additional-packages "systemd libpam-systemd pipewire-audio-client-libraries" -n test
distrobox create -i archlinux:latest --init --additional-packages "systemd" -n test
distrobox create -i registry.opensuse.org/opensuse/tumbleweed:latest --init --additional-packages "systemd" -n test
distrobox create -i registry.fedoraproject.org/fedora:39 --init --additional-packages "systemd" -n test
The --init flag is useful to create system containers, where the container acts more like a full VM than an application container. Inside you'll have a separate init, user session, daemons, and so on.
The --home flag lets you specify a custom HOME for the container. Note that this will NOT prevent the host's home directory from being mounted, but it will ensure that configs and dotfiles do not litter it.
The --root flag will let you create a container with real root privileges. At first enter, the user will be required to set up a password. This is done in order not to enable passwordless sudo/su in a rootful container; it is needed because in this mode, root inside the container is also root outside the container!
The --absolutely-disable-root-password-i-am-really-positively-sure flag will skip user password setup, leaving it blank. This is genuinely dangerous and you really, positively should NOT enable it.
From version 1.4.0 of distrobox, when you create a new container, it will also generate an entry in the applications list.
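If you prefer not to have that entry, the --no-entry flag listed in the synopsis skips it; for example:
distrobox create --image alpine:latest --name no-entry-test --no-entry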
NVidia integration
If your host has an NVidia GPU with the proprietary drivers installed, you can integrate them with the guests by using the --nvidia flag:
distrobox create --nvidia --image ubuntu:latest --name ubuntu-nvidia
Be aware that this is not compatible with non-glibc systems and needs somewhat newer distributions to work.
This feature was tested working on:
- Almalinux
- Archlinux
- Centos 7 and newer
- Clearlinux
- Debian 10 and newer
- OpenSUSE Leap
- OpenSUSE Tumbleweed
- Rockylinux
- Ubuntu 18.04 and newer
- Void Linux (glibc)
Name
distrobox enter distrobox-enter
Description
distrobox-enter takes care of entering the container with the name specified. The default command executed is your SHELL, but you can specify different shells or entire commands to execute. If using it inside a script, an application, or a service, you can use the --headless mode to disable tty and interactivity.
Synopsis
distrobox enter
--name/-n: name for the distrobox default: my-distrobox
--/-e: end arguments execute the rest as command to execute at login default: default ${USER}'s shell
--no-tty/-T: do not instantiate a tty
--no-workdir/-nw: always start the container from container's home directory
--additional-flags/-a: additional flags to pass to the container manager command
--help/-h: show this message
--root/-r: launch podman/docker/lilipod with root privileges. Note that if you need root this is the preferred way over "sudo distrobox" (note: if using a program other than 'sudo' for root privileges is necessary, specify it through the DBX_SUDO_PROGRAM env variable, or 'distrobox_sudo_program' config variable)
--dry-run/-d: only print the container manager command generated
--verbose/-v: show more verbosity
--version/-V: show version
Examples
Enter a distrobox named “example”
distrobox-enter example
Enter a distrobox specifying a command
distrobox-enter --name fedora-toolbox-35 -- bash -l
distrobox-enter my-alpine-container -- sh -l
Use additional podman/docker/lilipod flags while entering a distrobox
distrobox-enter --additional-flags "--preserve-fds" --name test -- bash -l
Specify additional environment variables while entering a distrobox
distrobox-enter --additional-flags "--env MY_VAR=value" --name test -- bash -l
MY_VAR=value distrobox-enter --additional-flags "--preserve-fds" --name test -- bash -l
You can also use environment variables to specify container manager and container name:
DBX_CONTAINER_MANAGER="docker" DBX_CONTAINER_NAME=test-alpine distrobox-enter
Environment Variables
DBX_CONTAINER_NAME DBX_CONTAINER_MANAGER DBX_SKIP_WORKDIR DBX_SUDO_PROGRAM
Extra
This command is used to enter the distrobox itself. Personally, I just create multiple profiles in my gnome-terminal to have multiple distros accessible.
The --additional-flags or -a option is useful to modify the default command when executing in the container. For example:
distrobox enter -n dev-arch --additional-flags "--env my_var=test" -- printenv &| grep my_var
my_var=test
This is also possible using normal environment variables:
my_var=test distrobox enter -n dev-arch --additional-flags -- printenv &| grep my_var
my_var=test
If you'd like to enter a rootful container having distrobox use a program other than `sudo' to run podman/docker/lilipod as root, such as `pkexec' or `doas', you may specify it with the DBX_SUDO_PROGRAM environment variable. For example, to use `doas' to enter a rootful container:
DBX_SUDO_PROGRAM="doas" distrobox enter -n container --root
Additionally, in one of the config file paths that distrobox supports, such as ~/.distroboxrc, you can also append the line distrobox_sudo_program="doas" (for example) to always run distrobox commands involving rootful containers using `doas'.
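For example, a one-liner to persist that setting (assuming ~/.distroboxrc is the config file you use):
echo 'distrobox_sudo_program="doas"' >> ~/.distroboxrc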
Name
distrobox ephemeral distrobox-ephemeral
Description
distrobox-ephemeral creates a temporary distrobox that is automatically destroyed when the command is terminated.
Synopsis
distrobox ephemeral
--root/-r: launch podman/docker/lilipod with root privileges. Note that if you need root this is the preferred way over "sudo distrobox" (note: if using a program other than 'sudo' for root privileges is necessary, specify it through the DBX_SUDO_PROGRAM env variable, or 'distrobox_sudo_program' config variable)
--verbose/-v: show more verbosity
--help/-h: show this message
--/-e: end arguments execute the rest as command to execute at login default: default ${USER}'s shell
--version/-V: show version
Examples
distrobox-ephemeral --image alpine:latest -- cat /etc/os-release
distrobox-ephemeral --root --verbose --image alpine:latest --volume /opt:/opt
You can also use flags from distrobox-create to customize the ephemeral container to run.
See Also
distrobox-create --help
man distrobox-create
Environment Variables
distrobox-ephemeral calls distrobox-create; see distrobox-create(1) for a list of supported environment variables to use.
Name
distrobox-export
Description
Application and binary exporting
distrobox-export takes care of exporting an app or a binary from the container to the host.
The exported app will be easily available in your normal launcher and it will automatically be launched from the container it is exported from.
Synopsis
distrobox-export
--app/-a: name of the application to export or absolute path to desktopfile to export
--bin/-b: absolute path of the binary to export
--list-apps: list applications exported from this container
--list-binaries: list binaries exported from this container, use -ep to specify custom paths to search
--delete/-d: delete exported application or binary
--export-label/-el: label to add to exported application name. Use "none" to disable. Defaults to (on \$container_name)
--export-path/-ep: path where to export the binary
--extra-flags/-ef: extra flags to add to the command
--enter-flags/-nf: flags to add to distrobox-enter
--sudo/-S: specify if the exported item should be run as sudo
--help/-h: show this message
--verbose/-v: show more verbosity
--version/-V: show version
You may want to install graphical applications or CLI tools in your distrobox. Using distrobox-export from inside the container will let you use them from the host itself.
Examples
distrobox-export --app mpv [--extra-flags "flags"] [--delete] [--sudo]
distrobox-export --bin /path/to/bin [--export-path ~/.local/bin] [--extra-flags "flags"] [--delete] [--sudo]
App export example
distrobox-export --app abiword
This tool will simply copy the original .desktop files along with the needed icons, add the prefix /usr/local/bin/distrobox-enter -n distrobox_name -e ... to the commands to run, and save them in your home so they can be used directly from the host as normal apps.
distrobox-export --app /opt/application/my-app.desktop
This will skip searching for the desktopfile in canonical paths, and just use the provided file path.
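As a rough illustration (the container and app names are placeholders, and the exact arguments depend on the original desktop file), the Exec line of an exported desktop file ends up looking something like:
Exec=/usr/local/bin/distrobox-enter -n my-container -e abiword %U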
Binary export example
distrobox-export --bin /usr/bin/code --extra-flags "--foreground" --export-path $HOME/.local/bin
In the case of exporting binaries, you will have to specify where to export them (--export-path), and the tool will create a small wrapper script that runs distrobox-enter -e from the host to launch the desired binary. This can be handy together with direnv, to have different versions of the same binary depending on your env or project.
The exported binaries will be placed in the --export-path of choice as a wrapper script that acts naturally both on the host and in the container.
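Conceptually, the generated wrapper behaves like the following minimal sketch when invoked on the host (an illustration only, not the literal script distrobox-export writes; container and binary names are placeholders):
#!/bin/sh
# forward the call into the container and pass along any arguments
exec distrobox-enter -n my-container -- /usr/bin/code "$@"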
Additional flags
You can specify additional flags to add to the command. For example, if you want to export an electron app, you could add the --foreground flag to the command:
distrobox-export --app atom --extra-flags "--foreground"
distrobox-export --bin /usr/bin/vim --export-path ~/.local/bin --extra-flags "-p"
This works for both binaries and apps. Extra flags are only used when the exported app or binary is run from the host; using them inside the container will not include them.
Unexport
The --delete option will un-export an app or binary:
distrobox-export --app atom --delete
distrobox-export --bin /usr/bin/vim --export-path ~/.local/bin --delete
Run as root in the container
The --sudo option will launch the exported item as root inside the distrobox.
Notes
Note that you can use --app OR --bin, but not both together.
[IMAGE: app-export (https://user-images.githubusercontent.com/598882/144294795-c7785620-bf68-4d1b-b251-1e1f0a32a08d.png)]
NOTE: some electron apps such as vscode and atom need additional flags to work from inside the container, use the --extra-flags
option to provide a series of flags, for example:
distrobox-export --app atom --extra-flags "--foreground"
Name
distrobox generate-entry
Description
distrobox-generate-entry will create a desktop icon for one of the available distroboxes. This will then be deleted when you remove the matching distrobox.
Synopsis
distrobox generate-entry
--help/-h: show this message
--all/-a: perform for all distroboxes
--delete/-d: delete the entry
--icon/-i: specify a custom icon [/path/to/icon] (default auto)
--root/-r: perform on rootful distroboxes
--verbose/-v: show more verbosity
--version/-V: show version
Examples
Generate an entry for a container
distrobox generate-entry my-container-name
Specify a custom icon for the entry
distrobox generate-entry my-container-name --icon /path/to/icon.png
Generate an entry for all distroboxes
distrobox generate-entry --all
Delete an entry
distrobox generate-entry container-name --delete
Name
distrobox-host-exec
Description
distrobox-host-exec lets one execute commands on the host, while inside of a container.
Under the hood, distrobox-host-exec uses host-spawn, a project that lets us execute commands back on the host. If the tool is not found, the user will be prompted to install it.
Synopsis
Just pass to “distrobox-host-exec” any command and all its arguments, if any.
--help/-h: show this message
--verbose/-v: show more verbosity
--version/-V: show version
--yes/-Y: automatically answer yes to the prompt: host-spawn will be installed on the guest system if host-spawn is not detected. This behaviour is the default when running in a non-interactive shell.
If no command is provided, it will execute “$SHELL”.
Alternatively, use symlinks to make distrobox-host-exec execute as that command:
~$: ln -s /usr/bin/distrobox-host-exec /usr/local/bin/podman
~$: ls -l /usr/local/bin/podman
lrwxrwxrwx. 1 root root 51 Jul 11 19:26 /usr/local/bin/podman -> /usr/bin/distrobox-host-exec
~$: podman version
...this is executed on host...
Examples
distrobox-host-exec ls
distrobox-host-exec bash -l
distrobox-host-exec flatpak run org.mozilla.firefox
distrobox-host-exec podman ps -a
Name
distrobox-init
Description
Init the distrobox (not to be launched manually)
distrobox-init is the entrypoint of a created distrobox. Note that this HAS to run from inside a distrobox; it will not work if you run it from your host.
This is not intended to be used manually, but instead used by distrobox-create to set up the container’s entrypoint.
distrobox-init will take care of installing missing dependencies (e.g. sudo), setting up the user and groups, and mounting directories from the host to ensure tight integration.
Synopsis
distrobox-init
--name/-n: user name
--user/-u: uid of the user
--group/-g: gid of the user
--home/-d: path/to/home of the user
--help/-h: show this message
--additional-packages: packages to install in addition
--init/-I: whether to use or not init
--pre-init-hooks: commands to execute prior to init
--nvidia: try to integrate host's nVidia drivers in the guest
--upgrade/-U: run init in upgrade mode
--verbose/-v: show more verbosity
--version/-V: show version
--: end arguments execute the rest as command to execute during init
Examples
distrobox-init --name test-user --user 1000 --group 1000 --home /home/test-user
distrobox-init --upgrade
Name
distrobox list distrobox-list
Description
distrobox-list lists available distroboxes. It detects them and lists them separately from the rest of normal containers.
Synopsis
distrobox list
--help/-h: show this message
--no-color: disable color formatting
--root/-r: launch podman/docker/lilipod with root privileges. Note that if you need root this is the preferred way over "sudo distrobox" (note: if using a program other than 'sudo' for root privileges is necessary, specify it through the DBX_SUDO_PROGRAM env variable, or 'distrobox_sudo_program' config variable)
--verbose/-v: show more verbosity
--version/-V: show version
Examples
distrobox-list
You can also use environment variables to specify container manager
DBX_CONTAINER_MANAGER="docker" distrobox-list
Environment Variables
DBX_CONTAINER_MANAGER DBX_SUDO_PROGRAM
[IMAGE: image (https://user-images.githubusercontent.com/598882/147831082-24b5bc2e-b47e-49ac-9b1a-a209478c9705.png)]
Name
distrobox rm distrobox-rm
Description
distrobox-rm deletes one of the available distroboxes.
Synopsis
distrobox rm
--all/-a: delete all distroboxes
--force/-f: force deletion
--rm-home: remove the mounted home if it differs from the host user's one
--root/-r: launch podman/docker/lilipod with root privileges. Note that if you need root this is the preferred way over "sudo distrobox" (note: if using a program other than 'sudo' for root privileges is necessary, specify it through the DBX_SUDO_PROGRAM env variable, or 'distrobox_sudo_program' config variable)
--help/-h: show this message
--verbose/-v: show more verbosity
--version/-V: show version
Examples
distrobox-rm container-name [--force] [--all]
You can also use environment variables to specify container manager and name:
DBX_CONTAINER_MANAGER="docker" DBX_CONTAINER_NAME=test-alpine distrobox-rm
Environment Variables
DBX_CONTAINER_MANAGER DBX_CONTAINER_NAME DBX_NON_INTERACTIVE DBX_SUDO_PROGRAM
Name
distrobox stop distrobox-stop
Description
distrobox-stop stops a running distrobox.
Distroboxes are left running even after you exit them, so that subsequent enters are really quick. This is how they can be stopped.
Synopsis
distrobox stop
--all/-a: stop all distroboxes
--yes/-Y: non-interactive, stop without asking
--help/-h: show this message
--root/-r: launch podman/docker/lilipod with root privileges. Note that if you need root this is the preferred way over "sudo distrobox" (note: if using a program other than 'sudo' for root privileges is necessary, specify it through the DBX_SUDO_PROGRAM env variable, or 'distrobox_sudo_program' config variable)
--verbose/-v: show more verbosity
--version/-V: show version
Examples
distrobox-stop container-name1 container-name2
distrobox-stop container-name
distrobox-stop --all
You can also use environment variables to specify container manager and name:
DBX_CONTAINER_MANAGER="docker" DBX_CONTAINER_NAME=test-alpine distrobox-stop
Environment Variables
DBX_CONTAINER_MANAGER DBX_CONTAINER_NAME DBX_NON_INTERACTIVE DBX_SUDO_PROGRAM
Name
distrobox-upgrade
Description
distrobox-upgrade will enter the specified list of containers and will perform an upgrade using the container’s package manager.
Synopsis
distrobox upgrade
--help/-h: show this message
--all/-a: perform for all distroboxes
--running: perform only for running distroboxes
--root/-r: launch podman/docker/lilipod with root privileges. Note that if you need root this is the preferred way over "sudo distrobox" (note: if using a program other than 'sudo' for root privileges is necessary, specify it through the DBX_SUDO_PROGRAM env variable, or 'distrobox_sudo_program' config variable)
--verbose/-v: show more verbosity
--version/-V: show version
Examples
Upgrade all distroboxes
distrobox-upgrade --all
Upgrade all running distroboxes
distrobox-upgrade --all --running
Upgrade a specific distrobox
distrobox-upgrade alpine-linux
Upgrade a list of distroboxes
distrobox-upgrade alpine-linux ubuntu22 my-distrobox123
Automatically upgrade all distroboxes
You can create a systemd service to run distrobox-upgrade automatically; this example shows how to run it daily:
~/.config/systemd/user/distrobox-upgrade.service
[Unit]
Description=distrobox-upgrade Automatic Update

[Service]
Type=simple
ExecStart=distrobox-upgrade --all
StandardOutput=null
~/.config/systemd/user/distrobox-upgrade.timer
[Unit]
Description=distrobox-upgrade Automatic Update Trigger

[Timer]
OnBootSec=1h
OnUnitInactiveSec=1d

[Install]
WantedBy=timers.target
Then simply do a systemctl --user daemon-reload && systemctl --user enable --now distrobox-upgrade.timer