
10.2 Executing Commands in Running Containers

Why Execute Commands Inside Running Containers

When a container is already running, you will often need to look inside it. You may want to check configuration files, test a command, inspect logs that are only available in the filesystem, or run a troubleshooting tool that exists only in the container image. Executing commands in a running container lets you interact with the container from the inside without stopping or rebuilding it.

This is a powerful way to debug, but also something to use with care, because anything you do inside the container this way might not be reflected in your Dockerfiles or configuration.

Important: Commands you run manually inside a running container are usually temporary and will be lost when the container stops, unless you intentionally persist data with volumes.

The `docker exec` Command

The central tool for running a command in a running container is docker exec. It starts a new process inside an existing container. The container must already be running. You do not use docker exec to start a new container from an image, only to interact with one that is already alive.

The basic pattern is:

docker exec [options] <container> <command> [args...]

The <container> part can be the container name or its ID. The <command> is any program that exists inside that container. For example, you can list a directory in a running container with:

docker exec my-container ls /

This runs ls / inside my-container. The container continues to run the rest of its processes as normal while this happens.

If you use the container ID, you can shorten it as long as it is unambiguous. Using a friendly container name is usually easier for humans.

Rule: docker exec only works on running containers. If the container is stopped, you must start it before you can execute commands inside it.
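A quick way to check whether a container is actually running before you exec into it is to inspect its state. A minimal sketch, assuming a container named my-container:

```shell
# Print "true" if the container is running, "false" otherwise.
docker inspect -f '{{.State.Running}}' my-container

# If it is stopped, start it first; then exec works again.
docker start my-container
docker exec my-container ls /
```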

Interactive Shells with `-it`

For debugging, you often want an interactive shell inside the container. This gives you something similar to a normal terminal session, where you can type commands and see their output immediately.

To get an interactive shell, you combine two common options with docker exec:

-i tells Docker to keep STDIN open, which means you can type into the session.

-t allocates a pseudo TTY, which gives you a more natural terminal environment with features like line editing and colors.

Together they are usually written as -it. For example, on many Linux or Unix-based images you can run:

docker exec -it my-container /bin/bash

or, if bash is not available,

docker exec -it my-container /bin/sh

This command gives you a shell prompt inside the container. From there, you can navigate directories, view configuration files, run utilities that are installed inside the image, and inspect the environment.

When you exit the shell, the container’s main process continues to run. (The case where exiting a shell stops the container applies only when the shell is the container’s main process, started with docker run rather than docker exec.)
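Since you cannot always know in advance which shells an image ships with, a common convenience is to try bash and fall back to sh. A sketch, again assuming a container named my-container:

```shell
# Try bash first; if the binary is missing inside the image,
# docker exec fails and the || falls back to the POSIX shell.
docker exec -it my-container /bin/bash || docker exec -it my-container /bin/sh
```

Note that the fallback also triggers if the bash session itself exits with a nonzero status, so this is a convenience for quick debugging rather than a robust script.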

Running One-Off Commands

Sometimes you do not want an interactive shell, just a single command. docker exec also handles this simple use case well.

For example, if you want to see the contents of a configuration file:

docker exec my-container cat /etc/config.conf

The output appears in your terminal and, once the command finishes, the extra process inside the container exits. The container itself keeps running.

You can also combine docker exec with debugging tools that are already available in the container. For example, in a container with curl installed, you can test an internal HTTP endpoint:

docker exec my-container curl http://localhost:8080/health

This is a common way to confirm that a service inside the container is listening as expected.
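Because docker exec returns the exit code of the command it ran, one-off commands are easy to script. A sketch of a manual health check, built on the example above; curl must exist inside the image:

```shell
#!/bin/sh
# Probe the service from inside the container; curl's -f flag
# makes it return a nonzero exit code on HTTP errors (4xx/5xx).
docker exec my-container curl -fsS http://localhost:8080/health > /dev/null
status=$?

if [ "$status" -eq 0 ]; then
    echo "service is healthy"
else
    echo "health check failed with exit code $status"
fi
```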

Choosing Between Shells and Direct Commands

Both patterns, an interactive shell and a one-off command, are useful in different situations.

An interactive shell is helpful when you do not know exactly what you need to inspect yet, and you want to experiment, browse, or run several commands in a row. For example, you may inspect directory contents, try a quick edit, then run a test command, all from the same prompt.

A one-off command is cleaner when you want to script behavior or just run a single diagnostic action. It is also easier to copy and paste into documentation or share with other team members.

Best practice: Prefer one-off docker exec commands when you want repeatable diagnostics, and document them. Use interactive shells for exploration, then turn useful steps into documented commands or scripts.
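Turning exploration into repeatable diagnostics can be as simple as collecting the useful one-off commands into a script. A hypothetical sketch; the container name, paths, and tools (ps, df, ls) are placeholders and must exist inside the image:

```shell
#!/bin/sh
# Repeatable diagnostics for one container; adjust name and paths.
CONTAINER=my-container

echo "== processes =="
docker exec "$CONTAINER" ps aux

echo "== disk usage =="
docker exec "$CONTAINER" df -h

echo "== app directory =="
docker exec "$CONTAINER" ls -l /app
```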

Environment and Working Directory

Commands executed with docker exec run inside the container’s environment. They see the same environment variables that the container process has, and they see the filesystem from the container’s perspective, including any mounted volumes.

By default, an exec command starts in the container’s configured working directory: the WORKDIR set in the image, or the root directory / if none was set. You can change the working directory for a specific docker exec command with the -w option:

docker exec -w /app my-container ls

This runs ls with /app as the current directory. This is useful when an application expects to be run from a particular path, for example to pick up configuration files relative to that directory.
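The effect of -w is easy to confirm with pwd. Assuming my-container again, and noting that /app must actually exist inside the container or the exec fails:

```shell
docker exec my-container pwd          # the container's default working directory
docker exec -w /app my-container pwd  # prints /app
```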

User and Permissions Inside the Container

Sometimes a command fails inside a container because of permissions. The container might be configured to run as a non-root user, and that user might not have access to everything you try to inspect.

By default, docker exec runs the command as the same user as the container’s main process. If the main process runs as a non-root user, your command will too. You can override the user for a specific exec command with the -u option.

For example, to run a command as root inside the container:

docker exec -u 0 my-container id

Here, 0 refers to the root user ID. You can also specify a username if it is defined in the container:

docker exec -u appuser my-container whoami

Use this carefully. If the container intentionally runs as a non-privileged user for security reasons, switching to root for debugging should be a deliberate and temporary choice.

Security note: Avoid routinely using docker exec with root inside containers. Use the least privilege necessary to debug the issue you are facing.

Multiple `exec` Sessions and Their Lifecycle

You can run several docker exec commands at the same time against one container. Each exec command starts a separate process inside the container. These processes are independent from each other, but they share the same container environment and filesystem.

Each exec process lives only as long as its command runs. When you exit an interactive shell, or when a one off command finishes, that process is gone. The container itself continues running, as long as its main process is still alive.

If the container stops while you are inside an exec session, your session ends. In that case, trying to run docker exec again will fail until the container is started again.
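docker exec also accepts a -d (detach) option, which starts the process in the background and returns immediately. This is useful for longer-running diagnostics you do not want to watch. A sketch; tcpdump is only an example and must exist inside the image:

```shell
# Start a background packet capture inside the container and
# return to your prompt immediately; the capture keeps running.
docker exec -d my-container tcpdump -i eth0 -w /tmp/capture.pcap
```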

Debugging Patterns with `docker exec`

Executing commands inside running containers is central to many debugging techniques. You might inspect configuration, run health checks manually, or gather additional information that logs alone do not provide.

For example, if a web application inside the container cannot connect to a database, you can open an interactive shell and run network tools from inside the container to confirm connectivity. Or if an application reports a missing file, you can check its presence directly in the expected location.

Another common pattern is to verify environment variables as seen by the process inside the container. You can use:

docker exec my-container env

to print the environment. This can help you see if configuration values passed from the host are actually visible inside the container.
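When checking a single variable, be careful with shell quoting: an unquoted variable expands in your host shell before docker even runs. A sketch, assuming a hypothetical variable named DB_HOST:

```shell
# Wrong: $DB_HOST expands in YOUR shell, usually to an empty string.
docker exec my-container echo $DB_HOST

# Right: single quotes defer expansion to the shell inside the container.
docker exec my-container sh -c 'echo "$DB_HOST"'
```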

While these patterns are powerful, remember they are snapshots of the current container state. They do not automatically become part of your Docker configuration. If your debugging reveals that a file needs to be in a different place, or that a package is missing, you should update your Dockerfile or compose configuration so that future containers start correctly, instead of repeating manual changes with docker exec.

Rule: Treat docker exec as a diagnostic and exploration tool, not as your primary way to configure containers. Permanent changes should come from rebuilding images or adjusting container configuration.

When `docker exec` Is Not Enough

There are cases where simply running commands in a container is not enough. For instance, if the container fails very quickly during startup, you might not have time to exec into it. In such cases, you have to rely on other debugging tools that work with logs, configuration, or image inspection.

Also, some minimal images do not include diagnostic tools or shells at all. If you try to run /bin/bash and it does not exist, that might be because the image is designed to be very small. In that situation, you may need to use whatever tools are available inside the image, or create a separate debugging image with additional tools for investigation.
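One way around a toolless image is to run a second, throwaway container that shares the first one’s namespaces, so its tools see the same network and processes. A sketch using the community nicolaka/netshoot image; the image choice is an assumption, and any image carrying the tools you need works the same way:

```shell
# Attach a temporary debugging container to my-container's network
# and process namespaces, then open a shell with netshoot's tools.
docker run --rm -it \
  --network container:my-container \
  --pid container:my-container \
  nicolaka/netshoot
```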

Even in these limited cases, the idea behind this chapter remains useful. Interacting from inside the container, when possible, gives you a closer view of what the application is actually experiencing.
