10.1 Viewing Logs

Why Container Logs Matter

Logs are the primary way to see what is happening inside a running container without attaching a debugger or a shell. When something fails, starts slowly, or behaves strangely, logs usually give the first clue. Docker captures the standard output and standard error streams of your container processes and lets you view them from the host.

Understanding how to access and interpret these logs is essential before you try more advanced debugging tools.

How Docker Captures Logs

When a container runs, the main process inside it writes messages to two standard streams: stdout and stderr. Docker attaches to these streams by default and stores what they produce according to a chosen logging driver.

For everyday use, the default driver (json-file) stores each log line as a JSON record in a file on the host. You rarely need to know the exact file path at this stage, because you interact with logs through Docker commands rather than reading those files directly.

The important idea is that what your application prints to stdout and stderr becomes visible through Docker logging commands.
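
If you want to confirm which driver is in use, you can ask Docker directly. On a default installation this typically reports json-file:

$ docker info --format '{{.LoggingDriver}}'
$ docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container>

The first command shows the daemon-wide default, the second shows the driver actually attached to a specific container.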

Always write application logs to stdout and stderr inside containers. Avoid writing custom log files to arbitrary paths if you want Docker's logging tools to pick the messages up easily.

Viewing Logs for a Single Container

The core command for viewing logs is:

$ docker logs <container>

You can use either the container name or its ID. If you start a container without naming it explicitly, Docker assigns a random name. You can look the name up with docker ps, then reuse it with docker logs.

When you run the logs command without extra options, Docker prints everything captured so far for that container and then exits, much like dumping a file with cat.
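
As a quick, self-contained sketch (the container name demo and the echoed message are just placeholders for this example):

$ docker run -d --name demo alpine sh -c 'echo "service starting"; sleep 300'
$ docker logs demo

The second command prints the echoed line, because everything the main process writes to stdout ends up in the log.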

Because logs are linked to a specific container instance, if you remove the container, you also lose easy access to its logs through the logs command.

Following Logs in Real Time

For live debugging, you often want to watch logs as they are generated. This is useful when you start a container and want to confirm that it initializes correctly or when you try to reproduce a problem.

To stream new log lines continuously, use the follow flag:

$ docker logs -f <container>

This keeps the terminal open, showing new entries as the container writes them. It behaves much like tail -f on a file, but here it works at the Docker level.

To stop following logs, interrupt the command with Ctrl+C (your terminal's standard interrupt key).

Limiting Log Output

Containers that run for a long time can produce a very large amount of log data. Showing everything at once can be overwhelming and slow. Docker provides options to limit what you see when you run the logs command.

Two common options are the time-based --since filter and the line-based --tail filter.

The --since option shows only entries newer than a given point in time:

$ docker logs --since 10m <container>

In this example, you see only log messages from the last 10 minutes. You can also pass an absolute timestamp, although relative values are usually enough for basic work.
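
For instance, --since accepts an absolute timestamp like this (the date is made up):

$ docker logs --since 2024-05-01T09:00:00 <container>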

The --tail option limits the output to the last N lines:

$ docker logs --tail 100 <container>

This prints only the 100 most recent log lines. You can combine --tail with -f to see recent history and then keep following new output:

$ docker logs --tail 100 -f <container>

With this combination, you avoid scrolling through huge logs and still catch the current behavior.

Use --tail and --since when dealing with long-running containers. This avoids flooding your terminal and makes it easier to focus on relevant errors.

Handling stdout and stderr

On a terminal, the logs command shows stdout and stderr interleaved, and for many situations this is fine because you only care about the message content.

Under the hood, however, Docker records the two streams separately: docker logs replays the container's stdout on your shell's stdout and its stderr on your shell's stderr (unless the container was started with a pseudo-TTY, in which case everything arrives on stdout). If you need to distinguish normal output from errors, ordinary shell redirection lets you look at just one stream.

This separation becomes useful when your application writes informational messages to stdout but reports error conditions on stderr. You can then focus directly on failures without distraction.
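
For example, redirect away the stream you do not want:

$ docker logs <container> 2>/dev/null
$ docker logs <container> 1>/dev/null

The first form discards stderr and leaves only normal output; the second discards stdout so that only error messages remain on screen.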

Logs from Stopped Containers

Once a container stops, its logs do not disappear immediately. As long as the container object still exists on the host, the logs command continues to work.

You can inspect what happened before a crash or failure by running:

$ docker logs <stopped-container>

This allows you to debug problems that caused the container to exit unexpectedly, without needing to reproduce the issue in real time.
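
If you no longer remember the container's name, list stopped containers first and then pull their logs:

$ docker ps -a --filter "status=exited"
$ docker logs <stopped-container>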

However, if you remove the container, its logs are no longer reachable through the normal Docker interface. Long-term retention through log forwarding or external aggregation is handled at a later stage, not within basic log viewing.

Timestamps and Log Format

The raw messages that your application prints do not automatically include timestamps. Docker can prepend timestamps when it stores or displays logs, which helps when you correlate events across different containers or external systems.

To include timestamps in the output, pass the --timestamps (or -t) option when viewing logs. This inserts a time value at the beginning of each line, in a consistent format.

This makes it easier to reconstruct the sequence of events, especially when multiple containers interact.
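
For example, to attach timestamps to a plain dump or to a followed stream of recent entries:

$ docker logs -t <container>
$ docker logs --timestamps --since 1h -f <container>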

Always enable timestamps when you need to correlate issues between multiple containers or with external systems such as databases or load balancers.

Common Patterns When Using Logs

In daily work, several patterns appear repeatedly. When starting a new container, you often view its logs immediately to confirm that it started correctly. For example, you might start a service, then run logs with the follow option to watch initialization steps and check for configuration errors.

When an application misbehaves after running for some time, you first limit the output to recent entries with --since or --tail, so you can quickly scan for errors. You then refine your search by focusing on a specific container or on the error stream only.

When a container fails and exits, you inspect its logs to find the last error messages before it stopped. This can reveal missing environment variables, connection problems, or permission issues.
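
A minimal version of that last pattern, assuming a hypothetical container named web that keeps exiting:

$ docker ps -a
$ docker logs --tail 50 web
$ docker logs web 2>&1 | grep -i error

The grep step is optional; it simply narrows long output to lines that mention errors.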

These simple practices form the basis of effective debugging, before you move to more advanced tools like executing commands inside containers or inspecting container metadata in detail.

The Relationship Between Logging and Application Design

The usefulness of Docker logs depends heavily on how the application inside the container handles logging. If the application writes clear, structured messages to stdout and stderr, Docker logging becomes powerful and convenient.

If, instead, the application logs to arbitrary files deep inside the filesystem, you need additional steps to reach those files, and the basic logs command no longer shows everything you need.
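
When you cannot change the application itself, a common workaround is to symlink its log files to the standard streams at image build time; the official nginx image takes this approach for its access and error logs. A Dockerfile fragment along these lines (the paths are illustrative):

RUN ln -sf /dev/stdout /var/log/app/access.log \
 && ln -sf /dev/stderr /var/log/app/error.log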

For a container-friendly design, configure applications to write logs to the standard streams, rely on Docker to capture and expose them, and integrate external log aggregation later in your workflow. This approach keeps the immediate viewing of logs simple and consistent across different images and services.
