Why Create Custom Logs
In many real systems the default logs are not enough. Your application might need to record business events, debug details, or security‑relevant actions in a structured and predictable way. Custom logs solve this by giving you full control over what is logged, where it is stored, and how it is formatted.
Custom logging can be done at several layers. You can log via the system logging service, usually systemd-journald or rsyslog, or you can write to your own dedicated log files. Understanding how to integrate with the existing logging infrastructure makes your logs easier to manage, rotate, and monitor.
Important: Always decide who will consume a log before you create it. Log content and format should be designed for automated tools first and humans second.
Using `logger` to Integrate With System Logging
The simplest way to create custom logs that participate in the system logging pipeline is to use the logger command. logger sends messages to the system logger, which then stores them in the journal or forwards them to traditional log files.
At its most basic, you can run:
logger "Backup completed successfully"This creates a syslog‑style message with a default tag, typically your shell name. To create useful custom logs, you should set a program tag, facility, and severity.
The tag is set with -t:
logger -t backup-job "Backup completed successfully"
Facilities and severities come from the syslog model. A facility is a category such as auth, daemon, or local0. The facilities local0 through local7 are reserved for local use, which makes them the natural choice for custom applications. A severity is the importance level, for example info, warning, or err.
You can set both with:
logger -p local0.info -t backup-job "Backup job finished in 35 seconds"
This message will now appear with facility local0 and priority info. On systemd‑based systems, you can see it with journalctl:
journalctl -t backup-job
You can also send structured data by embedding key‑value pairs in your message. For example:
logger -p local0.info -t backup-job "event=finish status=success duration=35s backup_id=42"
Tools that parse logs can then extract fields like status or duration.
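As a quick sketch, standard shell tools can already pull such fields out of the journal; the field names here follow the example message above:
# Print the duration of every finished backup recorded under the backup-job tag
journalctl -t backup-job -o cat | grep 'event=finish' | sed -n 's/.*duration=\([^ ]*\).*/\1/p'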
Rule: For custom syslog‑based logs, use facilities local0 to local7 and always assign a recognizable tag with -t. This keeps your messages separate from system logs and easier to filter.
Configuring Syslog to Route Custom Logs
If your system uses a traditional syslog daemon such as rsyslog, you can route messages with a specific facility or tag into their own files. This way, your custom logs are still handled by the system but are organized in dedicated locations.
A typical rsyslog configuration file resides in /etc/rsyslog.conf or in snippets under /etc/rsyslog.d/. To send all messages that use facility local0 to /var/log/custom-app.log, you can write a small configuration file.
Create /etc/rsyslog.d/custom-app.conf with content like:
if ($syslogfacility-text == 'local0') then /var/log/custom-app.log
& stop
This rule tells rsyslog to write any local0 message to /var/log/custom-app.log and then stop processing it further. Alternatively, using the traditional syntax:
local0.* /var/log/custom-app.log
After creating or modifying a configuration file, restart rsyslog:
sudo systemctl restart rsyslog
Now, any command using logger -p local0.info will write to /var/log/custom-app.log. You can verify this by sending a test message and checking the file.
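For example, a quick end‑to‑end check might look like this (the message text is arbitrary):
logger -p local0.info -t custom-app "routing test"
tail -n 1 /var/log/custom-app.log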
To target more specific subsets of logs, match additional attributes. For instance, to filter by program tag:
if $programname == 'backup-job' then /var/log/backup-job.log
& stopThis gives you separate logs for different tools, even when they share a facility.
Rule: Always confirm that your custom log file path is writable by the syslog daemon and that it resides under /var/log unless you have a strong reason to place it elsewhere.
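If the file does not exist yet, you can create it ahead of time. The owner and group used by the syslog daemon vary by distribution (syslog:adm is common on Debian‑based systems), so treat these names as assumptions to verify locally:
# Create an empty log file with restrictive permissions for the syslog daemon
sudo install -o syslog -g adm -m 640 /dev/null /var/log/custom-app.log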
Creating Custom Logs With `systemd-journald` Fields
On systems that rely primarily on systemd-journald, you can enrich logs with extra metadata fields. When you use logger, the journal automatically records trusted fields such as _PID and _UID (and _SYSTEMD_UNIT for messages that originate from a service), together with SYSLOG_IDENTIFIER.
You can set a stable identifier with -t, which becomes SYSLOG_IDENTIFIER:
logger -t backup-job "Backup started"
You can query all logs for this identifier:
journalctl SYSLOG_IDENTIFIER=backup-job
For services managed by systemd, the unit name adds another powerful filter. Messages from a unit such as backup.service can be viewed with:
journalctl -u backup.service
Within a service, you do not have to call logger. Anything written to stdout or stderr is captured by the journal. To make the log entries clear, prefix them yourself:
echo "level=info event=backup-start path=/data"
Later you can filter in tools or monitoring systems by parsing these key‑value pairs.
You can also define how the journal should store logs for that service through the unit’s configuration, for example by setting SyslogIdentifier or changing StandardOutput to journal or journal+console. This keeps your custom logs tightly integrated with other system logs while remaining easy to isolate.
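For example, a drop‑in created with sudo systemctl edit myapp.service (the unit name is illustrative) could carry just the logging‑related settings:
[Service]
SyslogIdentifier=myapp
StandardOutput=journal
StandardError=journal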
Application Level Custom Log Files
You might prefer your application to write to its own file instead of the system journal. This can be useful for cross‑platform programs, tools that need fine control over format, or when your environment does not allow direct access to systemd or syslog.
The simplest approach is to open a file under /var/log and append to it. Suppose you write a shell script that records events in /var/log/myapp.log:
#!/bin/sh
LOG_FILE="/var/log/myapp.log"

# log LEVEL MESSAGE...: appends one structured "key=value" line per event
log() {
    timestamp=$(date +"%Y-%m-%dT%H:%M:%S%z")
    level="$1"
    shift
    message="$*"
    echo "$timestamp level=$level msg=\"$message\"" >> "$LOG_FILE"
}

log info "Application started"
log warning "Low disk space on /data"
log error "Failed to connect to database"
The log function creates a simple structured record. The ISO‑like timestamp combined with key‑value pairs makes logs easy to parse. To use this in practice, ensure proper permissions. Typically, the script should run as a service account that has write access to that file, and the file should be created with appropriate ownership, for example owned by myapp:myapp with mode 640.
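A minimal setup sketch, assuming the myapp user and group already exist:
# Create the log file owned by the service account, mode 640
sudo install -o myapp -g myapp -m 640 /dev/null /var/log/myapp.log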
When running under systemd, you can instruct the service to manage file descriptors or use StandardOutput=file:..., but often it is cleaner to log to stdout and let systemd handle persistence through the journal.
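If you do want systemd to write a flat file directly, the unit can say so. Note that the append: variant shown here requires a relatively recent systemd; older versions only offer file:, which does not append, so check your version:
[Service]
ExecStart=/usr/local/bin/myapp
# Append stdout and stderr to a file instead of the journal
StandardOutput=append:/var/log/myapp.log
StandardError=append:/var/log/myapp.log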
Rule: Never let arbitrary users write directly to log files in /var/log. Restrict write access to the application or service account to protect integrity and prevent log tampering.
Designing Log Format and Structure
How useful a custom log is depends heavily on its format. There is a trade‑off between human readability and machine readability, and a clear structure greatly helps with indexing, search, and alerting.
A common pattern is to write one event per line with an explicit timestamp, level, and set of key‑value attributes. For example:
2026-01-08T18:35:20+0000 level=info event=login user=alice ip=192.0.2.15
2026-01-08T18:36:07+0000 level=error event=login-failed user=bob ip=198.51.100.23 reason="invalid password"
This structure has several useful properties. Each line stands alone as a complete event, which simplifies processing. The key‑value style avoids brittle positional parsing, and quoting values that contain spaces avoids ambiguity.
You can also use JSON for even more structured logs:
{"ts":"2026-01-08T18:36:07+0000","level":"error","event":"login-failed","user":"bob","ip":"198.51.100.23","reason":"invalid password"}JSON logs are well supported by many log collection systems, but they are slightly harder to read by hand. Whatever format you choose, be consistent across your application.
It can also help to standardize your severity levels. A simple and common set is:
Recommended levels: debug, info, warning, error, critical.
Use debug for verbose diagnostics, info for normal events, warning for unusual but non‑fatal situations, error for failed operations, and critical for conditions requiring immediate attention.
Always include at least the following in each log event: a timestamp, a level, and enough context to understand what happened. Context usually means an event name, a user or resource identifier, and any parameters needed to reconstruct the scenario.
Integrating Custom Logs With Log Rotation
Custom log files in /var/log must not grow indefinitely. They should be rotated and optionally compressed so that disk space remains under control and old data is either archived or deleted. On most Linux systems, logrotate manages this process.
Custom logs can be hooked into logrotate by creating a configuration snippet under /etc/logrotate.d/. For example, suppose you created /var/log/myapp.log. You could write a file /etc/logrotate.d/myapp:
/var/log/myapp.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    create 640 myapp myapp
    postrotate
        systemctl reload myapp.service >/dev/null 2>&1 || true
    endscript
}
This tells logrotate to process myapp.log daily, keep 14 old copies, compress rotated logs, skip if missing, avoid rotating empty files, and recreate the log file after rotation with the right permissions. The postrotate script reloads the service so that it reopens the log file after rotation.
If your application opens the log file once at startup and keeps the descriptor, it will keep writing to the rotated (renamed) file unless it reopens the path after rotation. Reload or restart hooks in logrotate solve this. If your program opens the file for every write, it automatically picks up the new file, but this is less efficient.
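If the application cannot be signaled to reopen its log, logrotate's copytruncate option is a common fallback: it copies the file and then truncates the original in place, so the open descriptor stays valid, at the cost of possibly losing lines written between the copy and the truncation. A minimal variant of the snippet above:
/var/log/myapp.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    # Copy, then truncate in place; no reopen needed, but lines written
    # during rotation can be lost
    copytruncate
}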
For logs that are written through syslog, rotation is usually handled by existing rules, such as /etc/logrotate.d/rsyslog. In that case you just need to ensure your custom log path is referenced in the appropriate logrotate configuration.
Custom Logging From Systemd Services
For services supervised by systemd, you can design custom logs without touching traditional syslog at all. A basic unit file might look like this:
[Unit]
Description=My Application Service
[Service]
ExecStart=/usr/local/bin/myapp
User=myapp
Group=myapp
SyslogIdentifier=myapp
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
Here, anything the program writes to stdout or stderr becomes part of the journal with SYSLOG_IDENTIFIER=myapp. You can then see your custom logs via:
journalctl -u myapp.service
journalctl SYSLOG_IDENTIFIER=myapp
Within your application, you still decide how the actual messages look. For instance, you can print lines in the structured formats already discussed. The journal will automatically add metadata such as _SYSTEMD_UNIT, _PID, _BOOT_ID, and the exact timestamp.
If you later decide that you need traditional flat files, you can add a journal forwarding or syslog bridge, or write a small tool that reads from the journal and writes filtered entries into a file.
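A minimal sketch of such a bridge, assuming the myapp.service unit from the example above and a hypothetical target path:
# Follow the journal for one unit and append plain-text lines to a file
journalctl -u myapp.service -f -o short-iso >> /var/log/myapp-flat.log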
For high‑volume services, consider tuning journald’s storage settings so that logs do not consume excessive disk space. This is done in the journald configuration files, not in the unit itself, but it is important when you create chatty custom logs.
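For illustration, caps like the following in /etc/systemd/journald.conf (or a drop‑in under /etc/systemd/journald.conf.d/) bound the journal's disk usage; the values are arbitrary examples, and systemd-journald must be restarted for them to take effect:
[Journal]
# Limit total persistent journal size (example value)
SystemMaxUse=500M
# Discard entries older than one month (example value)
MaxRetentionSec=1month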
Exposing Custom Logs for External Collection
Many organizations ship logs to remote systems for centralized analysis. When you create custom logs, think early about how they will be exported.
If you are logging through syslog facilities such as local0, the syslog daemon can forward those messages over the network. A simple rsyslog example for forwarding custom facility logs is:
local0.* @logserver.example.com:514
You can combine local routing and forwarding, so messages still go into /var/log/custom-app.log and are also sent to a remote log collector. Similar configurations exist for secure transport over TLS.
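For reference, the single @ in the example above selects UDP; doubling it selects TCP, which is more reliable and is the usual starting point for TLS transport (the certificate configuration TLS requires is not shown here):
# Forward local0 over TCP instead of UDP
local0.* @@logserver.example.com:514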
For custom files, external agents such as filebeat, fluent-bit, or other collectors can watch your log path and ship new lines to a central system. Those agents expect stable file paths, consistent formats, and rotation practices that they understand. Keeping your logs in /var/log and using standard rotation patterns simplifies integration with such tools.
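As one illustration, a minimal fluent-bit configuration fragment tailing this file might look like the following; the output stage and exact options depend entirely on your collection pipeline, so treat this as a sketch:
[INPUT]
    Name    tail
    Path    /var/log/myapp.log
    Tag     myapp

[OUTPUT]
    Name    stdout
    Match   myapp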
If your logs use a structured format like JSON or key‑value pairs, remote systems can more easily index and query them. This is one of the reasons to invest time in designing the format before deploying your custom logging scheme.
Ultimately, creating custom logs is not just about writing lines to a file. It involves choosing how messages flow through the system, which tools will handle and store them, and how other people and programs will search, alert, and report on those logs. By aligning with system logging infrastructure, using clear formats, and planning for rotation and export, you can build custom logs that are reliable and useful throughout the lifetime of your system.