Why Create Custom Logs?
System logs and audit logs cover a lot, but they don’t know about:
- Your application’s internal state and business logic
- Domain-specific events (payments, orders, access to sensitive records)
- Custom scripts, automation, and maintenance tasks
Custom logs let you:
- Record exactly the events you care about
- Control format and verbosity
- Integrate with centralized logging, SIEM, or monitoring
- Troubleshoot your own tools and services
This chapter focuses on how to create and manage custom logs in a Linux environment, and how to hook them into the existing logging stack.
Designing Your Custom Logs
Before writing any code, decide:
What to log
Typical categories:
- Events: “user X did Y”, “job Z started/finished”
- Errors/exceptions: failures, unexpected conditions
- Security-relevant actions: logins, role changes, config changes
- Performance indicators: execution time, queue length, retries
Avoid:
- Plaintext passwords, full credit card numbers, full auth tokens
- Large binary blobs (images, zip files) directly in logs
Log levels
Stay consistent with common levels:
- DEBUG – detailed diagnostics, disabled in production
- INFO – normal operations and high-level events
- WARNING – unexpected but non-fatal situations
- ERROR – failed operations that affect a feature
- CRITICAL/ALERT – system-wide impact, requires immediate action
Decide which levels are used where, and stick to them.
Log format
Common choices:
- Plain text (human-friendly). Example:
  2025-01-01T12:00:00Z INFO user=alice action=login result=success
- Structured text (machine-friendly):
  - Key-value pairs: time=... level=INFO user=alice action=login
  - JSON: {"time":"...","level":"INFO","user":"alice","action":"login"}
If you plan to use centralized logging and searching, structured logs (especially JSON) are much easier to parse.
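For example, here is a minimal sketch that emits a JSON log line from a shell script, assuming jq is available (the field names and log path are just examples):

# Emit one compact JSON log entry; jq's "now | todate" yields an ISO 8601 UTC timestamp
jq -nc --arg level INFO --arg user alice --arg action login \
  '{time: (now | todate), level: $level, user: $user, action: $action}' \
  >> /var/log/myapp/app.log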
Basic Logging from Shell Scripts
For scripts, you usually have three goals:
- Write to a dedicated log file
- Optionally send to syslog/journal
- Include timestamps and levels
Simple file-based logging
#!/usr/bin/env bash
LOG_FILE="/var/log/myapp/backup.log"
# Ensure directory exists
mkdir -p "$(dirname "$LOG_FILE")"
# Optionally restrict permissions
chmod 750 "$(dirname "$LOG_FILE")"
log() {
    local level="$1"; shift
    local msg="$*"
    # ISO 8601 timestamp
    local ts
    ts="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"
    printf "%s [%s] %s\n" "$ts" "$level" "$msg" >> "$LOG_FILE"
}
log INFO "Backup job started"
if rsync -a /data/ /backup/; then
    log INFO "Backup completed successfully"
else
    rc=$?
    log ERROR "Backup FAILED with exit code $rc"
fi

Key points:
- Use mkdir -p to ensure log directories exist.
- Use absolute paths for log files.
- Add timestamps in UTC (-u) to avoid confusion across time zones.
Logging to syslog from a script
Using the logger command:
#!/usr/bin/env bash
APP_NAME="my-backup"
log() {
    local level="$1"; shift
    local msg="$*"
    logger -t "$APP_NAME" -p "user.$(echo "$level" | tr '[:upper:]' '[:lower:]')" "$msg"
}
log INFO "Backup started"
# ...
log ERR "Backup failed: $reason"Notes:
-tsets the tag (program name).-psets facility and priority, e.g.user.info,user.err.- These logs go to syslog/journald and can be filtered or forwarded like any other service.
You can then view them with:
journalctl -t my-backup- or by configuring rsyslog/syslog-ng to send them elsewhere.
Custom Application Logs with syslog/journald
If your application runs as a service, integrating with syslog/journald is often better than writing ad-hoc files under /var/log.
Sending logs to syslog directly (C-style API concept)
Many languages have libraries that wrap the system syslog(3) API. The pattern is:
- Open syslog with an identifier
- Log messages with a level
- Close when done (optional)
Conceptual flow:
openlog("myapp", LOG_PID, LOG_USER);syslog(LOG_INFO, "Started processing");closelog();
In higher-level languages (Python, Go, etc.), you use built-in logging libraries that support syslog handlers/transports.
Journald-native logging
Systemd-aware applications can log directly to journald using:
- sd_journal_print() (C)
- systemd-journald logging drivers (for containers)
- Environment variables like SYSTEMD_LOG_LEVEL and SYSTEMD_LOG_TARGET, when supported
Once logs are in the journal, you can:
- Filter by unit: journalctl -u myapp.service
- Filter by field: journalctl SYSLOG_IDENTIFIER=myapp
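From a shell script you can also write journal entries with custom fields, as in this sketch, assuming a util-linux logger recent enough to support --journald (the BACKUP_TARGET field is an arbitrary example; journald stores any FIELD=value pairs you supply):

logger --journald <<EOF
MESSAGE=Backup finished
PRIORITY=6
SYSLOG_IDENTIFIER=my-backup
BACKUP_TARGET=/backup
EOF

journalctl SYSLOG_IDENTIFIER=my-backup --output=verbose then shows the custom field alongside the standard ones.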
Defining Log Locations and Permissions
Custom logs usually live under /var/log:
- Application-specific directory: /var/log/myapp/
- Service-specific file: /var/log/myapp/service.log
Key considerations:
Directory creation and ownership
For a service user myapp:
# As root (e.g. in install script or package post-install)
install -d -m 750 -o myapp -g myapp /var/log/myapp

This ensures:
- Only myapp (and root) can write.
- Logs aren't world-readable if they contain sensitive info.
File permissions
When you create log files manually, set permissions explicitly:
touch /var/log/myapp/app.log
chown myapp:myapp /var/log/myapp/app.log
chmod 640 /var/log/myapp/app.log
If your application creates the file, make sure its process runs as myapp and sets a safe umask (e.g. umask 027).
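As a quick illustration (the path is an example), umask 027 clears group-write and all world permissions on files the process creates:

umask 027
touch /var/log/myapp/app.log   # created as rw-r----- (0666 & ~027 = 0640)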
logrotate for Custom Logs
To avoid huge, unmanageable files, integrate your logs with logrotate.
Basic logrotate config for a custom log file
Create /etc/logrotate.d/myapp:
/var/log/myapp/app.log {
weekly
rotate 12
compress
delaycompress
missingok
notifempty
create 640 myapp myapp
postrotate
# Reload or signal your daemon if needed
systemctl kill -s HUP myapp.service 2>/dev/null || true
endscript
}

Meaning:
- weekly: rotate once per week.
- rotate 12: keep 12 archives (about 3 months).
- compress / delaycompress: compress older logs.
- missingok: don't complain if the file is absent.
- notifempty: don't rotate empty files.
- create 640 myapp myapp: create a new log after rotation, with correct permissions.
- postrotate: run commands after rotating (e.g. signal the service to reopen logs).
Make sure your application handles SIGHUP or uses reopen-on-demand patterns if it keeps the log file open.
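For a long-running shell daemon, the reopen-on-HUP pattern can be sketched like this (the heartbeat loop is a stand-in for real work):

#!/usr/bin/env bash
LOG_FILE="/var/log/myapp/app.log"

reopen_log() {
    # Re-point stdout/stderr at the log path; after rotation this opens the new file
    exec >>"$LOG_FILE" 2>&1
}
trap reopen_log HUP
reopen_log

while true; do
    echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) [INFO] heartbeat"
    # bash runs the trap only after the current command returns,
    # so a long sleep delays the reopen slightly; acceptable for a sketch
    sleep 60
done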
Custom Log Fields and Correlation
To make logs useful for debugging and incident analysis:
Use correlation IDs
Generate a unique ID per request/job/workflow and include it in all log entries related to that unit of work:
correlation_id=123e4567-e89b-12d3-a456-426614174000
Example log line:
2025-01-01T12:00:30Z INFO correlation_id=... user=alice action=create_order order_id=42
This allows you to filter logs by correlation ID and see the full chain of events.
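In a shell script, a minimal sketch generates the ID once and threads it through every entry (uuidgen ships with util-linux; the /proc fallback is Linux-specific):

CORRELATION_ID="$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)"

log() {
    local level="$1"; shift
    printf '%s %s correlation_id=%s %s\n' \
        "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$level" "$CORRELATION_ID" "$*"
}

log INFO "user=alice action=create_order order_id=42"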
Include context
Beyond timestamp and level, useful fields include:
- user, role
- remote_addr, user_agent
- request_path, method
- service, component, instance
- error_code, retry_count, duration_ms
Use consistent names to make searching easier.
Application-Level Logging Patterns
Separating logs by concern
Common patterns:
- Access logs: one line per request (web servers, APIs)
- Error logs: stack traces, internal errors
- Audit logs: security-sensitive operations
You can:
- Use separate files: /var/log/myapp/access.log, /var/log/myapp/error.log, /var/log/myapp/audit.log (see the sketch below)
- Or use one stream with a type field: log_type=access|error|audit
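A small sketch of the separate-files pattern, with one helper that routes entries by concern (paths and fields are examples):

log_to() {
    local type="$1"; shift   # access | error | audit
    printf '%s log_type=%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$type" "$*" \
        >> "/var/log/myapp/$type.log"
}

log_to access "method=GET request_path=/orders result=200"
log_to audit "user=alice action=role_change target=bob"

Note it also stamps log_type= in each line, so the two patterns can be combined.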
Log verbosity control
Provide configuration for:
- Minimum log level (INFO, DEBUG, etc.)
- Destinations (file, syslog, stdout)
- Format (plain vs JSON)
Common approaches:
- Config file (e.g. /etc/myapp/config.yaml)
- Environment variables (MYAPP_LOG_LEVEL=debug)
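A minimal threshold sketch in bash, driven by the hypothetical MYAPP_LOG_LEVEL variable above (associative arrays require bash 4+):

declare -A LEVELS=([DEBUG]=0 [INFO]=1 [WARNING]=2 [ERROR]=3)

CONFIGURED="${MYAPP_LOG_LEVEL:-INFO}"
THRESHOLD="${LEVELS[${CONFIGURED^^}]:-1}"   # fall back to INFO for unknown values

log() {
    local level="$1"; shift
    # Silently drop messages below the configured threshold
    [ "${LEVELS[$level]:-1}" -lt "$THRESHOLD" ] && return 0
    printf '%s [%s] %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$level" "$*"
}

log DEBUG "only shown when MYAPP_LOG_LEVEL=debug"
log INFO "shown at the default level"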
Custom Logs in systemd Service Units
Systemd can manage where your logs go, especially for services you run.
Logging to journald via stdout/stderr
If your service writes logs to standard output/error, systemd captures them:
Unit example /etc/systemd/system/myapp.service:
[Unit]
Description=My Custom Application
[Service]
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/myapp-binary
Restart=on-failure
# Optional tuning
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target

Then read logs with:
- journalctl -u myapp.service
- journalctl -u myapp.service -f (follow)
This removes the need for manual file handling and lets you centralize logs easily.
Redirecting to a dedicated file from systemd
If you prefer files:
[Service]
User=myapp
Group=myapp
ExecStart=/opt/myapp/myapp-binary
StandardOutput=append:/var/log/myapp/app.log
StandardError=append:/var/log/myapp/app-error.log
Systemd creates and appends to these files (the append: syntax requires systemd 240 or later); combine this with logrotate to manage size.
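One caveat: systemd keeps these files open, so a create-style rotation would leave the service writing to the renamed file. One workaround is logrotate's copytruncate directive, at the cost of possibly losing a few lines written during the copy; a sketch:

/var/log/myapp/app.log /var/log/myapp/app-error.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    copytruncate
}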
Integrating Custom Logs with Centralized Logging
Once you have custom logs, you often want them in:
- ELK/Elastic Stack (Elasticsearch + Logstash + Kibana)
- Loki + Grafana
- Graylog, Splunk, or a SIEM
Common integration patterns:
- Filebeat/Fluentd/Fluent Bit: configure them to watch /var/log/myapp/*.log and parse JSON or key-value lines.
- Journald export: use journalctl --output=json or native integrations to ship all service logs.
- Syslog forwarding: rsyslog/syslog-ng rules to forward messages with SYSLOG_IDENTIFIER="myapp".
For structured logs, define a consistent schema (field names and types) to make dashboards and alerts easier to build.
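As a starting point, here is a one-off export of a service's journal entries as JSON, which most shippers and SIEMs can ingest directly (the output path is an example):

journalctl SYSLOG_IDENTIFIER=myapp --output=json --since "1 hour ago" \
    > /tmp/myapp-logs.json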
Security Considerations for Custom Logs
When designing custom logs:
- Avoid sensitive data
  - Mask or hash PII and secrets: user_email=hash(...), token=redacted (see the sketch after this list).
- Control access
  - Owner/group and mode should reflect sensitivity: chmod 640 or stricter.
  - Group membership should be limited to admins and service accounts.
- Integrity
  - For high security, consider:
    - Append-only filesystems
    - Remote logging (so attackers can't tamper with local logs easily)
    - Cryptographic signing or checksums (integrity monitoring is covered elsewhere)
- Retention
  - Balance troubleshooting needs with privacy and disk usage.
  - Configure logrotate retention to match policy.
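A sketch of the hashing idea from the first bullet, using sha256sum from coreutils (in practice, add a secret salt so hashes can't be reversed by guessing likely inputs; log() is the helper defined earlier in this chapter):

hash_pii() {
    # One-way hash: values can still be correlated across entries but not read back
    printf '%s' "$1" | sha256sum | cut -d' ' -f1
}

log INFO "user_email=$(hash_pii "alice@example.com") action=login"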
Testing and Validating Custom Logs
Before relying on your logging in production:
- Trigger known events
  - Run test actions and ensure they appear as expected (see the smoke-test sketch after this list).
- Check timestamps and time zones
  - Verify order and correlation with other logs.
- Check rotation
  - Force rotation: logrotate -f /etc/logrotate.d/myapp
  - Ensure the application continues logging after rotation.
- Searchability
  - Try typical queries you'd do during an incident: by user, by action, by correlation ID, by error code.
- Performance impact
  - Heavy synchronous logging can slow down applications.
  - Consider asynchronous logging, buffered writes, and a reduced log level in hot paths.
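A tiny smoke test for the "trigger known events" step, reusing the syslog approach from earlier in this chapter (tag and marker text are arbitrary):

#!/usr/bin/env bash
MARKER="smoke-test-$(date +%s)"
logger -t myapp -p user.info "$MARKER"
sleep 1   # give journald a moment to persist the entry
if journalctl -t myapp --since "1 minute ago" | grep -q "$MARKER"; then
    echo "OK: log entry found"
else
    echo "FAIL: log entry missing" >&2
    exit 1
fi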
By designing, implementing, and integrating custom logs carefully, you ensure that your Linux systems and applications are observable, diagnosable, and compatible with your broader logging and auditing strategy.