Why Intrusion Detection Matters
Intrusion detection is about noticing when something bad or unexpected is happening on your system, as early as possible. Firewalls and hardening try to prevent attacks; intrusion detection tries to spot when prevention has failed or is being bypassed.
On a Linux system, intrusion detection answers questions like:
- Has someone gained unauthorized access?
- Is malware or a rootkit active?
- Are important files being modified unexpectedly?
- Are services being probed or abused?
You won’t build a full security operations center in this chapter, but you’ll learn the core ideas and where common Linux tools fit.
Types of Intrusion Detection
Intrusion detection systems (IDS) are commonly classified along two axes:
Host-based vs Network-based
- Host-based IDS (HIDS)
- Runs on an individual machine.
- Monitors logs, file integrity, running processes, configuration changes, etc.
- Examples: AIDE, OSSEC, Wazuh, auditd-based setups.
- Network-based IDS (NIDS)
- Monitors network traffic (usually at a gateway, router, or tap).
- Looks for suspicious packets, patterns, or connections.
- Examples: Snort, Suricata, Zeek.
As a Linux sysadmin, you’ll almost always start with host-based intrusion detection on your servers and may later integrate network-based tools in larger environments.
Signature-based vs Anomaly-based
- Signature-based detection
- Uses known patterns (signatures) of malicious activity.
- Similar to antivirus rules: “If you see pattern X, it’s an attack.”
- Pros: Precise for known attacks; fewer false alarms for those.
- Cons: Won’t detect new/unknown attack types.
- Example: Matching a sequence of bytes in a packet, or a known malicious command pattern in logs.
- Anomaly-based detection
- Learns or defines what “normal” looks like and alerts on deviations.
- Examples:
- Sudden spike in outbound connections from a web server.
- A user account logging in at strange times from unusual IPs.
- Pros: Can catch new, unknown attacks.
- Cons: More false positives; tuning is required.
Most practical systems use both: signatures for known threats and anomaly rules/baselines for the rest.
Key Data Sources for Intrusion Detection on Linux
Intrusion detection relies on observability: you can only detect what you monitor. On Linux, common data sources include:
System Logs
- Authentication logs (commonly /var/log/auth.log or /var/log/secure):
  - Failed/successful SSH logins
  - sudo usage
  - Account lockouts or password changes
- System logs (e.g. /var/log/syslog, /var/log/messages):
  - Service starts/stops
  - Kernel messages
  - Hardware and network events
- Application logs (e.g. /var/log/nginx/, /var/log/httpd/, database logs):
  - Suspicious HTTP requests
  - Repeated login failures to web apps
  - SQL errors that may indicate injection attempts
Basic detection can start simple: tools that scan these logs for patterns (e.g. repeated failures from the same IP) and react.
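To make that concrete, here is a minimal sketch of pattern-based log scanning. The sample lines below stand in for a real /var/log/auth.log (Debian-style path; RHEL-family systems use /var/log/secure):

```shell
# Count failed SSH logins per source IP, using sample log lines in place
# of the real auth log.
cat > /tmp/auth.sample <<'EOF'
Jan 10 03:12:01 host sshd[811]: Failed password for root from 203.0.113.9 port 40122 ssh2
Jan 10 03:12:04 host sshd[811]: Failed password for root from 203.0.113.9 port 40130 ssh2
Jan 10 03:13:55 host sshd[812]: Failed password for invalid user admin from 198.51.100.7 port 51010 ssh2
EOF

# Extract the source IP after "from" and count occurrences per IP
grep 'Failed password' /tmp/auth.sample \
  | grep -oE 'from [0-9.]+' \
  | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

An IP with dozens or hundreds of failures is an obvious candidate for blocking; this is exactly the signal tools like Fail2ban automate.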
File Integrity
Compromise often involves changing files:
- Adding or altering binaries (/bin, /usr/bin)
- Modifying configuration files (/etc)
- Planting backdoors in web directories
File integrity monitoring tools create a database of checksums and other metadata for critical files, then periodically compare:
- If a checksum changes unexpectedly, something modified the file.
- If a new executable appears in a sensitive directory, that’s suspicious.
Linux tools used for this include AIDE, Tripwire, and others.
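The checksum-database idea can be sketched in a few lines of shell. Real tools also record permissions, owners, and timestamps; all paths here are illustrative:

```shell
# Minimal file-integrity sketch: hash a directory tree, store the hashes,
# and diff later to detect changes.
mkdir -p /tmp/fim-demo
echo 'server_tokens off;' > /tmp/fim-demo/app.conf

# 1. Build a baseline database of checksums
find /tmp/fim-demo -type f -exec sha256sum {} + | sort > /tmp/fim-baseline.db

# 2. Simulate a tampered file
echo 'malicious line' >> /tmp/fim-demo/app.conf

# 3. Recompute and compare: any difference means a file changed
find /tmp/fim-demo -type f -exec sha256sum {} + | sort > /tmp/fim-current.db
diff /tmp/fim-baseline.db /tmp/fim-current.db || echo "CHANGE DETECTED"
```

The hard parts in practice are not the hashing but protecting the baseline from tampering and keeping it current through legitimate updates.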
Processes and System State
Abnormal or malicious activity often shows in:
- Unexpected processes or services running
- Processes running from unusual paths (like /tmp or user home directories)
- High CPU/network usage by non-typical services
While basic monitoring tools (covered elsewhere) help you see this, intrusion detection frameworks can automate checks and alerting.
Network Connections
Even on a single host, outbound and inbound connections tell a story:
- Unexpected outbound connections to unknown IPs or countries
- Unusual listening ports
- High volume of connections from a small set of IPs (possible brute force)
Host-based IDS agents can watch for these patterns and log or alert.
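Even by hand, counting established connections per remote IP highlights outliers. On a live system you would feed this from `ss -tn state established`; the sample lines below stand in for that output (local address in column 3, peer address in column 4):

```shell
# Sample rows in the shape of `ss -tn state established` output
cat > /tmp/ss.sample <<'EOF'
0 0 10.0.0.5:22 203.0.113.9:40122
0 0 10.0.0.5:22 203.0.113.9:40130
0 0 10.0.0.5:443 198.51.100.7:51010
EOF

# Count connections per remote IP; a surprising leader is worth a look
awk '{split($4, peer, ":"); print peer[1]}' /tmp/ss.sample \
  | sort | uniq -c | sort -rn
```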
Common Approaches and Tools (Conceptual)
This is an overview of how typical Linux intrusion detection stacks are built conceptually, without going into full deployment steps.
Log-based Detection (Example: Fail2ban-style Behavior)
Concept:
- Continuously read log files.
- Look for patterns that match rules (regular expressions).
- Take automatic action when patterns are seen frequently.
For example, a rule could:
- Watch SSH logs for repeated failed logins from the same IP.
- If more than $N$ failures in $T$ minutes, ban that IP via firewall.
In practice, tools follow a pattern:
- Jails/filters define what logs to watch and what to match.
- Actions define what to do (block IP, send email, write to a log).
Even if you use different software, this “pattern → threshold → response” model is very common.
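The "pattern → threshold → response" model fits in a short script. Here the ban action is only echoed; on a real host it would add a firewall rule (e.g. via nftables or iptables), which is exactly what Fail2ban-style actions do:

```shell
# Ban any IP exceeding a failure threshold (response simulated with echo)
THRESHOLD=3
cat > /tmp/auth-bf.sample <<'EOF'
Failed password for root from 203.0.113.9 port 40122 ssh2
Failed password for root from 203.0.113.9 port 40130 ssh2
Failed password for root from 203.0.113.9 port 40131 ssh2
Failed password for root from 203.0.113.9 port 40132 ssh2
Failed password for admin from 198.51.100.7 port 51010 ssh2
EOF

grep 'Failed password' /tmp/auth-bf.sample \
  | grep -oE 'from [0-9.]+' | awk '{print $2}' \
  | sort | uniq -c \
  | while read -r count ip; do
      if [ "$count" -gt "$THRESHOLD" ]; then
        echo "would ban $ip ($count failures)"
      fi
    done
```

Real tools add the time window ($T$ minutes), unban timers, and whitelists on top of this core loop.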
File Integrity Monitoring (Example: AIDE-style Behavior)
Conceptual workflow:
- Initialize a database:
- Record file metadata (path, permissions, owner, hashes) for selected directories.
- Store the database safely:
- Ideally off-host or read-only, so an attacker can’t modify it easily.
- Run periodic checks:
- Compare current filesystem state to the database.
- Alert or report differences:
- New, changed, or removed files.
Key design choices:
- What to monitor: /etc, system binaries, web app code directories, scripts, cron jobs.
- How often: trade-off between detection speed and load.
- How to handle expected changes: updates and admin work cause legitimate changes, so you need a process to update the baseline.
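As one example of the "how often" decision, a nightly check is a common starting point. A sketch in /etc/crontab syntax, assuming a Debian-style AIDE install at /usr/bin/aide and working local mail delivery:

```
# m h dom mon dow user  command
30 2 * * *  root  /usr/bin/aide --check 2>&1 | mail -s "AIDE report" root
```

Running at 02:30 avoids peak load; the exact command and paths vary by distribution, so check your package's documentation.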
Host-based IDS Frameworks (OSSEC/Wazuh-style Behavior)
These combine many functions:
- Log collection and correlation
- File integrity monitoring
- Rootkit checks
- Registry/config monitoring (on non-Linux platforms)
- Active responses (block IP, kill processes, send alerts)
Architecture concepts:
- Agent on each host:
- Collects logs, file changes, local events.
- Manager/server:
- Receives data from agents.
- Applies rules and correlation.
- Generates alerts and dashboards.
This multi-host approach is more common in larger environments, but the concepts (rules, agents, correlation) are the same even for a single server.
Network-based Detection (Snort/Suricata-style Behavior)
At a conceptual level, a NIDS:
- Captures traffic:
- From a network interface in promiscuous mode or via a tap/mirror port.
- Decodes protocols:
- Understands TCP, HTTP, DNS, etc.
- Applies rules/signatures:
- Rules might detect:
- Known malware command-and-control patterns.
- SQL injection attempts.
- Port scans or weird protocol usage.
- Generates alerts:
- Writes to log files or sends events to a SIEM.
As a Linux admin, you might deploy such a tool on:
- A dedicated sensor VM in the same network
- A gateway machine that all traffic passes through
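To give a feel for signatures, here is a rule in the Snort/Suricata rule language; the message, sid, and thresholds are invented for illustration:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"Possible SSH brute force"; flow:to_server; threshold: type threshold, track by_src, count 5, seconds 60; sid:1000001; rev:1;)
```

Reading left to right: match TCP traffic from outside to port 22 on the home network, and alert once a single source exceeds 5 such connections in 60 seconds.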
Basic Intrusion Detection Practices for a Single Linux Host
Even before deploying complex tools, you can apply simple, practical habits:
1. Centralize and Regularly Review Logs
- Make sure important services log to predictable locations.
- Use basic tools (e.g. journalctl, less, grep) to spot:
  - Repeated authentication failures
  - Sudden spikes in error logs
- Even better: send key logs to a remote log server so an attacker can’t easily erase them.
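Spotting a spike can be as simple as bucketing log lines per hour. The sample lines stand in for /var/log/syslog; on a journald system you could feed this from `journalctl -p err --since today` instead:

```shell
# Sample syslog-style lines
cat > /tmp/syslog.sample <<'EOF'
Jan 10 03:01:10 host nginx: upstream timed out
Jan 10 04:02:11 host nginx: upstream timed out
Jan 10 04:05:19 host nginx: upstream timed out
Jan 10 04:40:02 host nginx: upstream timed out
EOF

# Print "Month Day Hour:00" per line, then count lines per hourly bucket
awk '{split($3, t, ":"); print $1, $2, t[1] ":00"}' /tmp/syslog.sample \
  | sort | uniq -c | sort -rn
```

An hour with triple the usual error count stands out immediately, even without any dedicated tooling.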
2. Establish Baselines
You can’t detect anomalies if you don’t know what “normal” looks like:
- Typical CPU, memory, and disk I/O usage for services
- Normal range of incoming connections and IP ranges
- Usual number of authentication failures per day
Once you know baselines, unusual spikes stand out, even with simple tools.
3. Protect Audit and Log Integrity
If an attacker can freely tamper with logs, intrusion detection becomes unreliable.
Basics:
- Restrict who can read and write log directories (usually root and specific service users).
- Avoid giving unnecessary sudo access that allows log deletion.
- Consider forwarding logs off the host (e.g. to a central log server or logging service).
4. Use Alerting, Not Just Logging
Logs that no one reads don’t provide detection. Even for very small setups:
- Use tools or simple scripts to:
- Email you on specific log patterns.
- Highlight daily summaries of suspicious events.
- Even minimal alerts (e.g. “10 failed SSH logins in 1 minute”) dramatically improve detection time.
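A minimal alert hook needs only a counter and a threshold. Sample data stands in for today's auth log here, and the commented `mail` line is one possible delivery path (it assumes a configured MTA):

```shell
# Emit an alert when failed logins pass a threshold
LIMIT=3
cat > /tmp/auth-today.sample <<'EOF'
Failed password for root from 203.0.113.9 port 40122 ssh2
Failed password for root from 203.0.113.9 port 40130 ssh2
Failed password for root from 203.0.113.9 port 40131 ssh2
Failed password for admin from 198.51.100.7 port 51010 ssh2
EOF

count=$(grep -c 'Failed password' /tmp/auth-today.sample)
if [ "$count" -gt "$LIMIT" ]; then
  echo "ALERT: $count failed SSH logins today"
  # echo "..." | mail -s "SSH alert" admin@example.com
fi
```

Dropped into a daily cron job, even a script this small turns unread logs into a signal you will actually see.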
Interpreting and Handling Intrusion Alerts
Detection is only half the story; you also need a basic idea of what to do:
- Validate the alert:
- Check the raw log entries or event details.
- Distinguish between:
- Benign misconfigurations or user mistakes.
- Real threats (e.g. brute-force attempts, web exploit attempts).
- Decide on containment:
- For network-based events: block offending IPs at the firewall.
- For suspected host compromise:
- Consider isolating the host from the network.
- Preserve logs and evidence before rebooting or wiping.
- Tune your rules:
- If an alert is noisy but harmless, adjust the rule to reduce false positives.
- If something bad got through without an alert, create or adjust rules to catch similar events in the future.
Over time, intrusion detection becomes an iterative tuning process: observe → alert → respond → refine.
Limitations and Common Pitfalls
Intrusion detection is powerful, but not magic. Be aware of:
- False positives:
- Too many noisy alerts lead to “alert fatigue”; admins stop paying attention.
- Careful tuning and sane thresholds are crucial.
- False negatives:
- New or stealthy attacks may evade signatures and simple anomaly rules.
- That’s why layered defenses still matter.
- Local-only deployment:
- If detection runs only on the compromised machine and the attacker gains root, they may disable or modify your IDS.
- External logging and monitoring add resilience.
- No automatic remediation:
- Automatically blocking things can break legitimate traffic if misconfigured.
- Start with alert-only mode, then gradually enable automated responses where you understand the impact.
Building a Simple Intrusion Detection Mindset
As a beginner, focus less on complex tools and more on developing habits:
- Think: “If this system were compromised, how would I know?”
- For every exposed service, ask:
- Where do its logs go?
- How would I detect abuse or exploitation?
- Gradually introduce:
- Log-based alerting (for repeated authentication failures, errors).
- File integrity checks for critical configs/binaries.
- Simple blocking rules against obvious brute-force behavior.
From there, you can grow into more advanced host and network IDS tools and eventually integrate them into broader security workflows.