3.7.5 Intrusion detection basics

Why Intrusion Detection Matters

Intrusion detection is about noticing when something or someone is doing what they should not be doing on your system. Firewalls, authentication policies, and SSH security try to prevent attacks. Intrusion detection accepts that some attacks or mistakes will still happen and focuses on spotting them early, limiting damage, and gathering information about what occurred.

On Linux, intrusion detection means monitoring activity such as logins, file changes, network use, and running processes, then alerting you if something looks suspicious. For a beginner in system administration, you do not need to build a fully mature security operations center. You should, however, understand the basic ideas behind detecting intrusions, and how common Linux tools fit into those ideas.

Intrusion detection does not replace secure configuration, firewalls, or good authentication. It complements them by answering the question: “Is anyone getting around my protections or misusing this system?”

Key Concepts: IDS, IPS, Host and Network

Two important terms appear frequently in discussions of intrusion detection: IDS and IPS. Another important distinction is between host based and network based systems.

An Intrusion Detection System, usually abbreviated IDS, watches activity and reports on potential attacks or misuse. It can generate alerts, log details, or feed information into other tools. By itself, an IDS does not block anything. Its role is detection and visibility.

An Intrusion Prevention System, or IPS, uses similar techniques but is placed so that it can automatically block or modify traffic or actions in response to what it sees. For example, it might drop packets that match a known attack signature or automatically add a firewall rule to block an IP address.

You will also encounter host based intrusion detection systems and network based intrusion detection systems. A host based IDS, usually written HIDS, runs on a single server or endpoint and focuses on what happens on that machine. It inspects logs, file changes, local processes, and sometimes system calls. A network based IDS, or NIDS, monitors traffic on a network segment, often from a dedicated sensor or from a tap or mirror port on a switch.

For Linux administrators, host based intrusion detection is often the first practical step, because you can install it directly on your servers without needing special network hardware or switch configuration.

What Intrusion Detection Tries to Catch

Intrusion detection tools look for patterns that suggest something has gone wrong, or that someone is trying to make it go wrong. Common categories include the following.

Unusual authentication activity is a basic signal. Many failed SSH logins from the same IP or username can indicate a brute force attack. A login from a country where your users never work or at a very unusual time may be suspicious, especially on servers with a fixed set of expected users.

Unexpected changes to critical files, such as binaries in /bin or /usr/bin, configuration files in /etc, or startup scripts, can indicate that malware or an attacker has modified the system to maintain access or hide activity. Monitoring these files for unexpected changes is a central idea in host based intrusion detection.

Abnormal processes and resource usage also matter. A process running as root that you did not install, high CPU from an unknown command, or network connections from programs you do not recognize can all point to compromise or misuse.
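
To make this concrete, the following Python sketch lists processes whose real user is root by reading /proc, so the result can be compared against what you expect on that machine. It is a simplified illustration of the kind of check a HIDS automates, not a substitute for one.

    import os

    def processes_running_as_root():
        """Return (pid, name) pairs for processes whose real UID is 0."""
        found = []
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/status") as status:
                    fields = dict(line.split(":", 1) for line in status if ":" in line)
            except OSError:
                continue  # process exited or is inaccessible
            uid = fields.get("Uid", "").split()
            if uid and uid[0] == "0":
                found.append((int(pid), fields.get("Name", "unknown").strip()))
        return found

    if __name__ == "__main__":
        for pid, name in processes_running_as_root():
            print(f"{pid}\t{name}")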

Network oriented signs include traffic to known malicious destinations, scans across many ports, or use of unusual protocols. These are more in the domain of network based IDS, but a Linux server can still observe its own connections and provide clues.

Intrusion detection is not about perfection. It is about noticing patterns that are unlikely to appear during normal operations on your system, then using those clues to investigate further.

Detection Approaches: Signature and Anomaly

Intrusion detection systems often use a combination of two broad approaches: signature based detection and anomaly based detection.

Signature based detection relies on known patterns of malicious activity, such as a particular HTTP request that matches a known exploit or a specific sequence of system calls used by a rootkit. The IDS vendor or community provides a set of signatures, and the tool matches observed activity against them.

This approach is very effective for common, well understood attacks. It generates fewer false positives when the signatures are well tuned. However, it can miss new attacks, custom malware, or attackers who use slightly modified techniques that do not match existing signatures.
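
A minimal Python sketch of signature based matching against log lines looks like this. The two patterns here, a directory traversal attempt and a Shellshock style probe, are illustrative signatures chosen for the example, not an official rule set.

    import re

    # Illustrative signatures: each pairs a name with a regex for a known attack pattern.
    SIGNATURES = [
        ("directory traversal", re.compile(r"\.\./\.\./")),
        ("shellshock probe", re.compile(r"\(\)\s*\{\s*:;\s*\}")),
    ]

    def match_signatures(log_line):
        """Return the names of all signatures that match a single log line."""
        return [name for name, pattern in SIGNATURES if pattern.search(log_line)]

    line = 'GET /cgi-bin/status HTTP/1.1 "() { :; }; /bin/true"'
    hits = match_signatures(line)
    if hits:
        print(f"ALERT: {', '.join(hits)} in: {line}")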

Anomaly based detection tries to model what is normal for your system, then alerts when behavior deviates significantly from that baseline. For example, if a particular server usually sees two SSH logins a day and suddenly sees hundreds, that deviation can trigger an alert. Similarly, if a configuration file that never changes is modified, that is suspicious even if there is no signature for a specific attack.

Anomaly based detection can catch new or unknown attacks, but it tends to produce more false positives while you refine the baseline. On Linux, many host based tools implement a simple version of anomaly detection by treating any unexpected change to protected files as suspicious.
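
A minimal sketch of the baseline idea, assuming you already extract a daily count of SSH logins from your logs, is to keep a short history and alert when today's count is far above the recent average. The sigma and minimum values below are example thresholds you would tune for your environment.

    from statistics import mean, pstdev

    def is_anomalous(today_count, history, sigma=3.0, minimum=10):
        """Flag today's login count if it is far above the historical baseline.

        history is a list of daily counts from normal operation. The check is a
        simple threshold: more than `sigma` standard deviations above the mean,
        and above an absolute floor so quiet servers do not alert on tiny changes.
        """
        baseline = mean(history)
        spread = pstdev(history) or 1.0  # avoid a zero threshold on flat history
        return today_count > minimum and today_count > baseline + sigma * spread

    # Example: a server that normally sees a handful of logins per day.
    history = [2, 3, 1, 2, 4, 2, 3]
    print(is_anomalous(5, history))    # False: within normal variation
    print(is_anomalous(250, history))  # True: likely a brute force or misuse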

In simple environments, a pragmatic combination works well: use signature based tools where possible, and supplement them with focused anomaly checks, such as monitoring integrity of critical paths and basic patterns in logs.

File Integrity Monitoring on Linux

File integrity monitoring is a core technique in host based intrusion detection. The idea is simple. You record a trusted state of important files, then later you check whether anything has changed. The question is not only “has the file timestamp changed” but also “has the file content changed in any way.”

The trusted state is usually stored as cryptographic checksums, sometimes alongside file sizes, permissions, and ownership. A checksum is the result of running the file contents through a hash function such as SHA-256, which produces a fixed length string. If the file changes in any way, even by a single byte, the checksum will change.

If you denote the checksum function as $H(f)$ for file $f$, then the integrity check compares $H(f_{\text{current}})$ with $H(f_{\text{baseline}})$. If they differ, something has modified that file. Tools combine this with knowledge of which files should or should not change often.

On Linux, classic file integrity monitoring tools such as AIDE or Tripwire operate in this fashion. You choose directories and files that you consider critical. You generate a baseline database of checksums and metadata. Then you periodically run checks, often scheduled with cron, to compare current state with the baseline. Reports list new files, deleted files, and modified files.
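
The following Python sketch shows the baseline-and-compare idea in miniature: it hashes a fixed list of paths with SHA-256 and reports changes against a stored baseline. The monitored paths and baseline location are examples, and real tools such as AIDE also track permissions, ownership, and new or deleted files across whole directory trees.

    import hashlib
    import json
    from pathlib import Path

    # Example paths; a real configuration would cover /etc, /bin, /usr/bin, and so on.
    MONITORED = ["/etc/passwd", "/etc/ssh/sshd_config"]
    BASELINE_FILE = Path("/var/lib/integrity-baseline.json")  # example location

    def sha256_of(path):
        """Return the SHA-256 hex digest of a file's contents."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_baseline():
        BASELINE_FILE.write_text(json.dumps({p: sha256_of(p) for p in MONITORED}))

    def check_against_baseline():
        baseline = json.loads(BASELINE_FILE.read_text())
        for path, expected in baseline.items():
            if sha256_of(path) != expected:
                print(f"MODIFIED: {path}")

    # First run: build_baseline(). Later, for example from cron: check_against_baseline().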

File integrity monitoring is very powerful on systems that change infrequently, such as core infrastructure servers. On rapidly changing development servers, you must be more selective about what you monitor to avoid constant alerts. Focus on binaries, system directories, and fixed configuration paths. Application data and logs often change legitimately and are usually not monitored at this level.

If you build your file integrity baseline after a system is already compromised, then your “trusted” checksums will include malicious changes. Always establish baselines from a system you believe to be clean, and refresh them only after careful review.

Log Based Intrusion Detection

Linux logs a large amount of information about system activity, especially if system logging is properly configured. Authentication attempts, sudo usage, service failures, kernel messages, and many application events are written to files in /var/log or to the systemd journal.

A log based intrusion detection approach parses these logs automatically and looks for patterns that indicate attacks or misuse. Common examples include many failed login attempts from one IP, repeated sudo failures, sudden changes in service behavior, or known error messages that correspond to exploit attempts.

Tools like Fail2ban implement a narrow but very practical form of log based intrusion detection and reaction. They monitor specific log files, such as SSH logs, detect patterns like multiple failed logins within a short time window, and then apply firewall rules to block the offending IP address for some period.
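
The detection half of that idea fits in a few lines of Python. This sketch counts "Failed password" lines per source IP in an OpenSSH authentication log and reports addresses over a threshold; the log path and threshold are examples, the time window is ignored for simplicity, and the blocking step that Fail2ban adds is left out.

    import re
    from collections import Counter

    LOG_FILE = "/var/log/auth.log"   # Debian/Ubuntu; RHEL-style systems use /var/log/secure
    THRESHOLD = 5                    # example threshold for failed attempts

    # Matches OpenSSH lines such as: "Failed password for root from 203.0.113.7 port 22 ssh2"
    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    counts = Counter()
    with open(LOG_FILE) as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1

    for ip, count in counts.most_common():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed SSH logins from {ip}")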

At a more general level, host based intrusion detection systems like OSSEC or Wazuh can collect logs from many services, normalize them, and run rule sets that detect a wide range of suspicious patterns. They can also send logs to a central server for correlation, which becomes important when you administer many Linux machines.

For a beginner, the key habit is to be aware that logs are the primary raw material for intrusion detection, and to avoid disabling useful logging for the sake of convenience or disk space. Later, specialized tools can automate much of the analysis, but the underlying data must exist.

Network Focused Detection Around Linux

Network based intrusion detection systems observe traffic between machines rather than activity strictly inside one Linux host. They are still important to understand in a Linux context, because Linux servers both generate and receive network traffic, and Linux based tools are often used to implement NIDS.

Typical NIDS tools monitor traffic on one or more interfaces, decode network protocols, and compare payloads and behaviors against signatures or heuristics. When they detect suspicious activity, they log or alert. Examples include systems that recognize port scans, detect SQL injection attempts, or spot malware communication patterns.

Linux is a common platform for deploying such tools. A NIDS process can run on a dedicated Linux sensor connected to a mirror port on a switch or a network tap. Even without a full network IDS, you can use simpler tools on a Linux server to observe unexpected connections and traffic patterns. Commands to view active connections or listen to packets, for example, can help you confirm or investigate alerts generated elsewhere.
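
For example, established IPv4 TCP connections can be read directly from /proc/net/tcp, which is useful for a quick look alongside commands such as ss or netstat. A minimal Python sketch:

    def decode(hex_addr):
        """Decode an address:port pair from /proc/net/tcp (IPv4 is stored little-endian)."""
        ip_hex, port_hex = hex_addr.split(":")
        octets = [str(int(ip_hex[i:i + 2], 16)) for i in (6, 4, 2, 0)]
        return ".".join(octets), int(port_hex, 16)

    # IPv6 connections live in /proc/net/tcp6 and use a longer address format.
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local, remote, state = fields[1], fields[2], fields[3]
            if state == "01":  # 01 means ESTABLISHED
                l_ip, l_port = decode(local)
                r_ip, r_port = decode(remote)
                print(f"{l_ip}:{l_port} -> {r_ip}:{r_port}")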

Network side and host side intrusion detection support each other. A host based alert about a new suspicious process, combined with a network alert about connections from that host to a known malicious address, provides a clearer picture than either signal alone.

Host Based IDS Tools in Practice

On Linux, host based intrusion detection tools usually combine several techniques. They may implement file integrity monitoring, log analysis, rootkit checks, and basic policy enforcement. You install an agent on each Linux machine you want to protect, and sometimes you also run a central manager to collect and correlate data.

A typical workflow might look like this. After installing the HIDS package, you define which files and directories should be monitored for changes, which log files to parse, and which rules to apply. The system begins watching for suspicious events. When an event matches a rule, the HIDS writes an alert to its own logs, sends a notification, or forwards details to a central console.

In addition to pure detection, some host based IDS tools perform simple active responses. For instance, they may add a firewall rule to block an IP address that triggers a series of failed login alerts, or temporarily disable a user account that shows suspicious activity. These actions blur the line toward intrusion prevention, but the core functionality remains detection and alerting.
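
As a sketch of what such an active response looks like under the hood, assuming an iptables based firewall and root privileges, a tool might insert a drop rule for the offending address. Real HIDS and Fail2ban deployments manage this through their own action definitions rather than ad hoc scripts.

    import subprocess

    def block_ip(address):
        """Insert an iptables rule dropping all traffic from the given address.

        Requires root and an iptables based firewall; nftables setups would use
        an equivalent nft rule instead.
        """
        subprocess.run(
            ["iptables", "-I", "INPUT", "-s", address, "-j", "DROP"],
            check=True,
        )

    # Example: block an address that exceeded the failed-login threshold.
    # block_ip("203.0.113.7")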

For new Linux administrators, it is usually best to start with a conservative configuration. Monitor only clearly critical files, focus on important logs like authentication and sudo, and review the alerts manually. As you become more comfortable, you can expand the coverage and consider more automated responses.

False Positives, False Negatives, and Tuning

No intrusion detection system is perfect. Two types of errors are important to understand, because they influence how you configure and interpret your tools.

A false positive occurs when the system treats normal behavior as malicious. For example, an administrator legitimately runs many commands with sudo while performing maintenance, and the IDS raises an alert about possible privilege escalation. If false positives are too frequent, people will start to ignore alerts, which defeats the purpose of detection.

A false negative occurs when malicious activity happens but is not detected. This might happen because there is no matching signature, the baseline for anomaly detection was too broad, or logging was incomplete. False negatives are more dangerous, but you often only discover them after the fact.

Tuning an intrusion detection system is the process of adjusting rules, baselines, and thresholds to reduce false positives without allowing too many false negatives. On a Linux server, that might mean excluding a directory that changes frequently from file integrity monitoring, or slightly increasing the number of allowed failed SSH attempts before an alert triggers.

You should treat tuning as an ongoing task. Each time you receive an alert, ask whether it represents something you truly care about. If not, adjust the configuration to reduce noise. Over time, your intrusion detection system will become more accurate for your specific environment.

An intrusion detection system that nobody reads or maintains is functionally equivalent to having no intrusion detection at all. Reviewing alerts and tuning rules is part of basic system security practice.

Integrating Intrusion Detection into Daily Administration

For Linux administrators, intrusion detection should become a routine, not a one time project. At a basic level, this routine includes ensuring logging is enabled and preserved, reviewing key logs regularly, and setting up at least simple detection tools such as file integrity checks or log based blocking tools for exposed services.

You should consider where alerts go and who sees them. Storing alerts only on the local disk of the server that is under attack is risky, because an attacker might attempt to erase evidence. Forwarding alerts to another system, such as a central log server or mail address, increases the chance that you will notice and preserve evidence.
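
One simple way to get alerts off the machine, assuming a central syslog server is reachable over UDP port 514 (the hostname below is a placeholder), is the standard SysLogHandler from Python's logging module:

    import logging
    import logging.handlers

    # Placeholder hostname; point this at your central log server.
    handler = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
    handler.setFormatter(logging.Formatter("hids-demo: %(levelname)s %(message)s"))

    alerts = logging.getLogger("hids-demo")
    alerts.addHandler(handler)
    alerts.setLevel(logging.WARNING)

    # An alert raised locally is also recorded on the remote log server.
    alerts.warning("Modified system binary detected: /usr/bin/ssh")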

Intrusion detection is most valuable when combined with preparation. Decide in advance how you will respond to certain alerts. For instance, an alert about many failed SSH logins might lead you to examine logs more deeply and consider firewall changes. An alert about a modified system binary might trigger immediate isolation of the host from the network and a more formal incident response.

As you gain experience, you can grow from simple, host based tools on a few Linux machines to more coordinated setups that include centralized log analysis and correlation, as well as both host and network based detection. The basic concepts introduced here form the foundation for that progression and help ensure that security on a Linux system is not only about prevention, but also about awareness when things go wrong.
