Overview
In a multitasking operating system like Linux, processes must communicate and coordinate with each other and with the kernel. They need ways to notify each other of events, exchange data, and synchronize work. Two central mechanisms for this are signals and interprocess communication, usually shortened to IPC.
This chapter introduces signals as Linux’s lightweight event notification system, then explores the main IPC mechanisms you will encounter in practice. The focus is on what each mechanism does, when it is appropriate to use it, and how they differ from each other, not on deep programming details that belong in a programming course.
Signals: Asynchronous Notifications
Signals provide a simple way to send an asynchronous notification to a process or a group of processes. A signal does not carry a data payload in the usual sense. Instead, it delivers a small integer code that means “this kind of event just happened.”
Common examples include a user pressing Ctrl+C in a terminal, a child process terminating, or the kernel notifying a process that it touched invalid memory.
From the point of view of a process, a signal is like a software interrupt. The process cannot predict exactly when it will arrive. The signal can interrupt whatever the process is doing, and the process can then handle it in some way.
Important rule: Signals are not a general data channel. They are primarily for notifications and control events, not for transmitting structured or large amounts of data between processes.
Standard Signal Types
Linux supports a set of standard signals, many inherited from traditional Unix. Each has a symbolic name beginning with SIG and a numeric value.
Some of the most important ones are:
SIGINT is sent when you press Ctrl+C in a terminal. By default, it asks the process to terminate. Well behaved interactive programs can catch it and clean up before exiting.
SIGTERM is the standard polite request to terminate a process. Tools like kill send SIGTERM by default. A process can catch this signal to shut down gracefully, for example by saving state and closing files.
SIGKILL is the non-negotiable termination signal. The kernel immediately destroys the process. It cannot be blocked, ignored, or handled. Use it only when a process will not respond to softer signals.
SIGSTOP and SIGTSTP stop a process. Ctrl+Z in a terminal usually sends SIGTSTP. SIGSTOP cannot be caught or ignored. The shell can later resume a stopped process with SIGCONT.
SIGSEGV means a segmentation fault. The process attempted to access invalid memory. This normally indicates a serious bug. The kernel typically kills the process and may dump core for debugging.
SIGCHLD is sent to a parent process when a child process stops or terminates. Programs that spawn worker processes rely on this signal to know when children have finished.
SIGHUP historically meant that a terminal line was disconnected. In modern use it often means “reload your configuration” for daemons and services.
Process Responses to Signals
Each signal has a default action defined by the system. Default actions include terminating the process, stopping it, continuing it, or ignoring the signal.
A process can usually change how it handles most signals through signal handlers. A signal handler is a function that runs when a specific signal arrives. In C programs, handlers are registered with APIs like signal() or sigaction(). Higher-level languages provide their own abstractions, but the concept is the same.
For example, a server might install a handler for SIGTERM that logs a shutdown message, closes network sockets, and only then exits. A shell installs a handler for SIGCHLD so that it can reap finished child processes and update job status.
There are two important exceptions. SIGKILL and SIGSTOP cannot be caught, blocked, or ignored. The kernel reserves them as absolute control mechanisms.
Important rule: Do not perform arbitrary work inside signal handlers. The operations allowed inside a handler are restricted because the signal interrupts normal control flow. Complex logic, or calls to functions that are not “async-signal-safe,” can corrupt program state.
Sending Signals from the Shell
For system administrators and users, the typical interaction with signals is through shell tools. The main one is kill, which, despite its name, is a general signal sender.
To send the default SIGTERM to a process with PID 1234, you use:
kill 1234
To send a specific signal, you specify the name or number:
kill -SIGKILL 1234
kill -9 1234
Signals can be sent to process groups, which is especially useful for shell job control. When you type Ctrl+C, the kernel’s terminal driver sends SIGINT to the entire foreground process group, which interrupts every command in that pipeline at once.
Commands like pkill and killall let you send signals to processes by name rather than by PID. For example, pkill -HUP nginx asks all nginx processes to reload their configuration.
Real-time Signals
Linux provides a range of “real-time” signals, named SIGRTMIN through SIGRTMAX, with numeric values higher than the standard ones. Real-time signals are delivered in the order they were sent, and multiple instances of the same signal queue up, whereas multiple pending instances of a traditional signal are collapsed into one.
Real-time signals are mostly used by specialized or real-time applications, and they can be associated with small integer or pointer values. They provide more flexibility than standard signals, but they are still not a substitute for full data channels like pipes or sockets.
IPC: Interprocess Communication
Signals handle notifications, but they are not suited for sending data or coordinating complex work. For that, Linux provides a collection of IPC mechanisms.
Several design dimensions distinguish different IPC mechanisms. Some are unidirectional, others bidirectional. Some preserve message boundaries, others present a continuous byte stream. Some work only between related processes with a common ancestor, others work across unrelated processes or even across the network.
Choosing the right IPC method depends on performance needs, data patterns, security boundaries, and how complex the communication must be.
Pipes
Pipes are one of the oldest and simplest IPC tools in Unix-like systems. They allow one process to send a stream of bytes to another. From the shell user’s perspective, you use pipes with the | operator in commands like:
ls | grep txt
Here, the shell creates a pipe between ls and grep. ls writes its output to the pipe, and grep reads it from the other side. The processes do not need to know they are using a pipe instead of a terminal. They just read from standard input and write to standard output.
At the kernel level, an anonymous pipe is a unidirectional data channel that exists only as long as processes are using it. It is usually created by a parent that then forks children, which inherit the pipe endpoints. This makes pipes ideal for simple data flows within a pipeline of related processes.
Named pipes, also called FIFOs, extend the idea by giving a pipe a name in the filesystem. You create one with a command like:
mkfifo /tmp/myfifo
Any process can open /tmp/myfifo for reading or writing, even if it is not related to the creator. Named pipes still behave as byte streams, but they enable communication based solely on a path.
Important rule: Pipes are stream oriented. They do not preserve message boundaries. If you need discrete messages or records, you must impose your own framing inside the byte stream.
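The parent-creates, child-inherits pattern described above can be sketched in a few lines of C. This is a minimal illustration, not production code; demo_pipe is a name invented for the example, and error handling is kept to the essentials.

```c
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Parent writes a message into the pipe; the forked child reads it
 * and exits with the number of bytes received. Returns that count. */
static int demo_pipe(void) {
    int fds[2];                   /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {               /* child: read from the pipe */
        char buf[64];
        close(fds[1]);            /* close the unused write end */
        ssize_t n = read(fds[0], buf, sizeof buf);
        close(fds[0]);
        _exit(n > 0 ? (int)n : 0);
    }
    close(fds[0]);                /* parent: close the unused read end */
    const char *msg = "hello";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);                /* closing delivers EOF to the reader */
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Closing the unused ends matters: a reader only sees end-of-file once every write end of the pipe has been closed.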
Message Queues
Message queues provide a way for processes to send and receive discrete messages rather than raw byte streams. Each message queue stores messages as separate units. A receiver reads whole messages, not arbitrary bytes.
Classical System V message queues are identified by numeric keys and managed through specific system calls. They allow queue-based semantics and basic prioritization of messages. POSIX message queues are a more modern variant that uses pathname-like identifiers and provides features like notification when new messages arrive.
In either case, message queues are useful when you want producers and consumers to exchange structured items, and you want the kernel to handle storage and delivery order. They are also good when you need to decouple the sender and receiver in time, because the queue can buffer messages until the receiver is ready.
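The whole-message semantics can be sketched with the System V interface. The struct layout (a long mtype followed by the payload) is what the API requires; the job_msg name and the message text are invented for the example, and the example assumes System V IPC is available on the system.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct job_msg {
    long mtype;        /* required first field: message type, > 0 */
    char mtext[32];    /* payload: one discrete message */
};

/* Create a private queue, send one message, read it back whole,
 * and remove the queue. Returns 0 when the round trip matches. */
static int demo_msgqueue(void) {
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid == -1)
        return -1;
    struct job_msg out = { .mtype = 1 };
    strcpy(out.mtext, "resize image 17");
    msgsnd(qid, &out, sizeof out.mtext, 0);

    struct job_msg in;
    ssize_t n = msgrcv(qid, &in, sizeof in.mtext, 1, 0);
    msgctl(qid, IPC_RMID, NULL);   /* destroy the queue */
    return (n >= 0 && strcmp(in.mtext, out.mtext) == 0) ? 0 : -1;
}
```

Note that msgrcv returns a complete message or nothing; a receiver can never be handed half of one, which is exactly the boundary-preserving behavior pipes lack.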
From an administrator’s point of view, you might encounter tools such as ipcs and ipcrm for inspecting and removing System V IPC objects like message queues, semaphores, and shared memory segments.
Shared Memory
Shared memory is one of the most efficient IPC methods available. Instead of sending data between processes, the kernel maps the same memory region into the address space of multiple processes. Each process can read and write that region as if it were its own memory.
Because there is no copying once the mapping is established, shared memory is very fast and suitable for high volume or low latency data exchange, such as between a complex application and a helper process.
System V shared memory and POSIX shared memory are two main interfaces for this on Linux. System V shared memory segments are identified by keys and are managed with calls like shmget and shmat. POSIX shared memory objects use names that appear under paths like /dev/shm.
The speed of shared memory comes with a cost. When multiple processes can read and write the same region, you must coordinate access to avoid data races and corruption. Shared memory alone does not provide mutual exclusion or ordering guarantees.
Important rule: Shared memory always requires synchronization. Combine it with locks or other synchronization primitives to ensure that only one process modifies a given piece of data at a time.
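A minimal System V sketch shows the mapping idea, using the shmget and shmat calls mentioned above. Here the parent sidesteps the synchronization problem crudely by waiting for the child to exit before reading; demo_shm is an invented name.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent and child share one segment: the child writes a string,
 * the parent reads it after the child exits. Returns 0 on match. */
static int demo_shm(void) {
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (id == -1)
        return -1;
    char *mem = shmat(id, NULL, 0);   /* map segment into our space */
    if (mem == (void *)-1)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                   /* child inherits the mapping */
        strcpy(mem, "written by child");
        _exit(0);
    }
    waitpid(pid, NULL, 0);            /* crude sync: wait for the child */
    int ok = strcmp(mem, "written by child") == 0;
    shmdt(mem);
    shmctl(id, IPC_RMID, NULL);       /* mark the segment for removal */
    return ok ? 0 : -1;
}
```

Waiting for the writer to exit is obviously not a general solution; long-lived cooperating processes need the semaphores discussed next.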
Semaphores and Synchronization Primitives
While many IPC mechanisms move data, others coordinate access and ordering. Semaphores are one of the most traditional synchronization tools in Unix-like systems.
A semaphore, conceptually, is a counter with two operations. One operation decreases the counter and may block if the counter is zero. The other increases the counter and can wake a blocked process. By guarding critical sections of code with semaphore operations, processes can ensure exclusive or limited access to shared resources.
Linux provides both System V semaphores and POSIX semaphores. System V semaphores can control access across unrelated processes using numeric identifiers. POSIX semaphores can use named objects or shared memory to coordinate among processes or threads.
Other synchronization primitives exist, such as mutexes and condition variables. These are typically used inside multi-threaded programs, but with appropriate shared memory, they can be extended to multiple processes as well.
Unix Domain Sockets
Sockets are often associated with network communication, but they can also be used for local IPC on the same machine. Unix domain sockets are socket endpoints that use the filesystem namespace rather than IP addresses.
A Unix domain socket is created at a path like /run/service.sock. Any process that can access that file can connect to it and exchange data with the server process that owns the listening socket.
Unix domain sockets support both stream semantics similar to TCP and datagram semantics similar to UDP. They are widely used in Linux for interactions between daemons and helper tools. For example, many system services expose management or control interfaces over Unix domain sockets instead of open network ports. This improves security, since file permissions can restrict access.
One powerful feature of Unix domain sockets is the ability to pass file descriptors between processes. This lets one process give another an already open file or socket, which is crucial in some service architectures, such as those involving privilege separation.
Compared with pipes, Unix domain sockets are more flexible for complex bidirectional conversations and do not require a parent-child relationship. Compared with network sockets, they avoid network protocol overhead and stay within the local kernel.
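For related processes there is an even shorter route than bind-and-connect: socketpair() returns two already-connected Unix domain stream sockets, giving the bidirectional channel that a single pipe cannot. A minimal sketch, with demo_socketpair as an invented name:

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* socketpair() yields two connected Unix domain stream sockets;
 * data written on either end can be read from the other. Returns
 * the number of bytes that crossed the pair. */
static int demo_socketpair(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return -1;
    const char *req = "status?";
    write(sv[0], req, strlen(req));    /* one end sends a request */
    char buf[64];
    ssize_t n = read(sv[1], buf, sizeof buf);  /* other end receives */
    close(sv[0]);
    close(sv[1]);
    return (int)n;
}
```

A typical use is a parent keeping one end and handing the other to a forked child, much like a pipe but full duplex and capable of carrying file descriptors.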
Network Sockets and IPC Across Hosts
While this chapter focuses on local IPC inside one Linux system, you should recognize that network sockets extend the same concept over networks. Two processes on different machines can communicate through TCP or UDP sockets identified by IP addresses and port numbers.
On Linux, sockets form a unified abstraction. Whether you use localhost with TCP, a remote host, or a Unix domain socket file, the same basic APIs apply. Administrators configure and monitor such communications with tools covered elsewhere, such as ss, netstat, ip, and firewall systems.
Using network sockets as an IPC mechanism has obvious advantages. It allows separate machines to collaborate and lets you design distributed systems. The trade off is higher latency and more complexity.
Signals Compared with Other IPC
Signals and IPC mechanisms serve different purposes, although they sometimes work together. A process might use shared memory or sockets for main data exchange but rely on signals for control events like “shut down now” or “configuration changed.”
Signals are asynchronous, lightweight, and limited to expressing a small integer code. Their power is in control, not in data carriage. Pipes, message queues, shared memory, and sockets carry the data itself.
One way to think about it is that signals answer the question “what just happened,” whereas IPC data channels answer “what information do we need to share.” In system design, you often use both to build robust, coordinated software components.
Understanding these building blocks helps you interpret how complex Linux services are structured, how daemons communicate with each other and with clients, and how tools like systemd, databases, and web servers coordinate their work inside the operating system.