
Point-to-point communication

Core ideas of point-to-point communication

In distributed-memory parallel programming, and MPI in particular, point-to-point communication means data is transferred explicitly between exactly two processes: one sender and one receiver. This is in contrast to collective operations, where many processes participate in one call.

Point-to-point communication is used for:

  - exchanging boundary (halo) data between neighboring subdomains,
  - distributing tasks and collecting results in manager-worker schemes,
  - building pipelines and other irregular communication patterns.

At a high level, each message has:

  - a buffer holding the data being sent or received,
  - a count and an MPI datatype describing the data layout,
  - a source and a destination rank,
  - a tag identifying the message,
  - a communicator within which the transfer takes place.

MPI provides matching pairs of calls – send on one process, receive on another – that together complete a point-to-point operation.

Basic MPI point-to-point operations

Blocking send and receive

The simplest operations are blocking:

  - MPI_Send returns once the send buffer can safely be reused (the data has been copied out or delivered),
  - MPI_Recv returns once the incoming message has been fully received into the buffer.

Typical usage pattern between two processes:

  - the sender fills a buffer and calls MPI_Send with the receiver's rank,
  - the receiver calls MPI_Recv with a matching source, tag, and communicator.

Example in C:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        double x = 3.14;
        MPI_Send(&x, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double y;
        MPI_Recv(&y, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}

Key points about blocking operations:

  - MPI_Recv does not return until a matching message has arrived.
  - MPI_Send is allowed, but not required, to buffer the message internally; whether it returns before the matching receive is posted depends on the implementation and the message size.
  - Code that only works when MPI_Send buffers (for example, two processes sending to each other before either receives) is incorrect and can deadlock.

Matching rules

For a message transfer to succeed correctly, the following must match between the send and receive:

  - the communicator,
  - the tag (unless the receiver uses MPI_ANY_TAG),
  - the source and destination ranks (unless the receiver uses MPI_ANY_SOURCE),
  - compatible datatypes, with the receive count at least as large as the number of elements actually sent.

If any of these are inconsistent, you may get incorrect results, deadlocks, or runtime errors.

Wildcards: `MPI_ANY_SOURCE` and `MPI_ANY_TAG`

Point-to-point communication allows flexible reception using wildcards:

  - MPI_ANY_SOURCE accepts a message from any rank in the communicator,
  - MPI_ANY_TAG accepts a message with any tag.

These can be specified in MPI_Recv when you don’t know exactly which process or tag will send the next message.

Example:

MPI_Status status;
MPI_Recv(buf, n, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
/* After the receive, you can inspect who sent the message and with which tag */
int src  = status.MPI_SOURCE;
int tag  = status.MPI_TAG;
int count;
MPI_Get_count(&status, MPI_DOUBLE, &count);

Wildcards are powerful for dynamic or irregular communication patterns, but:

  - they make message matching nondeterministic, which complicates debugging and reproducibility,
  - they can hide race conditions in which unrelated messages are matched by the wrong receive,
  - overly liberal use makes it harder to reason about which message a given receive will accept.

Common communication patterns

Point-to-point operations are used to implement a variety of patterns:

  - pairwise exchange between neighboring ranks,
  - ring shifts, where each rank sends to one neighbor and receives from the other,
  - halo (ghost-cell) exchange in domain decompositions,
  - manager-worker task distribution,
  - pipelines, in which data flows through a chain of ranks.

These patterns can be built from simple sequences of sends and receives. Higher-level MPI routines (collectives and neighborhood collectives) often internally rely on point-to-point operations.
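As an illustration of one such pattern, a ring shift, where each rank passes a value to its right neighbor, can be sketched with MPI_Sendrecv; the variable names here are illustrative:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;          /* destination neighbor */
    int left  = (rank - 1 + size) % size;   /* source neighbor */

    int sendval = rank, recvval = -1;
    /* Combined send+receive avoids the deadlock risk of two blocking calls */
    MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %d from rank %d\n", rank, recvval, left);
    MPI_Finalize();
    return 0;
}
```

Because MPI_Sendrecv pairs the two operations internally, every rank can issue the same call without worrying about ordering.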

Blocking vs non-blocking communication

Point-to-point APIs are often provided in both blocking and non-blocking variants. Knowing when to use which is central to performance and correctness in distributed-memory programs.

Blocking operations: simplicity first

Blocking operations (MPI_Send, MPI_Recv) are conceptually straightforward:

  - when the call returns, the buffer is safe to reuse or read,
  - there are no request handles to track and no separate completion step.

But they can easily cause:

  - deadlocks, when processes wait on each other in a cycle,
  - unnecessary serialization, when a process idles in a blocking call instead of computing,
  - poor overlap of communication and computation.

Non-blocking operations: overlapping communication and computation

Non-blocking operations, such as MPI_Isend and MPI_Irecv, allow you to initiate a communication and then continue doing useful work while the transfer proceeds in the background. Completion is checked later using MPI_Wait or MPI_Test.

Basic workflow:

  1. Post receives with MPI_Irecv
  2. Initiate sends with MPI_Isend
  3. Perform independent computation
  4. Complete communication with MPI_Wait / MPI_Waitall (or MPI_Test variants)

Example skeleton:

MPI_Request reqs[2];
/* Post non-blocking receive */
MPI_Irecv(recvbuf, n, MPI_DOUBLE, src, tag, MPI_COMM_WORLD, &reqs[0]);
/* Post non-blocking send */
MPI_Isend(sendbuf, n, MPI_DOUBLE, dst, tag, MPI_COMM_WORLD, &reqs[1]);
/* Do some useful computation here that does not touch sendbuf/recvbuf */
/* Ensure communication is completed before using the data */
MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

Important rules for non-blocking operations:

  - do not modify the send buffer, or read the receive buffer, until the corresponding request has completed,
  - every request returned by MPI_Isend/MPI_Irecv must eventually be completed with MPI_Wait, MPI_Waitall, or a successful MPI_Test,
  - the request handle and the buffer must remain valid until completion.

Message tags and logical channels

Message tags are integer labels attached to each point-to-point message. They act like separate "channels" on the same source/destination pair.

Applications commonly use them for:

  - distinguishing message types (e.g., data vs. control messages),
  - separating communication phases of an algorithm,
  - keeping logically distinct streams between the same pair of ranks from matching each other.

Design considerations:

  - keep the tag scheme small, centralized, and documented,
  - avoid ad-hoc tag values scattered through the code, which invites accidental collisions,
  - valid tags range from 0 up to the communicator's MPI_TAG_UB attribute, which the standard guarantees is at least 32767.
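As a sketch of tags acting as channels, a hypothetical worker loop might separate data messages from a termination signal; TAG_DATA, TAG_DONE, buf, and MAX_N are made-up names for this illustration:

```c
/* Hypothetical tag scheme for a manager-worker setup */
enum { TAG_DATA = 1, TAG_DONE = 2 };

/* Worker side: accept either kind of message from the manager (rank 0) */
MPI_Status status;
MPI_Recv(buf, MAX_N, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

if (status.MPI_TAG == TAG_DONE) {
    /* Manager signalled shutdown; no payload to process */
} else {
    /* status.MPI_TAG == TAG_DATA: process the received work item */
}
```

Branching on status.MPI_TAG lets one receive serve both channels while keeping the two message kinds logically distinct.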

Ordering guarantees and message matching

MPI guarantees certain ordering properties for point-to-point messages:

  - messages are non-overtaking: two messages sent from the same source to the same destination on the same communicator are matched by receives in the order they were sent, provided both could match the same receive.

More precisely:

  - if a receiver posts two receives that both match two incoming messages from the same sender, the first receive matches the first message sent,
  - no ordering is guaranteed between messages from different senders.

Practically:

  - you can rely on in-order matching within a single (source, destination, communicator) stream,
  - you cannot rely on any global ordering across senders: if a rank receives with MPI_ANY_SOURCE, messages from different ranks may arrive in either order.
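The non-overtaking rule can be illustrated between two ranks; this sketch assumes rank holds the result of MPI_Comm_rank:

```c
/* Rank 0 sends two messages with the same tag; the non-overtaking rule
   guarantees rank 1 matches them in the order they were sent. */
if (rank == 0) {
    int a = 1, b = 2;
    MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    MPI_Send(&b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
} else if (rank == 1) {
    int first, second;
    MPI_Recv(&first,  1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Recv(&second, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* first is 1 and second is 2, regardless of timing */
}
```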

Common pitfalls in point-to-point communication

Deadlocks from mismatched ordering

Deadlocks can easily arise when processes wait on each other in a cycle. For example, two processes both call blocking sends to each other before posting receives.

Simple ways to avoid this in point-to-point patterns:

  - order the calls so that paired processes complement each other (e.g., even ranks send first while odd ranks receive first),
  - use MPI_Sendrecv, which combines the send and receive safely,
  - use non-blocking operations and complete them with MPI_Waitall.

Deadlocks and detailed patterns are covered more deeply elsewhere, but they frequently originate in incorrect point-to-point logic.
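A minimal sketch of the even/odd ordering fix, assuming each rank has computed a partner rank and holds sendbuf/recvbuf of n doubles:

```c
/* Pairwise exchange without deadlock:
   even ranks send first, odd ranks receive first,
   so the two blocking calls always complement each other. */
if (rank % 2 == 0) {
    MPI_Send(sendbuf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else {
    MPI_Recv(recvbuf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Send(sendbuf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
}
```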

Buffer misuse with non-blocking operations

Misusing send or receive buffers with non-blocking calls is a frequent error:

  - overwriting a send buffer before the send request has completed,
  - reading a receive buffer before the receive has completed,
  - letting a buffer go out of scope, or freeing it, while a request is still pending.

Defensive habits:

  - keep buffers alive and untouched until the matching wait/test succeeds,
  - complete every request you create,
  - structure code as post, compute, wait, so the completion point is explicit.

Incorrect message sizes or datatypes

Point-to-point operations require agreement on the data layout:

  - the datatypes on both sides must describe the same data,
  - the receive count must be at least as large as the number of elements actually sent; a receive buffer that is too small causes a truncation error.

When message size may vary:

  - post the receive with an upper-bound count and query the actual size afterwards with MPI_Get_count, or
  - use MPI_Probe first to learn the incoming size, then allocate and receive.
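A sketch of the probe-then-receive approach; src and tag are assumed to be known, and stdlib.h is needed for malloc:

```c
MPI_Status status;
int count;

/* Block until a matching message is available, without receiving it yet */
MPI_Probe(src, tag, MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_DOUBLE, &count);

/* Allocate exactly the right amount, then receive */
double *buf = malloc(count * sizeof(double));
MPI_Recv(buf, count, MPI_DOUBLE, src, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
```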

Designing point-to-point communication in applications

Point-to-point communication design has a big impact on scalability and maintainability. When planning communication in a distributed-memory application:

  - prefer deterministic, well-documented communication patterns over ad-hoc wildcard matching,
  - post receives early, ideally before the matching sends are issued,
  - aggregate small messages where possible, since many tiny messages are dominated by latency,
  - keep the number of communication partners per rank bounded as the job grows,
  - isolate communication logic in a few routines rather than scattering sends and receives through the code.

Higher-level communication mechanisms (neighbor collectives, graph communicators, domain-specific libraries) are often built atop these same point-to-point concepts. A solid understanding of point-to-point behavior is therefore essential to use these tools effectively and to debug issues when they arise.
