
Amdahl’s Law

Intuition and Motivation

Amdahl’s Law answers a practical question:

If I speed up part of my program (e.g., by using more cores), how much faster can the entire program become?

Even if most of your code is parallelized, any remaining serial (non-parallel) portion limits the total speedup. This law is fundamental when deciding:

  * how many processors or cores to request,
  * whether further parallelization effort is worthwhile,
  * where optimization effort will pay off most.

It is primarily a strong scaling concept (fixed total problem size, increasing resources).

Basic Formulation

Assume:

  * a fraction $f$ of the total work is inherently serial and cannot be parallelized,
  * the remaining fraction $1-f$ parallelizes perfectly across $N$ processors.

Normalize the original execution time (on 1 processor) to $T_1 = 1$. Then:

Total time on $N$ processors:

$$
T_N = f + \frac{1-f}{N}
$$

The speedup $S(N)$ is defined as:

$$
S(N) = \frac{T_1}{T_N} = \frac{1}{f + \frac{1-f}{N}}
$$

As $N \to \infty$:

$$
\lim_{N \to \infty} S(N) = \frac{1}{f}
$$

So the maximum possible speedup is bounded by the inverse of the serial fraction.
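As a quick sanity check, the formulas above can be wrapped in a few lines of Python (a minimal sketch; the function name `amdahl_speedup` is our own):

```python
def amdahl_speedup(f, n):
    """Speedup on n processors, given serial fraction f (0 <= f <= 1)."""
    return 1.0 / (f + (1.0 - f) / n)

# The speedup approaches the ceiling 1/f as n grows:
print(amdahl_speedup(0.05, 16))     # ~9.14
print(amdahl_speedup(0.05, 10**6))  # just under 20
```

Note that even a million processors cannot push the speedup past $1/f$.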

Interpreting the Serial Fraction

The serial fraction $f$ includes everything that cannot run in parallel, for example:

  * initialization and finalization,
  * sequential I/O,
  * synchronization and communication phases where only one process makes progress,
  * inherently sequential algorithm steps.

Example interpretations:

  * $f = 0.10$ caps the speedup at $1/0.10 = 10\times$, no matter how many processors you add.
  * $f = 0.01$ still caps it at $100\times$.

This illustrates how even a small serial portion can drastically cap scalability.
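The relationship between $f$ and the speedup ceiling $1/f$ can be tabulated quickly (illustrative values of $f$):

```python
for f in (0.001, 0.01, 0.05, 0.10, 0.25):
    # The asymptotic speedup limit is simply 1/f.
    print(f"f = {f:5.3f}  ->  max speedup = {1/f:.0f}x")
```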

Numerical Examples

Assume $f = 0.05$ (5% serial, 95% parallel):

$$
T_4 = 0.05 + \frac{0.95}{4} = 0.05 + 0.2375 = 0.2875
$$

$$
S(4) = \frac{1}{0.2875} \approx 3.48
$$

$$
T_{16} = 0.05 + \frac{0.95}{16} = 0.05 + 0.059375 = 0.109375
$$

$$
S(16) = \frac{1}{0.109375} \approx 9.14
$$

$$
S_{\max} = \frac{1}{0.05} = 20
$$

Notice how going from 4 to 16 processors gives a noticeable improvement, but you are already far from the theoretical max (20×), and gains diminish as you increase $N$.
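The arithmetic above is easy to reproduce with a short script (same values as in the example):

```python
f = 0.05
for n in (4, 16):
    t_n = f + (1 - f) / n          # normalized runtime on n processors
    print(f"N={n:2d}: T_N = {t_n:.6f}, S = {1 / t_n:.2f}")
print(f"S_max = {1 / f:.0f}")      # theoretical ceiling 1/f
```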

Diminishing Returns

Amdahl’s Law explains why speedup flattens as you add processors.

Rewrite the speedup formula for clarity:

$$
S(N) = \frac{1}{f + \frac{1-f}{N}} =
\frac{N}{Nf + (1-f)}
$$

Two key observations:

  1. For small $N$, the term $\frac{1-f}{N}$ is still large, so adding processors significantly reduces runtime.
  2. For large $N$, the serial term $f$ dominates, and the $\frac{1-f}{N}$ term becomes negligible. Extra processors mostly sit idle during the serial sections.
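Both regimes are visible in a quick table of $S(N)$ for successive doublings (with an illustrative $f = 0.05$):

```python
def speedup(f, n):
    # Equivalent form: S(N) = N / (N f + (1 - f))
    return n / (n * f + (1 - f))

f = 0.05
prev = speedup(f, 1)
for n in (2, 4, 8, 16, 32, 64, 128):
    s = speedup(f, n)
    print(f"N={n:4d}  S={s:6.2f}  x{s / prev:.2f} over previous doubling")
    prev = s
```

Early doublings nearly double the speedup; the last ones add little.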

In practice, this means:

  * doubling the processor count yields less than double the speedup, and the shortfall grows with $N$,
  * beyond some core count, additional hardware is mostly wasted on this problem size.

Practical Use in HPC Planning

Amdahl’s Law is often used to:

1. Estimate Realistic Speedup

Given an estimated serial fraction $f$ and planned processor count $N$, you can estimate the expected speedup directly:

$$
S(N) = \frac{1}{f + \frac{1-f}{N}}
$$

This helps manage expectations and justify resource requests.

2. Decide Where to Optimize

For maximizing speedup, reducing $f$ is often more impactful than further parallelizing already parallel code.

For example:

  * With $f = 0.10$, no amount of hardware can exceed a $10\times$ speedup.
  * Reducing $f$ to $0.05$ doubles the ceiling to $20\times$, a gain that no extra cores could deliver at $f = 0.10$.

Conclusion: Parallelizing or refactoring the serial bottlenecks can be more beneficial than throwing more hardware at the problem.
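To see this numerically, compare doubling the core count against halving the serial fraction (illustrative values, starting from $f = 0.10$):

```python
def speedup(f, n):
    return 1 / (f + (1 - f) / n)

# Doubling the cores at f = 0.10:
print(speedup(0.10, 64))    # ~8.8
print(speedup(0.10, 128))   # ~9.3

# Halving the serial fraction on the same 64 cores:
print(speedup(0.05, 64))    # ~15.4
```

Halving $f$ here buys far more than doubling the hardware.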

3. Choose a Core Count

Using the law, you can find a rough “sweet spot” where additional cores give only marginal speedup.

For instance, you might stop scaling when each doubling of cores yields less than, say, 10% additional speedup. You can compute the relative gain of a doubling:

$$
\frac{S(2N)}{S(N)} - 1
$$

If the relative gain is small, asking for more cores may not be worth the queue time and resource usage.
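A rough sweet-spot search along these lines might look like the following sketch (the 10% threshold and the function names are our choices):

```python
def speedup(f, n):
    return 1 / (f + (1 - f) / n)

def sweet_spot(f, threshold=0.10, max_n=2**20):
    """Return the largest core count before a doubling gains < `threshold`."""
    n = 1
    while n < max_n:
        gain = speedup(f, 2 * n) / speedup(f, n) - 1
        if gain < threshold:
            return n
        n *= 2
    return n

print(sweet_spot(0.05))   # stops at 128 cores for f = 0.05
```

The larger the serial fraction, the earlier the search stops.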

Extensions and Limitations

Amdahl’s Law provides a best-case picture under simplifying assumptions:

  * the total problem size is fixed (strong scaling),
  * the parallel portion scales perfectly with no overhead,
  * the serial fraction $f$ is constant and independent of $N$.

In reality:

  * communication and synchronization overheads typically grow with $N$,
  * load imbalance leaves some processors idle,
  * memory and cache effects can help or hurt performance.

Nevertheless, the law is still extremely useful as:

  * a quick upper bound on achievable speedup,
  * a sanity check when planning parallelization effort and resource requests.

(Weak scaling and more optimistic scaling under growing problem sizes are treated using Gustafson’s Law in a separate chapter.)

Key Takeaways

  * The serial fraction $f$ bounds the maximum speedup at $1/f$, regardless of processor count.
  * Amdahl’s Law is a strong scaling model: fixed problem size, growing resources.
  * Speedup shows diminishing returns as $N$ grows; beyond some point, extra cores are largely wasted.
  * Reducing the serial fraction is often more effective than adding more hardware.
