Understanding Probability
Probability is a way to describe and measure uncertainty. It answers questions like:
- How likely is it to rain tomorrow?
- What are the chances of rolling a 6 on a die?
- How probable is it that a randomly chosen person has a certain characteristic?
In this chapter, we build the basic ideas that all later probability topics depend on.
Outcomes and Experiments
A probability situation begins with an experiment: a process whose individual result is uncertain, even though the set of possible results is well defined.
Examples of experiments:
- Tossing a coin once.
- Rolling a six-sided die.
- Drawing one card from a standard deck of 52 cards.
- Choosing a person at random from a group.
- Measuring the time (in seconds) until a light changes from red to green.
The possible results of an experiment are called outcomes.
- Toss a coin: outcomes are “Heads” or “Tails”.
- Roll a die: outcomes are $1,2,3,4,5,6$.
- Draw a card: outcomes are the 52 specific cards.
Sample Space and Events (Informal View)
The collection of all possible outcomes of an experiment is called the sample space.
- For a coin toss, a convenient sample space is
$$S = \{\text{Heads}, \text{Tails}\}.$$
- For one roll of a fair six-sided die,
$$S = \{1,2,3,4,5,6\}.$$
An event is a collection (set) of outcomes that we care about. We say “the event occurs” if the outcome of the experiment is one of the outcomes in that set.
Examples for a die roll:
- Event “roll an even number”:
$$E = \{2,4,6\}.$$
- Event “roll a number greater than 3”:
$$G = \{4,5,6\}.$$
- Event “roll a 1”:
$$A = \{1\}.$$
Events can contain one outcome, several outcomes, all outcomes, or no outcomes at all.
Basic Probability of an Event
To assign a probability, we associate each event with a number between $0$ and $1$ (inclusive):
- $0$ means “impossible”: the event cannot happen.
- $1$ means “certain”: the event will definitely happen.
- Numbers between $0$ and $1$ measure intermediate likelihoods.
Often, especially with simple games of chance, we start with the idea of equally likely outcomes.
If all outcomes in the sample space $S$ are equally likely, and an event $E$ contains some of these outcomes, then the probability of $E$ is
$$
P(E) = \frac{\text{number of outcomes in } E}{\text{number of outcomes in } S}.
$$
This is the fundamental counting-based probability formula for equally likely cases.
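The counting formula can be sketched directly in code. Below is a minimal illustration, assuming we model the sample space and the event as Python sets (the helper name `probability` and the variable names are illustrative, not part of the text):

```python
from fractions import Fraction

def probability(event, sample_space):
    """Counting-based probability for equally likely outcomes:
    P(E) = (number of outcomes in E) / (number of outcomes in S)."""
    return Fraction(len(event & sample_space), len(sample_space))

# One roll of a fair six-sided die.
S = {1, 2, 3, 4, 5, 6}
evens = {2, 4, 6}

print(probability(evens, S))  # 1/2
print(probability({4}, S))    # 1/6
```

Using `Fraction` keeps the results as exact fractions such as $\frac{1}{2}$, matching how the formula is usually written by hand.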
Examples with Equally Likely Outcomes
- Rolling a fair six-sided die
Sample space:
$$S = \{1,2,3,4,5,6\}.$$
All 6 outcomes are equally likely.
- Event: “roll a 4”.
$E = \{4\}$ has 1 outcome, so
$$
P(\text{roll a 4}) = \frac{1}{6}.
$$
- Event: “roll an even number”.
$E = \{2,4,6\}$ has 3 outcomes, so
$$
P(\text{even}) = \frac{3}{6} = \frac{1}{2}.
$$
- Drawing a card from a standard 52-card deck
Suppose we consider the event “draw a heart”.
There are 13 hearts in the deck, and 52 total cards:
$$
P(\text{heart}) = \frac{13}{52} = \frac{1}{4}.
$$
For “draw an Ace”, there are 4 Aces:
$$
P(\text{Ace}) = \frac{4}{52} = \frac{1}{13}.
$$
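The card probabilities above can be checked by enumerating the deck explicitly. This is a sketch in which each card is modeled as a (rank, suit) pair; the rank and suit labels are illustrative choices:

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]

# Each card is a (rank, suit) pair; 13 * 4 = 52 equally likely outcomes.
deck = list(product(ranks, suits))

hearts = [card for card in deck if card[1] == "hearts"]
aces = [card for card in deck if card[0] == "A"]

print(Fraction(len(hearts), len(deck)))  # 1/4
print(Fraction(len(aces), len(deck)))    # 1/13
```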
- Tossing a fair coin once
Sample space:
$$S = \{\text{Heads}, \text{Tails}\}.$$
Each outcome is equally likely.
- Event: “Heads”.
$P(\text{Heads}) = \frac{1}{2}$.
- Event: “Tails”.
$P(\text{Tails}) = \frac{1}{2}$.
Probabilities as Fractions, Decimals, and Percentages
Probabilities can be expressed in several equivalent ways:
- As a fraction: $\frac{1}{4}$
- As a decimal: $0.25$
- As a percentage: $25\%$
All three mean the same probability.
Conversions:
- From fraction to percentage: multiply by $100\%$.
Example: $\frac{3}{5} = 0.6 = 60\%$.
- From percentage to decimal: divide by 100.
Example: $75\% = 0.75$.
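Both conversions are plain arithmetic, as a quick sketch shows:

```python
from fractions import Fraction

p = Fraction(3, 5)       # probability as a fraction
decimal = float(p)       # fraction to decimal: 0.6
percent = decimal * 100  # decimal to percentage: 60

print(p, decimal, f"{percent:g}%")  # 3/5 0.6 60%

# Percentage back to decimal: divide by 100.
print(75 / 100)  # 0.75
```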
In many real-world contexts (like weather forecasts or surveys), probabilities are commonly reported as percentages.
Certain and Impossible Events
Two special extreme cases:
- An impossible event has probability $0$.
Example: Rolling a $7$ on a standard six-sided die:
$$
P(\text{roll a 7}) = 0.
$$
- A certain event has probability $1$.
Example: When you roll a six-sided die, getting a number from 1 to 6 is certain:
$$
P(\text{number from 1 to 6}) = 1.
$$
These extremes give natural boundaries for any probability:
$$
0 \le P(E) \le 1.
$$
No probability can be negative or greater than $1$.
Complement of an Event
Many probability questions are easier if we look at what does not happen.
For any event $E$, the complement of $E$ (often written $E^\text{c}$ or “not $E$”) is the event that $E$ does not occur. It consists of all outcomes in the sample space that are not in $E$.
Key relationship:
$$
P(E) + P(\text{not }E) = 1.
$$
Equivalently,
$$
P(\text{not }E) = 1 - P(E).
$$
Examples:
- Rolling a die:
- Let $E$ be the event “roll an even number” ($\{2,4,6\}$).
- Then “not $E$” is “roll an odd number” ($\{1,3,5\}$).
We know
$$
P(\text{even}) = \frac{3}{6} = \frac{1}{2},
$$
so
$$
P(\text{odd}) = 1 - \frac{1}{2} = \frac{1}{2}.
$$
- Weather forecast:
- If $P(\text{rain tomorrow}) = 0.3$, then
$$
P(\text{no rain tomorrow}) = 1 - 0.3 = 0.7.
$$
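The complement rule is one line of arithmetic, so it translates into an equally short helper. This is a minimal sketch (the function name `complement` is an illustrative choice), assuming probabilities are given as plain numbers between $0$ and $1$:

```python
def complement(p):
    """P(not E) = 1 - P(E); valid for any probability 0 <= p <= 1."""
    if not 0 <= p <= 1:
        raise ValueError("a probability must lie between 0 and 1")
    return 1 - p

print(complement(0.3))  # 0.7  (no rain tomorrow, given P(rain) = 0.3)
print(complement(0.5))  # 0.5  (odd roll, given P(even) = 1/2)
```

The range check mirrors the boundary rule $0 \le P(E) \le 1$ from the previous section.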
Using complements is a powerful basic tool, and it becomes particularly helpful in more complicated situations.
Relative Frequency and Experimental Probability
The formula based on counting assumes that all outcomes are equally likely and that we know exactly how many there are. In the real world, we often do not know true probabilities ahead of time.
Instead, we can estimate probabilities by repeating an experiment many times and recording how often an event occurs. This leads to the idea of relative frequency.
If an experiment is repeated $n$ times and an event $E$ happens $k$ times, the experimental (or empirical) probability of $E$ is
$$
P_\text{exp}(E) = \frac{k}{n}.
$$
This is also called the relative frequency of $E$.
Examples:
- If you toss a coin $100$ times and get Heads $47$ times, the experimental probability of Heads is
$$
P_\text{exp}(\text{Heads}) = \frac{47}{100} = 0.47 = 47\%.
$$
- If $200$ people are asked whether they like a new product, and $150$ say “yes”, then
$$
P_\text{exp}(\text{yes}) = \frac{150}{200} = 0.75 = 75\%.
$$
As the number of trials becomes large, experimental probabilities often get closer to the “true” underlying probability (when one exists). This idea is deepened later in probability and statistics, but for now, you should be comfortable seeing probability both as:
- a theoretical value computed from a model, and
- an empirical value estimated from data.
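The convergence idea can be illustrated with a small coin-toss simulation. This is a sketch using Python's `random` module with a fixed seed so the run is reproducible; the function name is an illustrative choice:

```python
import random

random.seed(0)  # fixed seed for a reproducible run

def experimental_probability(trials):
    """Toss a fair coin `trials` times and return the relative
    frequency of Heads: P_exp(Heads) = k / n."""
    heads = sum(1 for _ in range(trials) if random.random() < 0.5)
    return heads / trials

# Larger trial counts tend to give estimates closer to the true value 0.5.
for n in (10, 100, 10_000):
    print(n, experimental_probability(n))
```

The estimates fluctuate from run to run, but with more trials they typically settle nearer to $\frac{1}{2}$, which is the convergence behavior described above.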
Simple Probability Rules
At this stage, it is useful to note a few basic rules that follow directly from the ideas above:
- Every event $E$ satisfies $0 \le P(E) \le 1$.
- The probability of the entire sample space is $1$:
$$
P(S) = 1.
$$
- The probability of the empty event (no outcomes at all) is $0$:
$$
P(\emptyset) = 0.
$$
- For any event $E$,
$$
P(\text{not }E) = 1 - P(E).
$$
More detailed rules about combining events using logical ideas like “and” and “or” are developed in later chapters, building on the basics introduced here.
Interpreting Probability in Everyday Contexts
Finally, basic probability appears constantly in daily life in different forms:
- “There is a $20\%$ chance of rain.”
- “This medicine has a $5\%$ risk of a side effect.”
- “The survey suggests that $40\%$ of people prefer option A.”
Each of these statements is expressing a probability. To interpret them, you can think in terms of:
- Frequencies: “Out of 100 similar days, it might rain on about 20.”
- Long-run behavior: “If we repeated this situation many times, this outcome would occur about that fraction of the time.”
Being comfortable with basic probability ideas helps you make more informed decisions whenever uncertainty is involved.