Consider two distinct possible outcomes, $X$ and $Y$, of an observation made on the system $S$, with probabilities of occurrence $P(X)$ and $P(Y)$, respectively. Let us determine the probability of obtaining the outcome $X$ or $Y$, which we shall denote $P(X \mid Y)$. From the basic definition of probability,
$$
P(X \mid Y) = \lim_{\Omega(\Sigma)\rightarrow\infty} \frac{\Omega(X \mid Y)}{\Omega(\Sigma)}, \qquad (2)
$$
where $\Omega(X \mid Y)$ is the number of systems in the ensemble which exhibit either the outcome $X$ or the outcome $Y$. Now,
$$
\Omega(X \mid Y) = \Omega(X) + \Omega(Y) \qquad (3)
$$
if the outcomes $X$ and $Y$ are mutually exclusive (which must be the case if they are two distinct outcomes). Thus,
$$
P(X \mid Y) = P(X) + P(Y). \qquad (4)
$$
So, the probability of the outcome $X$ or the outcome $Y$ is just the sum of the individual probabilities of $X$ and $Y$. For instance, with a six-sided die the probability of throwing any particular number (one to six) is $1/6$, because all of the possible outcomes are considered to be equally likely. It follows, from what has just been said, that the probability of throwing either a one or a two is simply $1/6 + 1/6$, which equals $1/3$.
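As a quick cross-check of the sum rule, the die example can be verified by direct enumeration. The following sketch is not part of the original notes; the outcome labels and the use of Python's fractions module are illustrative assumptions.

```python
from fractions import Fraction

# Equally likely outcomes of a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
p = {x: Fraction(1, len(outcomes)) for x in outcomes}  # P(X) = 1/6 for each face

# Sum rule (4) for mutually exclusive outcomes: P(1 | 2) = P(1) + P(2).
p_one_or_two = p[1] + p[2]
print(p_one_or_two)  # 1/3

# The same probability obtained by counting favourable outcomes directly.
favourable = [x for x in outcomes if x in (1, 2)]
print(Fraction(len(favourable), len(outcomes)))  # 1/3
```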
Let us denote all of the $M$, say, possible outcomes of an observation made on the system $S$ by $X_i$, where $i$ runs from $1$ to $M$. Let us determine the probability of obtaining any of these outcomes. This quantity is unity, from the basic definition of probability, because each of the systems in the ensemble must exhibit one of the possible outcomes. But, this quantity is also equal to the sum of the probabilities of all the individual outcomes, by (4), so we conclude that this sum is equal to unity: i.e.,
$$
\sum_{i=1}^{M} P(X_i) = 1. \qquad (5)
$$
The above expression is called the normalization condition, and must be satisfied by
any complete set of probabilities. This condition is equivalent to the
self-evident statement that an observation of a system must definitely
result in one of its possible outcomes.
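The normalization condition is straightforward to check numerically for any given set of probabilities. The sketch below is not from the original notes; it simply confirms, for the assumed case of a fair six-sided die, that the probabilities of the $M = 6$ outcomes sum to unity.

```python
from fractions import Fraction

# Probabilities of the M = 6 possible outcomes of a fair six-sided die.
probabilities = [Fraction(1, 6)] * 6

# Normalization condition (5): a complete set of mutually exclusive
# outcome probabilities must sum to one.
assert sum(probabilities) == 1
print(sum(probabilities))  # 1
```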
There is another way in which we can combine probabilities. Suppose
that we
make an observation on a system picked at random from the ensemble, and then
pick a second system completely independently and
make another observation. We are assuming here that the first
observation does not influence the second observation in
any way. The fancy mathematical way of saying this is that the two
observations are statistically independent.
Let us determine the probability of obtaining the outcome $X$ in the first system and the outcome $Y$ in the second system, which we shall denote $P(X \otimes Y)$. In order to determine this probability, we have to form an ensemble of all of the possible pairs of systems which we could choose from the ensemble $\Sigma$. Let us denote this ensemble $\Sigma \otimes \Sigma$. The number of pairs of systems in this new ensemble is just the square of the number of systems in the original ensemble, so
$$
\Omega(\Sigma \otimes \Sigma) = \Omega(\Sigma)\,\Omega(\Sigma). \qquad (6)
$$
Furthermore, the number of pairs of systems in the ensemble $\Sigma \otimes \Sigma$ which exhibit the outcome $X$ in the first system and $Y$ in the second system is simply the product of the number of systems which exhibit the outcome $X$ and the number of systems which exhibit the outcome $Y$ in the original ensemble, so that
$$
\Omega(X \otimes Y) = \Omega(X)\,\Omega(Y). \qquad (7)
$$
It follows from the basic definition of probability that
$$
P(X \otimes Y) = \lim_{\Omega(\Sigma)\rightarrow\infty} \frac{\Omega(X \otimes Y)}{\Omega(\Sigma \otimes \Sigma)} = P(X)\,P(Y). \qquad (8)
$$
Thus, the probability of obtaining the outcomes $X$ and $Y$ in two statistically independent observations is the product of the individual probabilities of $X$ and $Y$. For instance, the probability of throwing a one and then a two on a six-sided die is $1/6 \times 1/6$, which equals $1/36$.
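The product rule can likewise be verified by constructing the pair ensemble explicitly. The sketch below is illustrative only; enumerating the ordered pairs of throws with itertools.product is an assumption of this example, not part of the notes.

```python
from fractions import Fraction
from itertools import product

outcomes = [1, 2, 3, 4, 5, 6]

# The pair ensemble: every ordered pair of outcomes from two
# statistically independent throws. Its size is the square of the
# number of outcomes for a single throw, as in (6).
pairs = list(product(outcomes, outcomes))
assert len(pairs) == len(outcomes) ** 2  # 36 pairs

# Count the pairs showing a one on the first throw and a two on the
# second, as in (7), and convert the count to a probability, as in (8).
favourable = [pair for pair in pairs if pair == (1, 2)]
p_one_then_two = Fraction(len(favourable), len(pairs))
print(p_one_then_two)  # 1/36 = (1/6) * (1/6)
```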