2.1 What is probability?
We start with a classical example: tossing a coin. If you have one, take it in your hands, look at it, and answer the following question: what outcome will you have if you toss it? Toss it once and, let’s say, it ends up showing heads. Can you predict the outcome of the next toss based on this observation? What if you toss it again and end up with tails? Would that change your prediction for the next toss?
What we could do in this situation to predict future outcomes is to write down the results of the tosses as zeroes (for heads) and ones (for tails). We will then have a series of observations that looks like this:
1 0 1 0 0 1 1 0 1 1
If we then take the mean of this series, we will see that the expected outcome based on our sample is 0.6. We would call this value the empirical probability. It shows us that in 60% of the cases in our sample we got tails. But this is based on just 10 experiments. If we continue tossing the coin many more times, this probability (in the case of a fair coin) will eventually converge to 0.5, meaning that in 50% of the cases the coin will show heads and in the other 50% it will show tails. In fact, we know that there are only two possible outcomes in this experiment and that, in the case of a fair coin, there are no specific forces that could change the outcome and lead to more tails than heads. In this case, we can say that the theoretical probability of getting tails is 0.5. Note that this does not tell us anything about each specific outcome, but only demonstrates what happens on average when we repeat the experiment many times.
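To see this convergence in action, here is a minimal Python sketch (the language and variable names are our own choice for illustration; the text itself assumes no code). It reproduces the sample above and then simulates a fair coin many times, printing the running empirical probability of tails:

```python
import random

random.seed(42)  # fix the seed so that the illustration is reproducible

# The ten tosses recorded above: 0 for heads, 1 for tails
sample = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
print(sum(sample) / len(sample))  # empirical probability of tails: 0.6

# Simulate a fair coin many times and watch the empirical probability
# of tails approach the theoretical value of 0.5
tosses = [random.randint(0, 1) for _ in range(100_000)]
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, sum(tosses[:n]) / n)
```

Running this, the empirical probability based on the first 10 simulated tosses can still be far from 0.5, but as the number of tosses grows it settles close to the theoretical value.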
Definition 2.1 Probability is a measure of how likely an event is to occur if we observe it many times.
This definition implies that we cannot tell what the outcome of the next experiment will be (whether the coin toss will result in heads or tails). Instead, we can say what will happen on average if the experiment is repeated many times. By definition, probability lies between 0 and 1, where 0 means that the event will never occur and 1 means that it will always occur.
We could do other similar experiments, for example rolling a six-sided dice, and calculate the probability of a specific outcome. In simple cases with coins, cards, dice, etc., we can even tell the probability without running the experiments. All we need to do is count the number of outcomes of interest and divide it by the number of all possible outcomes. For example, the probability of getting 3 on a six-sided dice is \(\frac{1}{6}\), because there are six possible outcomes overall: 1, 2, 3, 4, 5 and 6, and the probability of getting any one of them is the same. The probability of getting 5 is \(\frac{1}{6}\) as well for the same reason: all six outcomes are considered equally likely and each will happen on average every sixth roll.
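This counting argument is easy to express in code. The following is a minimal Python sketch with a hypothetical helper named `probability` (our own construction, not something from the text) that divides the number of outcomes of interest by the number of all possible outcomes:

```python
from fractions import Fraction

# All possible outcomes of a six-sided dice, each assumed equally likely
outcomes = range(1, 7)

def probability(event):
    """Theoretical probability: outcomes of interest over all possible outcomes."""
    favourable = [y for y in outcomes if event(y)]
    return Fraction(len(favourable), len(outcomes))

print(probability(lambda y: y == 3))  # 1/6
print(probability(lambda y: y == 5))  # 1/6
```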
Remark. In some tabletop games, dice rolls are encoded as \(a \mathrm{d} b\), where \(a\) is the number of dice, \(b\) is the number of sides and d stands for the word “dice”. In this notation, the classical six-sided dice from our example is encoded as 1d6, while a ten-sided one would be 1d10.
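This notation is also easy to handle programmatically. Below is a sketch of a hypothetical `roll` function (our own illustration, not part of the text) that parses the \(a \mathrm{d} b\) notation and returns the sum of the individual dice:

```python
import random
import re

def roll(notation):
    """Roll dice given in the 'adb' notation, e.g. '1d6' or '2d10',
    and return the sum of the individual results."""
    count, sides = map(int, re.fullmatch(r"(\d+)d(\d+)", notation).groups())
    return sum(random.randint(1, sides) for _ in range(count))

print(roll("1d6"))   # one six-sided dice
print(roll("1d10"))  # one ten-sided dice
```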
Mathematically, we will denote probability as \(\mathrm{P}(y)\), where \(y\) represents a specific outcome. We can write, for example, that the probability of getting 3 in the dice roll experiment is: \[\begin{equation} \mathrm{P}(y=3) = \frac{1}{6} . \tag{2.1} \end{equation}\] We can also calculate more complicated probabilities. For example, what is the probability of getting an odd number when rolling 1d6? We need to count the number of outcomes of interest and divide it by the number of all possible outcomes. In our case, the former are 1, 3 and 5 (three numbers), while the latter is any integer from 1 to 6 (six numbers). This means that: \[\begin{equation} \mathrm{P}(y \text{ is odd}) = \frac{3}{6} = \frac{1}{2}. \tag{2.2} \end{equation}\]
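The same answer can be obtained by counting in code and cross-checked by simulation. A minimal sketch (reusing the counting idea above; the names and the simulation are our own illustration):

```python
from fractions import Fraction
import random

outcomes = range(1, 7)

# Counting argument: three odd outcomes (1, 3, 5) out of six possible ones
odd = [y for y in outcomes if y % 2 == 1]
print(Fraction(len(odd), len(outcomes)))  # 1/2

# Cross-check by simulation: roll 1d6 many times and count odd results
random.seed(42)
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(sum(y % 2 == 1 for y in rolls) / len(rolls))  # close to 0.5
```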