Discrete probability deals with scenarios where the set of possible outcomes of an experiment is countable. This means the outcomes can be listed out, even if the list is infinitely long. It contrasts with continuous probability, where outcomes are part of a continuum.
A random variable is a variable whose value is subject to variations due to chance. In discrete probability, we deal with discrete random variables, which take on countable values. Examples include the number of heads in a series of coin flips, the roll of a die, or the number of defective items in a batch.
A discrete probability distribution lists each possible value the random variable can take, along with its probability. For a discrete random variable \(X\), the probability distribution is denoted as \(P(X = x)\), which gives the probability that \(X\) takes the value \(x\).
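As a minimal sketch, such a distribution can be represented in Python as a mapping from each value to its probability (the name `die_pmf` and the use of exact fractions are our choices for illustration):

```python
from fractions import Fraction

# The probability distribution of a fair six-sided die,
# as a mapping from each outcome x to P(X = x).
die_pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# Any valid distribution assigns non-negative probabilities that sum to 1.
assert all(p >= 0 for p in die_pmf.values())
assert sum(die_pmf.values()) == 1

print(die_pmf[3])  # P(X = 3)
```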
A random variable \(X\) follows a Bernoulli distribution if it has only two possible outcomes: 1 (success) with probability \(p\) and 0 (failure) with probability \(1 - p\).
Probability mass function: \[ P(X = x) = \begin{cases} p & \text{if } x = 1 \\ 1 - p & \text{if } x = 0 \end{cases} \]
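The two-case formula above translates directly into a small function (the name `bernoulli_pmf` is ours, not a library API):

```python
def bernoulli_pmf(x: int, p: float) -> float:
    """P(X = x) for a Bernoulli(p) random variable; x must be 0 or 1."""
    if x == 1:
        return p
    if x == 0:
        return 1 - p
    raise ValueError("a Bernoulli random variable only takes the values 0 and 1")

# With p = 0.3: P(X = 1) = 0.3 and P(X = 0) = 0.7.
print(bernoulli_pmf(1, 0.3), bernoulli_pmf(0, 0.3))
```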
A random variable \(X\) follows a binomial distribution if it represents the number of successes in \(n\) independent Bernoulli trials with success probability \(p\).
Probability mass function: \[ P(X = k) = \binom{n}{k} p^k (1 - p)^{n - k} \quad \text{for } k = 0, 1, 2, \ldots, n \] Where \(\binom{n}{k}\) is the binomial coefficient, representing the number of ways to choose \(k\) successes out of \(n\) trials.
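A sketch of this PMF in Python, using the standard library's `math.comb` for the binomial coefficient (the function name `binomial_pmf` is our own):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k): probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: P(exactly 2 heads in 4 fair coin flips) = C(4, 2) * 0.5^4 = 0.375
print(binomial_pmf(2, 4, 0.5))
```

Summing the PMF over k = 0, ..., n gives 1, which is a quick sanity check on any implementation.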
A random variable \(X\) follows a geometric distribution if it represents the number of trials needed to get the first success in a series of independent Bernoulli trials with success probability \(p\).
Probability mass function: \[ P(X = k) = (1 - p)^{k - 1} p \quad \text{for } k = 1, 2, 3, \ldots \]
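The geometric PMF is a one-liner; as a hedged sketch (the name `geometric_pmf` is ours):

```python
def geometric_pmf(k: int, p: float) -> float:
    """P(first success occurs on trial k), for k = 1, 2, 3, ..."""
    return (1 - p)**(k - 1) * p

# With p = 0.5 (a fair coin), P(first head on the 3rd flip) = 0.25 * 0.5 = 0.125
print(geometric_pmf(3, 0.5))
```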
A random variable \(X\) follows a Poisson distribution if it represents the number of events occurring in a fixed interval of time or space, with events occurring independently and at a constant rate \(\lambda\).
Probability mass function: \[ P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!} \quad \text{for } k = 0, 1, 2, \ldots \]
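This formula can likewise be sketched with the standard library (`poisson_pmf` and the parameter name `lam`, standing in for \(\lambda\), are our choices):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) events in a fixed interval, with average rate lam."""
    return lam**k * exp(-lam) / factorial(k)

# With lam = 2, P(X = 0) = e^{-2}
print(poisson_pmf(0, 2.0))
```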
The expected value (or mean) of a discrete random variable \(X\) with probability distribution \(P(X = x)\) is: \[ E(X) = \sum_{x} x P(X = x) \]
The variance of \(X\) is: \[ \text{Var}(X) = \sum_{x} (x - E(X))^2 P(X = x) \]
Alternatively, variance can be calculated using: \[ \text{Var}(X) = E(X^2) - [E(X)]^2 \]
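Both formulas can be checked numerically on any distribution given as a value-to-probability mapping; this sketch (the helper name `mean_and_variance` is ours) uses the shortcut form \(E(X^2) - [E(X)]^2\):

```python
def mean_and_variance(pmf: dict[int, float]) -> tuple[float, float]:
    """E(X) and Var(X) for a discrete distribution given as {x: P(X = x)}."""
    ex = sum(x * p for x, p in pmf.items())        # E(X)
    ex2 = sum(x * x * p for x, p in pmf.items())   # E(X^2)
    return ex, ex2 - ex**2                          # Var(X) = E(X^2) - [E(X)]^2

# Bernoulli(p = 0.3): E(X) = p = 0.3 and Var(X) = p(1 - p) = 0.21
print(mean_and_variance({1: 0.3, 0: 0.7}))
```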
A discrete probability distribution can be represented as a table or as a graph. Constructing one requires the probability mass function: every value the discrete random variable can take, together with its probability. For example, suppose a fair coin is tossed twice and the distribution of the number of heads, \(X\), has to be determined. The sample space is \(\{HH, HT, TH, TT\}\), each outcome is equally likely, and counting heads in each outcome gives the values \(X = 0, 1, 2\):
| x | 0 {TT} | 1 {HT, TH} | 2 {HH} |
|---|---|---|---|
| P(X = x) | 1/4 = 0.25 | 2/4 = 0.5 | 1/4 = 0.25 |
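The table above can be reproduced by enumerating the sample space directly; this sketch uses exact fractions to avoid rounding:

```python
from itertools import product
from fractions import Fraction

# Enumerate the sample space of two fair coin tosses: HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))

# Map each outcome to X = number of heads, accumulating probabilities.
pmf = {}
for outcome in outcomes:
    x = outcome.count("H")  # value of the random variable X
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, len(outcomes))

print(sorted(pmf.items()))
```

This yields P(X = 0) = 1/4, P(X = 1) = 1/2, and P(X = 2) = 1/4, matching the table.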
Discrete probability is used in fields such as finance, insurance, quality control, computer science, and many areas of science and engineering. For example, a binomial model can describe the number of defective items in a production batch, a Poisson model the number of insurance claims arriving in a given month, and a geometric model the number of attempts before a randomized procedure first succeeds.
Understanding discrete probability distributions is fundamental to the study of statistics and probability theory, providing the basis for more complex models and analyses.