Random Variable and Distribution: The Concept

A random variable is a variable whose possible values are numerical outcomes of a random experiment; which value it takes is determined by chance, with a probability attached to each value. For example, if we roll a fair die, each of the values 1, 2, 3, 4, 5 and 6 occurs with a probability of 1/6, and which one occurs on a given roll is random.

In this article, we’ll first discuss the properties of a random variable, and then the types of random variables along with their probability functions, in brief.

So, the properties of a random variable are:

  • It takes only real values.
  • If X is a random variable and C is a constant, then CX is also a random variable.
  • If X1 and X2 are two random variables, then X1 + X2 and X1X2 are also random variables.
  • For any constants C1 and C2, C1X1 + C2X2 is also a random variable.
  • |X| is a random variable.

Note: We use a capital letter, e.g. X, to stand for the random variable and the corresponding lower-case letter, e.g. x, to stand for a value that it takes.

Now, there are two types of random variables.

  1. Discrete Random Variable
  2. Continuous Random Variable

Let’s discuss them briefly.

Discrete Random Variable
A random variable that can only take certain numerical values (i.e. discrete values) is called a discrete random variable. For example, the number of applicants for a job or the number of accident-free days in one month at a factory.

The function f_X(x) = P(X = x), for each x in the range of X, is the probability function (PF) of X – it specifies how the total probability of 1 is divided up amongst the possible values of X, and so gives the probability distribution of X. For a discrete random variable, the probability function is also known as the probability mass function (pmf).

Note the requirements for a function to qualify as the probability function of a discrete random variable: for all x within the range of X,

f_X(x) = P(X = x) ≥ 0, and Σ f_X(x) = 1,

where the sum is taken over all values x in the range of X.
Other than the probability function, the cumulative distribution function (CDF) of X is also very important. It is given by:

F_X(x) = P(X ≤ x) = Σ_{t ≤ x} P(X = t)

which, for all real values of x, gives the probability that X assumes a value that doesn’t exceed x.

The graph of F_X(x) against x starts at a height of 0 and then increases by jumps as values of x are reached for which P(X = x) is positive. Once all possible values are included, F_X(x) takes its maximum value of 1. F_X(x) is called a step function.

Let’s understand the above concepts more clearly with the help of an example. Suppose we roll a fair die; then its probability distribution would be:

x        | 1   | 2   | 3   | 4   | 5   | 6
P(X = x) | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 | 1/6

From this table, it can be seen that the sum of all the probabilities is 1. Also, technically we should write the CDF as:

F_X(x) = 0 for x < 1,
F_X(x) = ⌊x⌋/6 for 1 ≤ x < 6,
F_X(x) = 1 for x ≥ 6,

where ⌊x⌋ denotes the largest integer not exceeding x.
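To see these requirements in action, here is a minimal Python sketch of the fair-die example above (using exact fractions to avoid rounding error):

```python
from fractions import Fraction

# pmf of a fair die: each face 1..6 has probability 1/6
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# Requirement 1: every probability is non-negative
assert all(p >= 0 for p in pmf.values())

# Requirement 2: the probabilities sum to 1
assert sum(pmf.values()) == 1

def cdf(x):
    """F_X(x) = P(X <= x): sum the pmf over all values t <= x."""
    return sum(p for t, p in pmf.items() if t <= x)

print(cdf(0))    # 0   (below the smallest possible value)
print(cdf(3.5))  # 1/2 (the CDF jumps only at the integers 1..6)
print(cdf(6))    # 1   (all possible values included)
```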

Now, let’s understand what a continuous random variable is.

Continuous Random Variable
A random variable that can take any numerical value within a given range is called a continuous random variable. For example, the temperature of a cup of coffee served at a restaurant or the weight of refuse on a truck arriving at a landfill.

The probability associated with an interval of values, (a, b) say, is represented as P(a < X < b) or P(a ≤ X ≤ b) – for a continuous random variable these have the same value – and is the area under the curve of the probability density function (pdf) from a to b. So probabilities can be evaluated by integrating the pdf, f_X(x). Thus,

P(a ≤ X ≤ b) = ∫_a^b f_X(x) dx

The conditions for a function to serve as a pdf are as follows:

f_X(x) ≥ 0 for -∞ < x < ∞, and

∫_{-∞}^{∞} f_X(x) dx = 1.

You should have noticed that these conditions are equivalent to those for the probability function of a discrete random variable, with the summation replaced by integration in the continuous case.

The cumulative distribution function (CDF) is defined to be the function:

F_X(x) = P(X ≤ x) = ∫_{-∞}^{x} f_X(t) dt

For a continuous random variable, F_X(x) is a continuous, non-decreasing function, defined for all real values of x.

The graph of F_X(x) is a smooth curve that rises from 0 to 1 as x increases, rather than the step function seen in the discrete case.

Now we can work out the CDF from the pdf. But how can we get back again?

Well, we integrate the pdf, f(x), to get the CDF, F(x), so it makes sense that to go back we differentiate.

We can obtain the pdf, f(x), from the CDF, F(x), as follows:

f_X(x) = d/dx F_X(x)
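As a quick sanity check of this integrate-then-differentiate relationship, here is a short sympy sketch; the pdf f(t) = 2t on [0, 1] is an illustrative choice of my own, not one from the article:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Illustrative pdf (an assumed example): f(t) = 2t on [0, 1], 0 elsewhere
f = 2 * t

# Integrate the pdf up to x to get the CDF on the support
F = sp.integrate(f, (t, 0, x))
print(F)              # x**2

# Differentiate the CDF to recover the pdf
print(sp.diff(F, x))  # 2*x
```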

Like a discrete random variable, a continuous random variable can also be understood more clearly with the help of an example.

Suppose we have the following probability density function:

f_X(x) = 3x²/7 for 1 ≤ x ≤ 2, and f_X(x) = 0 otherwise.

It can be seen that at x = 1, f_X(x) = 3/7 and at x = 2, f_X(x) = 12/7. Therefore, the given pdf is greater than 0 on the interval [1, 2]. Also, if we integrate the pdf, then we get

∫_1^2 (3x²/7) dx = [x³/7] from 1 to 2 = (8 − 1)/7 = 1.

Now, the cumulative distribution function can be found as:

F_X(x) = ∫_1^x (3t²/7) dt = (x³ − 1)/7, for 1 ≤ x ≤ 2.

Hence, we should write the CDF as:

F_X(x) = 0 for x < 1,
F_X(x) = (x³ − 1)/7 for 1 ≤ x ≤ 2,
F_X(x) = 1 for x > 2.
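A minimal Python sketch (using scipy.integrate.quad) that checks these calculations numerically:

```python
from scipy.integrate import quad

# pdf from the worked example: f(x) = 3x^2/7 on [1, 2], 0 elsewhere
def pdf(x):
    return 3 * x**2 / 7 if 1 <= x <= 2 else 0.0

# Check that the total probability is 1
total, _ = quad(pdf, 1, 2)
print(round(total, 6))  # 1.0

# CDF obtained above: F(x) = (x^3 - 1)/7 on [1, 2]
def cdf(x):
    if x < 1:
        return 0.0
    if x > 2:
        return 1.0
    return (x**3 - 1) / 7

# e.g. P(X <= 1.5) = F(1.5)
print(round(cdf(1.5), 4))  # 0.3393
```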

Now, we should know how to obtain expected values for both discrete and continuous random variables.

Expected values are numerical summaries of important characteristics of the distributions of random variables. So, let’s see how mean and standard deviation of a random variable are obtained.

Mean
E[X] is a measure of the average/center/location of the distribution of X. It is called the mean of the distribution of X, or just the mean of X, and is usually denoted by μ.

E[X] is calculated by summing (discrete case) or integrating (continuous case) the product:

value × probability of assuming that value

over all values which X can assume.

Thus, for the discrete case:

E[X] = Σ x·P(X = x), summing over all values x in the range of X

and, for the continuous case:

E[X] = ∫_{-∞}^{∞} x f_X(x) dx

Variance and Standard Deviation
The variance, σ², is a measure of the spread/dispersion/variability of the distribution. Specifically, it is a measure of the spread of the distribution about its mean.

Formally,

σ² = Var[X] = E[(X − μ)²]

is the expected value (or mean) of the squared deviation of X from its mean. The standard deviation, σ, is the positive square root of this – hence the term sometimes used, “root mean squared deviation”.

Simplifying:

σ² = E[(X − μ)²] = E[X²] − (E[X])² = E[X²] − μ²

If we return to our fair-die example of a discrete random variable, we can see how the mean and variance are calculated. Let’s work through these values to understand expected values more clearly.

∴ Mean = E[X] = (1 + 2 + 3 + 4 + 5 + 6) × 1/6 = 21/6 = 7/2,
E[X²] = (1² + 2² + 3² + 4² + 5² + 6²) × 1/6 = 91/6,
and Variance = E[X²] − {E[X]}² = 91/6 − (7/2)² = 35/12
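These values are easy to reproduce exactly in Python; a minimal sketch of the fair-die calculation:

```python
from fractions import Fraction

p = Fraction(1, 6)          # fair die: each value 1..6 has probability 1/6
values = range(1, 7)

mean = sum(x * p for x in values)        # E[X]
mean_sq = sum(x**2 * p for x in values)  # E[X^2]
variance = mean_sq - mean**2             # E[X^2] - (E[X])^2

print(mean)      # 7/2
print(variance)  # 35/12
```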

Now consider some linear functions of X, obtained by changing the origin and the scale of X.

Let Y = aX + b. Let E[X] = μ.

E[Y] = E[aX + b] = aμ + b

So Y − E[Y] = aX + b − [aμ + b] = a[X − μ], and hence

Var[Y] = E[(Y − E[Y])²] = E[a²(X − μ)²] = a²Var[X] = a²σ².

These are important results. The results for the expected value can be thought of simply as “whatever you multiply the random variable by or add to it, you do the same to the mean”. However, the addition of a constant to a random variable does not alter the variance.

This should make sense since the variance is a measure of spread and the spread is not altered when the same constant is added to all values. When you multiply the random variable by a constant you multiply the standard deviation by the same value, so the variance is multiplied by that constant squared.
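A quick exact check of both results, continuing the fair-die example with illustrative constants a = 2 and b = 3 (my own choice, not from the article):

```python
from fractions import Fraction

a, b = 2, 3                 # illustrative constants: Y = 2X + 3
p = Fraction(1, 6)          # fair die again
values = range(1, 7)

mean_x = sum(x * p for x in values)                 # E[X] = 7/2
var_x = sum(x**2 * p for x in values) - mean_x**2   # Var[X] = 35/12

mean_y = sum((a * x + b) * p for x in values)                 # E[aX + b]
var_y = sum((a * x + b)**2 * p for x in values) - mean_y**2   # Var[aX + b]

print(mean_y == a * mean_x + b)  # True: the mean is shifted and scaled the same way
print(var_y == a**2 * var_x)     # True: adding b changes nothing; a enters squared
```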

Now, let’s see what a probability distribution is. The probability distribution for a random variable describes how the probabilities are distributed over the values of the random variable. There are two types of probability distributions, discrete and continuous. For a discrete random variable, say X, the probability distribution is defined by a probability mass function.

Whereas for a continuous random variable, say Y, the probability distribution is defined by a probability density function, as discussed above under the discrete and continuous random variable headings. There are also some special, named probability distributions.

Two of the most widely used discrete probability distributions are the binomial and Poisson distributions.

First is the binomial distribution. The probability mass function of the binomial distribution provides the probability that x successes will occur in n trials of a binomial experiment. If X ~ Bin(n, p), then

P(X = x) = C(n, x) p^x (1 − p)^(n−x), where C(n, x) = n!/(x!(n − x)!),

for x = 0, 1, 2, …, n and 0 < p < 1.

Here, there are two possible outcomes on each trial, success or failure; p denotes the probability of success on any trial, and the trials are independent.

Suppose X ~ Bin(5, 0.7); then

P(X = x) = C(5, x) (0.7)^x (0.3)^(5−x) for x = 0, 1, …, 5.

For example, P(X = 3) = C(5, 3) × (0.7)³ × (0.3)² = 10 × 0.343 × 0.09 = 0.3087.
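These binomial probabilities can be verified with scipy.stats.binom; a minimal sketch for the Bin(5, 0.7) example:

```python
from scipy.stats import binom

n, p = 5, 0.7  # X ~ Bin(5, 0.7), as above

# P(X = 3) = C(5, 3) * 0.7^3 * 0.3^2
print(round(binom.pmf(3, n, p), 4))  # 0.3087

# The pmf values over x = 0..5 sum to 1
print(round(sum(binom.pmf(x, n, p) for x in range(n + 1)), 6))  # 1.0
```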

Second is the Poisson distribution. The Poisson distribution is often used as a model of the number of arrivals at a facility within a given period of time. If X ~ Poi(μ), then

P(X = x) = e^(−μ) μ^x / x!

for x = 0, 1, 2, … and μ > 0.
Here, the parameter μ is the mean of the random variable X.

Suppose the mean number of calls arriving in a 15-minute period is 10. Then, to compute the probability that 5 calls arrive within the next 15-minute period, we have μ = 10 and x = 5, so

P(X = 5) = e^(−10) × 10⁵ / 5! ≈ 0.0378
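scipy.stats.poisson reproduces this calculation; a minimal sketch:

```python
from scipy.stats import poisson

mu = 10  # mean number of calls per 15-minute period

# P(X = 5): exactly 5 calls arrive
print(round(poisson.pmf(5, mu), 4))  # 0.0378

# P(X <= 5): at most 5 calls, via the CDF
print(round(poisson.cdf(5, mu), 4))  # 0.0671
```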

The most widely used continuous probability distribution is the normal distribution. The graph of the normal distribution is a bell-shaped curve. The probabilities for the normal distribution can be computed using statistical tables for the standard normal distribution, which has a mean of 0 and a standard deviation of 1.

The normal distribution depends on two parameters, μ and σ², which are the mean and variance of the random variable. The probability density function of the normal distribution, if X ~ N(μ, σ²), is given by:

f_X(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²))

for -∞ < x < ∞.

Suppose X ~ N(μ, σ²); then to convert the random variable X from the normal distribution to the standard normal distribution, we standardise:

Z = (X − μ)/σ

Z is another random variable, which follows the standard normal distribution, with mean 0 and variance 1.

Suppose X ~ N(50, 3²); then Z = (X − 50)/3 ~ N(0, 1), and probabilities for X can be found from standard normal tables. For example, P(X < 53) = P(Z < 1) ≈ 0.8413.
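A minimal scipy.stats.norm sketch of this standardisation, taking σ = 3 as in the example above:

```python
from scipy.stats import norm

mu, sigma = 50, 3  # X ~ N(50, 3^2), as in the example above

# Standardise: P(X < 53) = P(Z < (53 - 50)/3) = P(Z < 1)
z = (53 - mu) / sigma
print(round(norm.cdf(z), 4))  # 0.8413

# The same probability directly, without standardising
print(round(norm.cdf(53, loc=mu, scale=sigma), 4))  # 0.8413
```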

There are many other discrete and continuous probability distributions besides the three discussed above. Other discrete probability distributions include the uniform, Bernoulli, geometric, hypergeometric, and negative binomial distributions; other commonly used continuous probability distributions include the uniform, gamma, exponential, chi-square, beta, log-normal, t, and F distributions.

So, to summarise: a random variable assigns a numerical value to each outcome of a random experiment, with probabilities determining how likely each value is. There are two types of random variables, discrete and continuous, and both have two basic requirements: the probability function (or density) must be non-negative, and the total probability must be 1 – a sum in the discrete case, an integral in the continuous case. We then saw how to obtain the mean and variance of a random variable, followed by some important discrete and continuous probability distributions.

I hope this article has given you a brief but clear introduction to the concept of a random variable.
