# Classical Normal Linear Regression Model (CNLRM)

### Basic Econometrics: The Problem of Estimation

In this article, we will discuss the details of the Classical Normal Linear Regression Model (CNLRM). The method of ordinary least squares (OLS) is attributed to Carl Friedrich Gauss, a German mathematician. Under certain assumptions, this method of estimation has some very attractive statistical properties that have made it one of the most powerful and popular methods of regression analysis.

The two-variable population regression function (PRF) is

$$Y_i = \beta_1 + \beta_2 X_i + u_i$$

However, the population regression function cannot be observed directly, so we estimate it with the help of the sample regression function (SRF):

$$\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i$$

where $\hat{Y}_i$ is the estimated (conditional mean) value of $Y_i$.

### Estimation Of Coefficients Using OLS Estimation

The OLS estimates of $\beta_1$ and $\beta_2$ are obtained by minimizing the residual sum of squares

$$\sum \hat{u}_i^2 = \sum \left(Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i\right)^2$$

Differentiating partially with respect to $\hat{\beta}_1$ and $\hat{\beta}_2$ and setting the derivatives to zero, we obtain

$$\hat{\beta}_2 = \frac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2}$$

and

$$\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2 \bar{X}$$

Thus we get the estimated sample regression function

$$\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i$$

where $\bar{X}$ and $\bar{Y}$ are the sample means of $X$ and $Y$.

### The Classical Linear Regression Model
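As a concrete illustration, the closed-form OLS estimates derived above can be computed in a few lines. This is a minimal sketch; the data are made up so that the estimates recover the coefficients exactly:

```python
# Illustrative sketch (not from the article): the closed-form OLS estimates
# beta_hat_2 = sum((X - Xbar)(Y - Ybar)) / sum((X - Xbar)^2)
# beta_hat_1 = Ybar - beta_hat_2 * Xbar

def ols_estimates(x, y):
    """Return (beta_hat_1, beta_hat_2) for the two-variable model."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    beta2 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
            sum((xi - x_bar) ** 2 for xi in x)
    beta1 = y_bar - beta2 * x_bar
    return beta1, beta2

# Data generated from Y = 2 + 3X with no error, so OLS recovers 2 and 3 exactly.
x = [1, 2, 3, 4, 5]
y = [2 + 3 * xi for xi in x]
b1, b2 = ols_estimates(x, y)
print(b1, b2)  # → 2.0 3.0
```

With real data the $u_i$'s are nonzero, so the estimates only approximate the population parameters; that gap is exactly what the rest of the article is about.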

#### The Assumptions Underlying the Method of Least Squares

If the objective is to estimate $\beta_1$ and $\beta_2$ only, the method of OLS discussed above suffices. But if the objective is to draw inferences about the true population parameters $\beta_1$ and $\beta_2$, then we have to look at the functional form of the $Y_i$'s, or the functional form of the $X_i$'s and $u_i$'s. This is because the value of the population regression function, $Y_i = \beta_1 + \beta_2 X_i + u_i$, depends on the $X_i$ and the error terms.

Therefore, unless we specify how the $X_i$ and $u_i$ are created or generated, there is no way we can make any statistical inference about the $Y_i$, nor, as we shall see, about $\beta_1$ and $\beta_2$. Thus, the assumptions made about the $X_i$ variables and the error terms are extremely critical to the valid interpretation of the regression estimates.

The Gaussian, standard, or classical linear regression model (CLRM), which is the cornerstone of most econometric theory, makes the following seven assumptions:

Assumption 1: The regression model is linear in the parameters.
Assumption 2: The values of the $X_i$'s are fixed, or the $X$ values are independent of the error term.
Assumption 3: The mean of the error terms is zero.
Assumption 4: The variance of the error terms is constant; this assumption is also known as homoscedasticity.
Assumption 5: There is no autocorrelation between the error terms (or disturbances).
Assumption 6: The number of observations is greater than the number of parameters to be estimated.
Assumption 7: The $X$ values in a given sample must not all be the same, i.e., the variance of the $X_i$'s is positive.
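Assumptions 6 and 7 are mechanical enough to verify on a sample before estimating anything. Here is a hypothetical sketch; the function name and data are illustrative, not from the article:

```python
# Hypothetical pre-estimation check for Assumptions 6 and 7 of the CLRM.

def check_estimability(x, n_params=2):
    """Raise if the sample cannot support estimating n_params parameters."""
    n = len(x)
    x_bar = sum(x) / n
    var_x = sum((xi - x_bar) ** 2 for xi in x) / n
    # Assumption 6: more observations than parameters to be estimated.
    assert n > n_params, "Assumption 6 violated: too few observations"
    # Assumption 7: the X values must not all be identical.
    assert var_x > 0, "Assumption 7 violated: variance of X is zero"
    return True

print(check_estimability([1, 2, 3, 4, 5]))  # → True
```

If all the $X_i$'s were identical, the denominator $\sum (X_i - \bar{X})^2$ in the OLS slope formula would be zero and $\hat{\beta}_2$ would be undefined, which is exactly why Assumption 7 is needed.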

NOTE: Gauss-Markov Theorem:
Given the assumptions of the classical linear regression model, the least-squares estimators, in the class of unbiased linear estimators, have minimum variance; that is, they are BLUE (best linear unbiased estimators).

#### The Classical Normal Linear Regression Model (CNLRM):

Using the method of OLS, under the assumptions of the classical linear regression model, we can estimate the population parameters $\beta_1$ and $\beta_2$ as $\hat{\beta}_1$ and $\hat{\beta}_2$. But these estimates differ from sample to sample; therefore, the estimators are random variables.

Since the estimators are random variables, we have to find their probability distributions.

#### The Probability Distribution of Disturbances ($u_i$'s)

The $X_i$'s are assumed fixed, or nonstochastic, because ours is conditional regression analysis, conditional on the fixed values of $X_i$.

Also, recall that $Y_i = \beta_1 + \beta_2 X_i + u_i$. Hence $\hat{\beta}_2$ can be rewritten in terms of the weights $k_i$:

$$\hat{\beta}_2 = \sum k_i Y_i = \beta_2 + \sum k_i u_i, \qquad \text{where } k_i = \frac{X_i - \bar{X}}{\sum (X_j - \bar{X})^2}$$

Since the $\beta$'s and the $k_i$'s are fixed, the estimator $\hat{\beta}_2$ is ultimately a linear function of the random error terms $u_i$. Therefore, the probability distributions of the estimators depend on the assumption made about the error terms. Since we need these probability distributions to draw inferences about the population parameters, we have to make an assumption about the distribution of the error terms.
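The claim that $\hat{\beta}_2$ is a weighted sum $\sum k_i Y_i$, with weights depending only on the $X$'s, can be verified numerically. This is a small sketch with made-up data:

```python
# Sketch (data made up): beta_hat_2 as the weighted sum sum(k_i * Y_i),
# with k_i = (X_i - Xbar) / sum((X_j - Xbar)^2) fixed once the X's are fixed.

x = [1, 2, 3, 4, 5]
y = [3.1, 4.9, 7.2, 8.8, 11.0]

x_bar = sum(x) / len(x)
sxx = sum((xi - x_bar) ** 2 for xi in x)
k = [(xi - x_bar) / sxx for xi in x]          # weights depend only on X
beta2_linear = sum(ki * yi for ki, yi in zip(k, y))

# The usual ratio formula gives the same number:
y_bar = sum(y) / len(y)
beta2_ratio = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sxx
print(abs(beta2_linear - beta2_ratio) < 1e-12)  # → True
```

The two expressions agree because $\sum k_i = 0$, so the $\bar{Y}$ term drops out of the ratio formula; this is why fixing the $X$'s makes $\hat{\beta}_2$ linear in the random $Y_i$'s.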

Since OLS does not make any assumption about the probabilistic nature of $u_i$, it is of little help for the purpose of drawing inferences about the population regression function from the sample regression function, the Gauss-Markov theorem notwithstanding.

This void can be filled if we are willing to assume that the 𝑢𝑖′𝑠 follow some probability distribution. For reasons to be explained shortly, in the regression context it is usually assumed that the 𝑢𝑖′𝑠 follow a normal distribution.

Thus, adding the normality assumption for $u_i$ to the assumptions of the classical linear regression model (CLRM) discussed earlier, we obtain what is known as the classical normal linear regression model (CNLRM).
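A small simulation illustrates this (all parameters below are assumed for illustration, not from the article): because $\hat{\beta}_2$ is a linear function of the $u_i$'s, normal errors make $\hat{\beta}_2$ itself normally distributed. Drawing many samples traces out its sampling distribution, whose mean sits near the true $\beta_2$:

```python
# Illustrative Monte Carlo: sampling distribution of the OLS slope under
# normal errors. True model assumed here: Y = 2 + 3X + u, u ~ N(0, 1).
import random

random.seed(0)
beta1_true, beta2_true = 2.0, 3.0
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
x_bar = sum(x) / len(x)
sxx = sum((xi - x_bar) ** 2 for xi in x)

estimates = []
for _ in range(5000):
    # Fresh normal disturbances each replication; X's stay fixed (Assumption 2).
    y = [beta1_true + beta2_true * xi + random.gauss(0, 1) for xi in x]
    y_bar = sum(y) / len(y)
    b2 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sxx
    estimates.append(b2)

mean_b2 = sum(estimates) / len(estimates)
print(mean_b2)  # close to the true value 3.0 (unbiasedness)
```

Under CNLRM this distribution is exactly normal in every sample size, which is what makes $t$- and $F$-based inference about $\beta_1$ and $\beta_2$ possible.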