Effect Of Multicollinearity and VIF in R

Problems Due To Multicollinearity

We start the topic of multicollinearity with a light-hearted example that shows what multicollinearity is and what its effects are. One day Ram’s father asked him, “Ram, why is your bank balance so low?” Ram became nervous and listed several different reasons. His father noted down all of them and asked Ram to identify the ones that were the main cause of the low bank balance. But Ram was unable to pick out the important reasons, because all of the reasons he had listed were related to one another.

This is the essence of multicollinearity. Collinearity means a linear relationship between two variables, and multicollinearity means linear relationships among several variables. Here the low bank balance is predicted from a number of causes: the low bank balance plays the role of the explained (dependent) variable and the causes play the role of the explanatory (independent) variables. Because the causes are related to one another, there are linear relationships among the explanatory variables.

In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data.

Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors. That is, a multivariate regression model with collinear predictors can indicate how well the entire bundle of predictors predicts the outcome variable, but it may not give valid results about any individual predictor, or about which predictors are redundant with respect to others.

Now, before going to the definition of multicollinearity, we will discuss collinearity. Collinearity is a linear association between two explanatory variables. Two variables are perfectly collinear if there is an exact linear relationship between them.

Multicollinearity refers to a situation in which two or more explanatory variables in a multiple regression model are highly linearly related. We have perfect multicollinearity if the correlation between two independent variables is equal to 1 or −1. In practice, we rarely face perfect multicollinearity in a data set. More commonly, the issue of multicollinearity arises when there is an approximately linear relationship between two or more independent variables.

Now we discuss this mathematically. Consider first a single explanatory variable and let

y_i = β_0 + β_1 x_i + ε_i,   i = 1, 2, …, N.

Here collinearity means the linear relationship between y and x, which is measured by the correlation coefficient r and lies in [−1, 1]. Now think about the situation where there are k explanatory variables, so the regression model looks like

y_i = β_0 + β_1 X_{1i} + β_2 X_{2i} + ⋯ + β_k X_{ki} + ε_i.

Mathematically, a set of variables is perfectly multicollinear if there exist one or more exact linear relationships among some of the variables. For example, we may have

λ_0 + λ_1 X_{1i} + λ_2 X_{2i} + ⋯ + λ_k X_{ki} = 0

holding for all observations i = 1, 2, 3, …, N, where the λ_j are constants (not all zero) and X_{ji} is the ith observation of the jth explanatory variable. The ordinary least squares estimates involve inverting the matrix X^T X, where X is the N × (k+1) matrix whose ith row is (1, X_{1i}, X_{2i}, …, X_{ki}); here N is the number of observations and k is the number of explanatory variables (with N required to be greater than or equal to k+1). If there is an exact linear relationship (perfect multicollinearity) among the independent variables, at least one of the columns of X is a linear combination of the others, so the rank of X (and therefore of X^T X) is less than k+1, and the matrix X^T X will not be invertible.
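As a quick illustration (a minimal sketch with simulated data, not the divorce data used later in this post), the R code below builds a design matrix in which one column is an exact linear function of another, checks the rank of X^T X, and shows that lm() reports an NA coefficient for the redundant predictor:

# Minimal sketch with simulated data (not the divorce data used later in this post)
set.seed(1)
n  <- 50
x1 <- rnorm(n)
x2 <- 2 * x1 + 3            # x2 is an exact linear function of x1 -> perfect multicollinearity
y  <- 1 + 0.5 * x1 + rnorm(n)

X <- cbind(1, x1, x2)       # N x (k+1) design matrix with an intercept column

qr(crossprod(X))$rank       # rank is 2, less than k+1 = 3
try(solve(crossprod(X)))    # inverting X'X fails: the matrix is singular

# lm() detects the redundancy and returns NA for the coefficient of x2
coef(lm(y ~ x1 + x2))

lm() handles the rank deficiency by pivoting: the redundant column is effectively dropped from the fit, which is why its coefficient is reported as NA.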

Perfect multicollinearity is fairly common when working with raw data sets, which frequently contain redundant information. Once redundancies are identified and removed, however, nearly multicollinear variables often remain due to correlations inherent in the system being studied.

Effects of Multicollinearity

Now we discuss the effects of multicollinearity. If two independent variables contain essentially the same information to a large extent, one gains little by using both in the regression model. Moreover, multicollinearity leads to unstable estimates as it tends to increase the variances of regression coefficients. Now we discuss this mathematically.

One consequence of a high degree of multicollinearity is that, even if the matrix X^T X is invertible, a computer algorithm may be unsuccessful in obtaining an approximate inverse, and if it does obtain one, the result may be numerically inaccurate. But even when an accurate inverse can be computed, the following consequences arise:

In the presence of multicollinearity, the estimate of one variable’s impact on the dependent variable Y while controlling for the others tends to be less precise than if the predictors were uncorrelated with one another. The usual interpretation of a regression coefficient is that it provides an estimate of the effect of a one-unit change in an independent variable holding the other variables constant.

If one independent variable, say X_1, is highly correlated with another independent variable X_2 in the given data set, then we have a set of observations for which X_1 and X_2 have a particular linear stochastic relationship. We do not have a set of observations in which changes in X_1 are independent of changes in X_2, so we have an imprecise estimate of the effect of independent changes in X_1.

In some sense, the collinear variables contain the same information about the dependent variable. If nominally “different” measures actually quantify the same phenomenon then they are redundant. Alternatively, if the variables are accorded different names and perhaps employ different numeric measurement scales but are highly correlated with each other, then they suffer from redundancy.

One of the features of multicollinearity is that the standard errors of the affected coefficients tend to be large. In that case, the test of the hypothesis that a coefficient is equal to zero may fail to reject a false null hypothesis of no effect of the explanatory variable, a Type II error.

Another issue with multicollinearity is that small changes to the input data can lead to large changes in the model, even resulting in changes of sign of parameter estimates.
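Both the inflated standard errors and this instability can be seen in a small simulation (a hedged sketch with made-up data, not the divorce example at the end of this post): the same model is fitted once with nearly uncorrelated predictors and once with highly correlated ones, and the standard errors of the coefficients are compared.

# Hedged sketch: compare coefficient standard errors with and without strong collinearity
set.seed(3)
n  <- 200
x1 <- rnorm(n)

fit_with_corr <- function(r) {
  x2 <- r * x1 + sqrt(1 - r^2) * rnorm(n)   # predictor correlated roughly r with x1
  y  <- 1 + 2 * x1 + 2 * x2 + rnorm(n)
  summary(lm(y ~ x1 + x2))$coefficients[, "Std. Error"]
}

fit_with_corr(0.1)    # standard errors with nearly uncorrelated predictors
fit_with_corr(0.99)   # much larger standard errors when x1 and x2 are highly correlated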

Remedies of Multicollinearity

Now we discuss the remedies of multicollinearity. The main solution is to keep only one of two independent variables that are highly correlated with each other in the regression model. Also make sure you have not fallen into the dummy variable trap: including a dummy variable for every category (e.g., summer, autumn, winter, and spring) together with a constant term in the regression guarantees perfect multicollinearity, as the small sketch below illustrates.
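The dummy variable trap can be reproduced in R roughly as follows (a hypothetical sketch; the season factor and response are made up): building a dummy for every season while keeping the intercept makes the dummy columns sum to the constant column, so the model is perfectly multicollinear and one coefficient comes back as NA.

# Hypothetical sketch of the dummy variable trap
set.seed(4)
season <- factor(sample(c("summer", "autumn", "winter", "spring"), 40, replace = TRUE))
y      <- rnorm(40)

dummies <- model.matrix(~ season - 1)   # one dummy per season, no reference category
rowSums(dummies)                        # every row sums to 1, i.e. equals the intercept column

coef(lm(y ~ dummies))                   # with the intercept kept, one dummy coefficient is NA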

Try seeing what happens if you use independent subsets of your data for estimation and apply those estimates to the whole data set. Theoretically, you should obtain somewhat higher variance from the smaller data sets used for estimation, but the expectation of the coefficient values should be the same. Naturally, the observed coefficient values will vary, but look at how much they vary.
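One rough way to try this in R (an illustrative sketch; the data and the choice of random halves are made up) is to refit the same model on a few random subsets and look at how much the coefficients move:

# Illustrative sketch: refit the model on random subsets and compare the coefficients
set.seed(5)
n  <- 300
x1 <- rnorm(n)
x2 <- 0.95 * x1 + sqrt(1 - 0.95^2) * rnorm(n)   # strongly collinear with x1
y  <- 1 + 2 * x1 + 2 * x2 + rnorm(n)
dat <- data.frame(y, x1, x2)

subset_coefs <- sapply(1:5, function(i) {
  idx <- sample(n, n / 2)                        # a random half of the data
  coef(lm(y ~ x1 + x2, data = dat[idx, ]))
})
round(subset_coefs, 2)   # coefficients vary noticeably from subset to subset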

Leave the model as it is, despite multicollinearity. The presence of multicollinearity doesn’t affect the efficiency of extrapolating the fitted model to new data provided that the predictor variables follow the same pattern of multicollinearity in the new data as in the data on which the regression model is based.

Drop one of the variables. An explanatory variable may be dropped to produce a model with significant coefficients. However, you lose information (because you’ve dropped a variable). The omission of a relevant variable results in biased coefficient estimates for the remaining explanatory variables that are correlated with the dropped variable.

Obtain more data, if possible. This is the preferred solution. More data can produce more precise parameter estimates (with lower standard errors), as can be seen from the formula for the variance inflation factor, which relates the variance of an estimated regression coefficient to the sample size and the degree of multicollinearity. As a rule of thumb, if the VIF of a predictor is greater than 10, the model is considered to be seriously affected by multicollinearity.
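Since this post is about VIF in R, here is a hedged sketch of how the variance inflation factor is commonly computed: by hand as 1 / (1 − R_j²), where R_j² comes from regressing the jth predictor on the remaining predictors, or with car::vif() if the car package is available (the data below are simulated for illustration):

# Hedged sketch: computing the variance inflation factor (VIF) in R
set.seed(6)
n  <- 100
x1 <- rnorm(n)
x2 <- 0.97 * x1 + sqrt(1 - 0.97^2) * rnorm(n)   # strongly collinear with x1
x3 <- rnorm(n)                                   # roughly independent predictor
y  <- 1 + x1 + x2 + x3 + rnorm(n)
dat <- data.frame(y, x1, x2, x3)

# VIF by hand: regress each predictor on the others and use 1 / (1 - R^2)
vif_x1 <- 1 / (1 - summary(lm(x1 ~ x2 + x3, data = dat))$r.squared)
vif_x1

# Or with the car package (if installed): car::vif(lm(y ~ x1 + x2 + x3, data = dat))
# Predictors with VIF > 10 are usually flagged as seriously affected by multicollinearity.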

Now we take an example using real-life data on divorce from 1920 to 1996 and show the effect of multicollinearity.

Code:

Output:

 
