Logistic regression modelling using R

What is Logistic Regression?
Logistic Regression is a classification algorithm. It is used to predict a binary outcome (1 / 0, Yes / No, True / False) given a set of independent variables. To represent the binary/categorical outcome, we use dummy variables.

You can also think of logistic regression as a special case of the linear model in which the outcome variable is categorical and the log of the odds is used as the dependent variable. In simple words, it predicts the probability of occurrence of an event by fitting the data to a logit function.
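For reference, the model expresses the log of the odds as a linear function of the predictors:

\[
\log\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k
\]

where p is the probability that the outcome takes the value 1.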

In this tutorial, we will learn how to build a GLM (generalized linear model) in R. So let's start building a logistic model.
We will generate our own data in the R console, as described below.

The function gl() ('generate levels') is useful when you want to encode long vectors of factor levels. With three arguments, gl(n, k, length) reads as 'number of levels', 'repeats of each level', 'total length'; for example, gl(2, 2, 24).
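As a quick illustration of how gl() behaves (this snippet is a demonstration, not code from the original tutorial):

# gl(n, k, length): n factor levels, each repeated k times, recycled to 'length'
gl(2, 2, 24)
#  [1] 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
# Levels: 1 2

# Labels can be attached directly
gl(2, 2, 24, labels = c("Low", "High"))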

In the dataset we are assuming for this tutorial, each observation records whether the outcome of interest occurred; this binary outcome is stored in the deter variable.

Now check the deter variable and the factors alongside it. Once the response and the factors are combined, we have our complete dataset; a sketch of how this could look is given below.
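The original data are not reproduced here, so the following is only a minimal sketch: it assumes a small illustrative dataset of 24 rows whose factors are built with gl() and whose binary response, deter, is made up for demonstration purposes.

set.seed(42)                                              # reproducible fake response
temp  <- gl(2, 2, 24, labels = c("Low", "High"))          # illustrative factor
soft  <- gl(3, 8, 24, labels = c("Hard", "Medium", "Soft"))
user  <- gl(2, 4, 24, labels = c("No", "Yes"))
deter <- rbinom(24, size = 1, prob = 0.5)                 # placeholder 0/1 outcome
deter.data <- data.frame(deter, temp, soft, user)

str(deter.data)          # structure of the assembled data frame
table(deter.data$deter)  # distribution of the binary response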

Next, we fit the model with glm() and look at summary(deter.model). The summary output reports, among other things, the two deviances discussed below and the line:

Number of Fisher Scoring iterations: 4
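A sketch of such a fit, using the illustrative data above (the right-hand side of the formula is an assumption, and the deviance values it produces will differ from the ones quoted below):

# Logistic regression: binomial family with the default logit link
deter.model <- glm(deter ~ temp + soft + user,
                   family = binomial(link = "logit"),
                   data   = deter.data)
summary(deter.model)   # coefficients, deviances, AIC, Fisher scoring iterations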

In the summary output we can see two deviances, the null deviance and the residual deviance. Here the null deviance reads as 118.627 on 23 degrees of freedom and the residual deviance as 5.656 on 8 degrees of freedom. Deviance is a measure of the goodness of fit of a model; higher values indicate a worse fit.

The null deviance shows how well the response variable is predicted by a model that includes only the intercept (the grand mean), whereas the residual deviance shows how well it is predicted once the independent variables are included.

Above, you can see that the addition of 15 model parameters (23 - 8 = 15) decreased the deviance from 118.627 to 5.656, a significant reduction in deviance. The residual deviance has dropped by 112.971 at the cost of fifteen degrees of freedom.
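Whether such a drop in deviance is statistically significant can be checked against a chi-squared distribution; the values below are the ones quoted above, and deter.model refers to the illustrative fit sketched earlier:

# Drop in deviance of 112.971 on 15 degrees of freedom
pchisq(118.627 - 5.656, df = 23 - 8, lower.tail = FALSE)   # p-value of the reduction

# anova() gives a sequential analysis of deviance for the terms in a fitted model
anova(deter.model, test = "Chisq")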

Degrees of freedom: the degrees of freedom indicate how many independent pieces of information are available; here they equal the number of observations minus the number of parameters estimated by the model.
If your Null Deviance is really small, it means that the Null Model explains the data pretty well. Likewise with your Residual Deviance.

 

Next, fit a second, related model, deter.model1, and summarise it in the same way.
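The exact specification of deter.model1 is not given in the text; purely as an illustration, the sketch below drops one of the assumed predictors from deter.model:

# Hypothetical second model: drop the 'soft' factor from the first fit
deter.model1 <- update(deter.model, . ~ . - soft)
summary(deter.model1)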
Now check the correlation between the variables of deter.model1; one way of doing this is sketched below.
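The exact command is not shown, so both of the approaches below are assumptions: the first looks at the correlation of the estimated coefficients, the second at the pairwise correlation of the dummy-coded predictor columns.

# Correlation of the estimated coefficients
summary(deter.model1, correlation = TRUE)$correlation

# Correlation of the predictor columns of the model matrix (intercept dropped)
cor(model.matrix(deter.model1)[, -1])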

Fisher Scoring

What about the Fisher scoring algorithm? Fisher's scoring is a variant of Newton's method for solving maximum likelihood problems numerically.

For deter.model and deter.model1 we see that Fisher's scoring algorithm needed four iterations to perform the fit. This doesn't tell you much beyond the fact that the model did indeed converge and had no trouble doing so.
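If you want to look at the iteration count programmatically (using the illustrative fits from above), the glm object stores it, and glm.control() lets you adjust the convergence settings:

deter.model$iter     # number of Fisher scoring (IWLS) iterations used
deter.model1$iter

# Convergence tolerance and maximum iterations can be changed if needed
glm(deter ~ temp + soft + user, family = binomial, data = deter.data,
    control = glm.control(epsilon = 1e-8, maxit = 25, trace = TRUE))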

Information Criteria

The Akaike Information Criterion (AIC) provides a method for assessing the quality of your model through comparison of related models. It is based on the deviance, but penalizes you for making the model more complicated. Much like adjusted R-squared, its intent is to prevent you from including irrelevant predictors.
However, unlike adjusted R-squared, the number itself is not meaningful. If you have several similar candidate models (where all of the variables of the simpler model occur in the more complex models), then you should select the model that has the smallest AIC.

So it’s useful for comparing models but isn’t interpretable on its own.
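Using the two illustrative fits from above, the comparison can be done directly in R; for a GLM, the AIC equals the deviance plus twice the number of estimated parameters, up to an additive constant.

AIC(deter.model, deter.model1)   # the model with the smaller AIC is preferred

# The AIC is also reported by summary() and stored on the fitted object
deter.model$aic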
