# Naive Bayes Classifier and Its Application Using R

Naive Bayes, or the Naive Bayes Classifier, is built on Bayes' theorem from the theory of probability. Probability is the chance of an event occurring; it relates to our everyday life and helps us solve many real-life problems.

There is a probability of a single event, calculated as the proportion of cases in which that particular event happens. Similarly, there is a probability of a group of events, calculated as the proportion of cases in which the events occur together.

Another case is conditional probability: if it is known that one event has already happened, what is the probability that another event happens after that? For example, if A is the first event and B is the second event, then P(B|A) is the probability of event B taking place given that event A has occurred.

The equation goes like,

P(B | A) = P(B) * P(A | B) / P(A)

where,

A is the first event
B is the second event
P(B|A) is the probability of event B taking place given that event A has occurred
P(A|B) is the probability of event A taking place given that event B has occurred
P(A) is the probability of event A taking place
P(B) is the probability of event B taking place

This result is known as Bayes' theorem in probability, and the Naive Bayes Classifier rests directly on it.
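As a quick sanity check, the theorem can be verified with made-up numbers (all values below are purely illustrative):

```r
# Bayes' theorem with illustrative probabilities
p_B <- 0.3          # P(B)
p_A_given_B <- 0.5  # P(A|B)
p_A <- 0.25         # P(A)

# P(B|A) = P(B) * P(A|B) / P(A)
p_B_given_A <- p_B * p_A_given_B / p_A
p_B_given_A  # 0.6
```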

The algorithm assumes that the attributes of the data are independent of each other, although in reality they may be dependent in some way. When this independence assumption holds, at least approximately, Naive Bayes can perform as well as or better than more complex models. It works especially well when all the attributes are categorical, though it can also be used with continuous attributes. For numeric attributes, however, it makes the further assumption that the variable is normally distributed within each class.

### Example

In our example, we will work on a dataset of 9134 customer records. The attributes provided for each customer are “Customer ID”, “State”, “Education”, “Employment Status”, “Gender”, “Location”, “Marital Status”, “Vehicle” and “Income”.

The objective of building our model is to predict the income level of customers. The income is divided into two levels, high and low: customers with income below 35000 are considered low-income customers, and those with income above 35000 are high-income customers.

The steps to follow for the model building:

1. Import the data.
2. Clean the data.
3. Create a derived column from the income column. The new column indicates only the income level (high or low), based on the cutoff above.
4. Divide the data in a 7:3 ratio. The first part is the training data, used to let the model learn the trend in the data; the second part is the testing data, used to predict income levels and evaluate the model.
5. Inspect the predictions made by the model and check how accurate they are.

As explained above, we start our model building from the first step.
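The original import code is not shown; a minimal sketch could look like the following. The file name and column names are assumptions, so a tiny synthetic CSV is written first to keep the sketch self-contained — in practice you would point `read.csv()` at the actual customer file.

```r
# Step 1 sketch: importing the data (file and column names are assumed)
tmp <- tempfile(fileext = ".csv")
writeLines(c(
  "Customer,State,Education,EmploymentStatus,Gender,Location,MaritalStatus,Vehicle,Income",
  "C1,Arizona,Bachelor,Employed,F,Urban,Married,SUV,56274",
  "C2,California,Master,Unemployed,M,Suburban,Single,Four-Door Car,0"
), tmp)

cust <- read.csv(tmp, stringsAsFactors = FALSE)
dim(cust)  # with the real file this would show 9134 rows and 9 columns
```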

### Data Cleaning

As mentioned above, our target is to predict the income levels of customers, so we create a column stating the income level, high or low, according to the income.
If a customer has an income of more than 35000, we put the customer in the “high” slot; otherwise, in the “low” slot.
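A minimal sketch of this derived column, using a few synthetic `Income` values in place of the real column:

```r
# Derive the income level: "high" above 35000, otherwise "low"
cust <- data.frame(Income = c(56274, 0, 48767, 22139))  # synthetic values
cust$inc <- ifelse(cust$Income > 35000, "high", "low")
cust$inc  # "high" "low" "high" "low"
```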

Now we remove the 9th variable (Income), since the derived income-level column already captures it.

A few variables that are irrelevant to this model should also be removed: “Customer”, “Gender” and “Marital Status”. These variables should have no direct bearing on a customer's income.
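A sketch of dropping these columns — the exact column names are assumptions based on the attribute list given earlier, and a one-row synthetic frame stands in for the real data:

```r
# Synthetic frame with the nine attributes plus the derived "inc" column
cust <- data.frame(Customer = "C1", State = "Arizona", Education = "Bachelor",
                   EmploymentStatus = "Employed", Gender = "F",
                   Location = "Urban", MaritalStatus = "Married",
                   Vehicle = "SUV", Income = 56274, inc = "high")

# Drop Income (redundant once "inc" exists) and the irrelevant variables
drop_cols <- c("Income", "Customer", "Gender", "MaritalStatus")
cust <- cust[, !(names(cust) %in% drop_cols)]
names(cust)  # six variables remain
```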

Checking the structure of the variables,

We see there are 9134 records and 6 variables structured in a data frame. The only problem is that the variable “inc” is of the character data type. Since this variable has only two levels, high and low, we have to convert it to the factor data type.
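The conversion itself is a one-liner; here it is on a small stand-in vector:

```r
# "inc" is read in as character; convert it to a two-level factor
inc_chr <- c("high", "low", "low", "high")
inc_fct <- factor(inc_chr)
class(inc_fct)   # "factor"
levels(inc_fct)  # "high" "low"
```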

Now, finally, the data looks good.

### Naive Bayes Classifier Model

Installing the libraries,
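The article does not name the packages; `e1071` (which provides `naiveBayes()`) and `caret` (which provides `confusionMatrix()`) are the usual choices and are an assumption here. A setup sketch:

```r
# Install once, then load. e1071 and caret are assumed package choices:
# e1071 supplies naiveBayes(), caret supplies confusionMatrix().
pkgs <- c("e1071", "caret")
to_install <- pkgs[!vapply(pkgs, requireNamespace, logical(1), quietly = TRUE)]
if (length(to_install)) install.packages(to_install)

library(e1071)
library(caret)
```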

Now dividing the dataset into training and testing set, keeping the ratio as 7:3,
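A sketch of the 7:3 split. The seed and the index-based sampling method are assumptions, and a synthetic two-column frame stands in for the cleaned data:

```r
# 70/30 train/test split on 9134 rows
set.seed(123)
cust <- data.frame(x = rnorm(9134),
                   inc = factor(sample(c("high", "low"), 9134, replace = TRUE)))

train_idx <- sample(nrow(cust), size = round(0.7 * nrow(cust)))
train <- cust[train_idx, ]
test  <- cust[-train_idx, ]
c(nrow(train), nrow(test))  # 6394 and 2740
```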

Running the naiveBayes() function, keeping “inc” as the dependent variable and the other 5 variables as independent variables (indicated by the “.” sign). We run the model on the training set first.
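A sketch of the fit, assuming `naiveBayes()` from the `e1071` package and using a small synthetic training frame in place of the real one:

```r
library(e1071)  # assumed package providing naiveBayes()

# Synthetic stand-in for the training data
set.seed(1)
train <- data.frame(
  State     = factor(sample(c("Arizona", "California"), 200, replace = TRUE)),
  Education = factor(sample(c("Bachelor", "Master"), 200, replace = TRUE)),
  inc       = factor(sample(c("high", "low"), 200, replace = TRUE))
)

# "inc ~ ." reads: predict inc from every other column
data_nb <- naiveBayes(inc ~ ., data = train)
data_nb$apriori  # class counts behind the a-priori probabilities
```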

On printing “data_nb” we see the summary of the fitted model. We read it as follows.

Under the heading “A-priori probabilities”, we see that there is a 49% chance that a customer's income is low and, similarly, a 51% chance that it is high; these proportions come from the training dataset.

Under the heading “Conditional probabilities”, we get the conditional probabilities of all the variables individually.
If the State is “Arizona”, the probability of the income being high is greater than the probability of it being low. Similarly, if the State is “California”, the probability of the income being low is greater than the probability of it being high. We read the rest in the same manner.

Next, if the Education is “Bachelor”, the probability of the income being low is greater than the probability of it being high. For “Master”, in contrast, the probability of the income being high is much greater than the probability of it being low, which is logical.

We can read the other observations in the same way.

Now running the model on the test data and getting the predictions,

The variable “pred_nb” stores the high or low level predicted for each record. To read it properly, let's build a confusion matrix from it.
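A self-contained sketch of the prediction step, again on synthetic data; base `table()` builds the matrix here, while the kappa, p-value, sensitivity and specificity figures discussed below would come from `caret::confusionMatrix()` applied to the same two vectors:

```r
library(e1071)  # assumed package providing naiveBayes()

# Synthetic stand-in for the cleaned data, split 210/90
set.seed(2)
d <- data.frame(
  Location = factor(sample(c("Rural", "Suburban", "Urban"), 300, replace = TRUE)),
  inc      = factor(sample(c("high", "low"), 300, replace = TRUE))
)
train <- d[1:210, ]
test  <- d[211:300, ]

nb <- naiveBayes(inc ~ ., data = train)
pred_nb <- predict(nb, newdata = test)   # predicted high/low labels

cm <- table(Predicted = pred_nb, Actual = test$inc)
cm
sum(diag(cm)) / sum(cm)                  # accuracy
```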

The matrix shows a very good result.

### Validation Observations

1. The diagonal values are the numbers of correct predictions and the off-diagonal values are the numbers of wrong predictions. We see that there are far fewer wrong predictions (352 + 0 = 352) than correct predictions (1382 + 1047 = 2429).
2. The accuracy is high (87%), which is a good indication.
3. The p-value is much lower than 0.05 (<2.2e-16), which is desired.
4. The Kappa statistic is also high (around 75%), indicating that the model's accuracy is far better than the accuracy expected from random chance.
5. Sensitivity and Specificity are also close to each other.

Hence with all these observations, we can say it is a good model.

### Insights

1. For State, customers living in “Arizona”, “Nevada”, “Oregon” and “Washington” have a higher probability of income being high than low, while those living in “California” show the opposite result. However, the gap between the two levels is small for every State.

2. For Education, customers holding “Bachelor”, “Doctor” and “High School or Below” degrees have a higher probability of income being low than high, while those holding a “Master” show the opposite insight. “College” customers fall in between, with roughly equal probabilities of high and low income, which is meaningful.

3. For Employment Status, customers who are “Disabled”, “Retired” or “Unemployed”, and also those on “Medical Leave”, have low income; their probability of getting a high income is exactly 0. The extreme opposite holds for “Employed” customers: their probability of getting a high income is exactly 1, while that of getting a low income is much lower. This makes sense when we compare it with real life.

4. For Location, customers living in “Rural” and “Urban” areas have a much higher probability of income being high than low. On the other hand, those living in “Suburban” areas show the opposite result: the probability of income being low exceeds that of it being high. For all Locations, the gap between the two levels is large.

5. For Vehicle, customers having a “Four-Door Car”, “Luxury Car” or “Two-Door Car” have a higher probability of income being high than low. Customers having a “Luxury SUV”, “Sports Car” or “SUV” show just the opposite result. However, the gap between the two levels is small for every Vehicle.