# Extending Stochastic Models for Autonomous Vehicle Systems

How stochastic models bring intelligence through their own elements: randomness and trials

The word “stochastic” comes from the Greek stokhazesthai, which means to aim at or to guess. A stochastic process, also called a random process, is one whose outcomes are uncertain. Randomness and the probability distribution of each outcome are the key elements of a stochastic process.

For example, suppose (according to a cancer survey) there are 50 possible metastatic locations for cancer to spread to in the human body. By applying stochastic modeling to this random input data, and increasing the number of trials, we can draw conclusions about how the cancer spreads: which organ is likely to be affected next, which organs will not be touched by the cancer, and finally what kind of treatment should be given to the patient.

Stochastic models help us solve complex problems that involve randomness by nature.

They have the following key elements that bring intelligence in their own way.

A random walk is a process that takes random input in an unpredictable way to produce a corresponding output; it does not use any past data to predict the future.

The term “random walk” was coined by the mathematician Karl Pearson (1857 – 1936) in 1905. Random walks are very useful in algorithmic trading, and they have also been applied to air-traffic collision detection and safety.

Brownian motion, or the Wiener process, was discovered by the botanist Robert Brown (1773 – 1858) while studying particle movements under a microscope, and was later described mathematically by Norbert Wiener (1894 – 1964). It is a continuous-time stochastic process and a special case of the Lévy process (a stochastic process with independent, stationary increments). It generates a pattern out of randomness: as the number of trials over the sample space increases, the aggregate behaviour becomes effectively deterministic, so we are able to predict the outcomes. It is widely used in finance, biochemistry, AI, etc.
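To make the idea concrete, here is a minimal R sketch of a simple symmetric random walk (the data are invented, not tied to any real market); suitably rescaled, such walks converge to Brownian motion.

```r
set.seed(42)                                   # reproducible randomness
n     <- 1000                                  # number of steps
steps <- sample(c(-1, 1), n, replace = TRUE)   # each step is +1 or -1 with equal probability
walk  <- cumsum(steps)                         # position of the walker after each step

head(walk)    # first few positions
mean(steps)   # close to 0: no drift on average
```

Each position depends only on the previous position plus fresh randomness, which is exactly the structure the Markov chains below formalize.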

The Poisson process is a stochastic process that counts the number of events occurring in a given interval of time. The inter-arrival times between events are independent of one another. The Poisson process is a continuous-time counting process; it helps us predict the probability of a certain number of events happening in a fixed interval of time. It is named after Siméon Denis Poisson, who described it in 1838. For example, it can help an autonomous vehicle model the number of occurrences of traffic signs on the road and, with the help of the other stochastic properties below, predict what the next sign should be.
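A quick R sketch of this counting behaviour, assuming an illustrative rate of 3 signs per kilometre (the rate is invented for the example):

```r
set.seed(1)
lambda <- 3                           # assumed rate: 3 signs per kilometre (illustrative)
counts <- rpois(10000, lambda)        # signs seen in each of 10,000 simulated kilometres
mean(counts)                          # close to lambda

gaps <- rexp(10000, rate = lambda)    # distances between consecutive signs
mean(gaps)                            # close to 1/lambda
```

The simulated counts average out to the rate λ, and the independent exponential gaps are what make the process memoryless.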

A Markov chain is a stochastic process that moves from one state to another, where the new state depends only on the current position and not on the historical positions. This is known as the memoryless property (Markov property) of a stochastic process. The changes of position can be represented with a transition matrix. The Wiener process and the Poisson process are both continuous-time Markov processes. The Markov chain is named after the Russian mathematician Andrey Markov (1856 – 1922). If we apply a Markov chain to cancer prediction in the body, its properties lead us to the following information:

1. Which organ is likely to be affected by cancer next?
2. Which organs won’t be affected by cancer?
3. What is the probability of the patient surviving the cancer?
4. What kind of treatment should be given to the patient?

So it helps doctors give the correct treatment to patients.
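A toy two-state chain in R makes the memoryless stepping concrete (the states and probabilities here are invented for illustration, not taken from any medical data):

```r
# Two invented states with made-up transition probabilities.
P <- matrix(c(0.9, 0.1,     # from state 1: stay with 0.9, move with 0.1
              0.4, 0.6),    # from state 2: move with 0.4, stay with 0.6
            nrow = 2, byrow = TRUE)

set.seed(7)
state <- 1
for (i in 1:5) {
  # The next state is drawn from the row for the CURRENT state only:
  # no earlier history is consulted (the memoryless property).
  state <- sample(1:2, 1, prob = P[state, ])
  cat("step", i, "-> state", state, "\n")
}
```

Every step consults only the current row of the transition matrix, which is the whole content of the Markov property.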

This is why Markov chains are a natural fit for experimentation such as AVS, where the scenarios are randomized and behaviour changes based on the randomness.

The Markov decision process (an extension of the Markov chain) is the framework behind reinforcement learning. In the function-learning view y = f(x): instead of being given correct outputs y, the learner receives inputs x together with reinforcement signals z, and from x and z it must learn the function f and the outputs y; in turn, the observed y helps refine f.

It makes a decision in every situation depending only on the current state.

I will explain the Markov decision process with the following example.
Imagine that the following table is a grid of places in a city, and an AVS needs to reach the Goal state from the Start state by executing actions. The grid has boundaries: at cell (1,1), if you try to go UP you stay where you are, and likewise if you try to go LEFT; if you go RIGHT, you move to the next place.

Actions: UP, DOWN, LEFT, RIGHT
Question:
What is the shortest sequence getting from Start to Goal?
Ans1: UP, UP, RIGHT, RIGHT, RIGHT
Ans2: RIGHT, RIGHT, UP, UP, RIGHT
Both answers are correct solutions of the same length, so we take the first one.

Markov property
In order to know the information of the near future (say, at time t+1), only the present information at time t matters.

Given a sequence X0, X1, …, Xt:

The first-order Markov property says

P(Xt | Xt-1, …, X0) = P(Xt | Xt-1)

that is, Xt depends only on Xt-1. Therefore, Xt+1 will depend only on Xt.

The second-order Markov property says

P(Xt | Xt-1, …, X0) = P(Xt | Xt-1, Xt-2)

that is, Xt depends only on Xt-1 and Xt-2.

Two assumptions follow:
• Only the present matters
• Stationarity (the rules do not change)

Markov Decision Process
STATES: S
MODEL: T(s,a,s’) ~ P(s’|s,a)
ACTION: A(s), A
REWARDS: R(s), R(s,a), R(s,a,s’)
POLICY: π(s)  –> a
π * (optimal policy)

STATES: S
(from above example there are 12 states available)
States are feature representations of the data collected from the environment.
They can be either discrete or continuous.

MODEL: T(s, a, s’) ~ P(s’|s, a)
The model, or transition model, describes the rules of the environment (for this example, how to reach the Goal state). It is a function of three variables: the CURRENT STATE (s), the ACTION (a), and the NEW STATE (s’). It gives the probability of landing in the new state s’ given that the agent takes action a in state s. In short, it tells you what will happen if you do something in a particular place.

In a deterministic environment, any landing state other than the determined one has zero probability.

For example:

• Deterministic environment: if you take a certain action, say Up, you will certainly perform that action, with probability 1.
• Stochastic environment: if you take the same action, Up, there is a certain probability, say 0.8, of actually performing it, and a 0.1 probability each of performing an action perpendicular to it (either Left or Right). Here, for state s and the Up action, the transition model gives T(s, Up, s’) = P(s’|s, Up) = 0.8.
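The stochastic transition model just described can be written directly as a small R function (a sketch assuming the 0.8/0.1/0.1 split from the text; the function name and representation are my own):

```r
# T(s, a, s'): probability of each realized move, given the intended action.
# With probability 0.8 the intended move happens; with probability 0.1 each,
# a move perpendicular to it happens instead.
transition <- function(intended) {
  perp <- switch(intended,
                 Up = , Down = c("Left", "Right"),   # perpendicular to Up/Down
                 Left = , Right = c("Up", "Down"))   # perpendicular to Left/Right
  setNames(c(0.8, 0.1, 0.1), c(intended, perp))
}

transition("Up")
# the three probabilities sum to 1, as a transition model must
```

Calling `transition("Up")` returns 0.8 for Up and 0.1 for each of Left and Right, i.e. a proper probability distribution over next moves.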

It follows the first-order Markov property. We can also say that an autonomous vehicle operates in a stochastic environment: the AVS is composed of decision-making processes that move between states defined by position, speed, and other attributes of the vehicle, and each action taken changes those states and hence the AVS.

ACTION: A(s), A
An action is something that can be performed in a particular state.
Ex: Up, Down, Left, Right. Actions can also be either discrete or continuous. A = {Up, Down, Left, Right}

It can be treated as a function of the state, a = A(s): depending on the state, the function decides which actions are possible.

REWARDS: R(s), R(s,a), R(s,a,s’)
The reward of a state quantifies the usefulness of entering that state. There are three different forms for representing the reward, namely R(s), R(s, a), and R(s, a, s’), but they are all equivalent. Domain knowledge plays an important role in assigning rewards to different states, since minor changes in the reward can matter for finding the optimal solution to an MDP problem. The reward is a scalar value.

Ex: an example for R(s): the Goal state gets a reward of 1 and every non-goal state gets a reward of -1.
R(Goal) = 1
R(Not Goal) = -1

POLICY: π(s) –>a
The policy is a function that takes the state as an input and outputs the action to be taken. Therefore, the policy is a command that the agent has to obey.

It is a guide telling which action to take for a given state.
π* (optimal policy) is the policy that maximizes the expected reward.

From these MDP properties, you get an idea of how to apply an MDP (reinforcement learning) to an autonomous vehicle. For example, suppose an autonomous vehicle operates in an idealized grid-like city, with roads going North-South and East-West. Traffic signs act as the states (S). The transition model (T) then tells the AV which signs are likely to appear next on the road, helping it make decisions.

If the AV detects a stop sign, it takes an action (A): release the throttle, decrease speed, and stop the vehicle. The AV is rewarded (R) for each successful completion of the trip. From the rewards and penalties, the AV should learn an optimal policy (π*) for driving on city roads, obeying traffic rules correctly, and trying to reach the destination within a goal time.
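As a sketch of how an optimal policy π* can actually be computed, here is value iteration on a tiny invented MDP in R (the states, rewards, and transition probabilities are made up for illustration, not taken from a real vehicle):

```r
# A tiny invented MDP: 3 states (state 3 = goal), 2 actions.
# Tr[s, a, s2] = P(s2 | s, a); R gives the reward of each state.
Tr <- array(0, dim = c(3, 2, 3))
Tr[1, 1, ] <- c(0.8, 0.2, 0.0)
Tr[1, 2, ] <- c(0.0, 0.9, 0.1)
Tr[2, 1, ] <- c(0.1, 0.8, 0.1)
Tr[2, 2, ] <- c(0.0, 0.1, 0.9)
Tr[3, 1, ] <- c(0, 0, 1)       # the goal state is absorbing
Tr[3, 2, ] <- c(0, 0, 1)
R     <- c(-1, -1, 10)         # small penalty per step, big reward at the goal
gamma <- 0.9                   # discount factor

V <- rep(0, 3)
for (iter in 1:200) {
  # Q[s, a] = expected discounted value of taking action a in state s
  Q <- sapply(1:2, function(a) sapply(1:3, function(s) sum(Tr[s, a, ] * V)))
  V <- R + gamma * apply(Q, 1, max)    # Bellman update
}
policy <- apply(Q, 1, which.max)       # pi*(s): the action with the best Q-value
policy
```

Here the learned policy picks action 2 in both non-goal states, because that action drives the agent toward the absorbing goal state with the high reward.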

What are the things need to be done to achieve AVS level 4?

Level 0:           No Automation
Level 1:           Driver Assistance Required
Level 2:           Partial Automation Options Available
Level 3:           Conditional Automation
Level 4:           High Automation
Level 5:           Full Automation

Currently we’re at Level 3. Audi claims that the new A8 is the first production car to achieve Level 3 autonomy, not Level 4 as Motor Trend notes. The Audi AI traffic jam pilot can take over the tedious job of creeping through highway traffic jams at speeds below 37 mph.

Compared with Level 3, Level 4 is highly automated. In Level 3, if the driver goes far above 37 mph, the car’s autonomy is ruled out, and in that situation the driver needs to take responsibility for the car. At Level 4, by contrast, autonomous vehicles will be able to handle most “dynamic driving tasks”, to use SAE International’s terminology.

This means a Level 4 car can handle most normal driving tasks on its own, but we still need driver intervention from time to time, for example during poor weather conditions. So the following things need to be done to achieve AVS Level 4:

1. A Level 3 autonomous car should satisfy SAE International’s terminology for Level 4.
2. A Level 4 AVS is capable of performing all driving functions under certain conditions, so we need to introduce more real-world driving problems, with conditions, for the AV to solve as it moves from Level 3 to Level 4.
3. Upgrade a Level 3 AVS to Level 4 by introducing reinforcement learning: if an AVS already knows which traffic signs will appear next, it can anticipate them and make decisions accordingly, so the driver and other people in the car are treated as cargo!
4. In a Level 3 AVS, in specific situations and environments where the AV faces no interruption (like a highway), the human driver is free to do whatever they want. At Level 4 this specialization must be upgraded so that the AV can drive itself independently in most environments, with some exceptions for weather or unusual conditions; a human may still need to take over at times. RADAR, LIDAR, GPS, digital cameras, and processors help upgrade a Level 3 AV to Level 4.

Demonstration of how a Markov chain is useful for achieving AVS Level 4, considering a road-sign detection and action scenario

An autonomous vehicle detects traffic signs on the road using its camera and classifies them with the help of the kNN algorithm. It then uses a Markov chain to predict what the next traffic sign will be, given the current one. The Markov transition matrix below is based on the Trichy to Madurai national highway in Tamil Nadu, India.

Taking high powers of the transition matrix gives its limiting distribution.
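Since the article’s own matrix output is not reproduced here, a small two-state sketch (with invented probabilities) shows the idea: repeated squaring drives every row of the transition matrix to the same limiting distribution.

```r
# A small two-state chain with invented transition probabilities.
P2 <- matrix(c(0.7, 0.3,
               0.2, 0.8), nrow = 2, byrow = TRUE)

Pn <- P2
for (i in 1:20) Pn <- Pn %*% Pn   # square the matrix 20 times: a very high power
round(Pn, 4)
# Both rows converge to the same limiting distribution, (0.4, 0.6) for this chain.
```

The limiting row (0.4, 0.6) can be verified directly: it satisfies πP = π for this chain.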

The following R code simulates the autonomous vehicle over 100 kilometres. During that distance, the AVS detects traffic signs and estimates the probability of each sign appearing after the current one. We then increase the sample size to get more accurate probabilities for predicting the next sign, comparing the 100-step result with a million-step simulation.

markov <- function(init, mat, n, labels) {
  # Simulate n steps of a Markov chain with initial distribution `init`
  # and transition matrix `mat`; return the visited states as labels.
  if (missing(labels)) labels <- 1:length(init)
  simlist <- numeric(n + 1)
  states <- 1:length(init)
  simlist[1] <- sample(states, 1, prob = init)    # draw the starting state
  for (i in 2:(n + 1)) {
    simlist[i] <- sample(states, 1, prob = mat[simlist[i - 1], ])  # next state depends only on the current one
  }
  labels[simlist]
}

# 10x10 transition matrix of sign types; each row sums to 1
P <- matrix(c(0,0.08,0.17,0.28,0.28,0,0.02,0.01,0.08,0.08,
0.08,0.02,0.1,0.22,0.18,0.02,0.3,0,0.08,0,
0.02,0.2,0,0.3,0.38,0,0.08,0,0.02,0,
0.22,0.4,0.02,0,0.08,0.09,0,0,0.11,0.08,
0.28,0,0.1,0.1,0.01,0.09,0.1,0.12,0.08,0.12,
0.4,0,0.2,0,0,0.04,0,0.3,0,0.06,
0,0.3,0.01,0,0.07,0.3,0,0.2,0.1,0.02,
0,0,0.2,0,0,0.45,0.25,0,0,0.1,
0,0,0.1,0.05,0,0,0.25,0,0.2,0.4,
0,0,0.1,0.05,0,0.01,0,0.37,0.33,0.14), nrow=10, byrow=TRUE)

states <- c("St","Sl","Sb","Pt","Sch","M","Nr","Rw","H","P")  # traffic-sign labels

rownames(P) <- states
colnames(P) <- states

init <- rep(1/10, 10)  # initial distribution: uniform over the 10 sign types
# simulate the chain for 100 steps
simlist <- markov(init, P, 100, states)
simlist
table(simlist)/100

# compare with a million-step simulation
steps <- 1000000
simlist <- markov(init, P, steps, states)
table(simlist)/steps

Output:

simlist <- markov(init,P,100,states)
simlist
 “M”   “St”  “P”   “Rw”  “Sb”  “H”   “P”   “H”   “Nr”  “M”   “St”  “Sb”  “Sch” “St”  “Sch” “St”  “Sl”  “Pt”  “H”
 “Nr”  “Sl”  “Sch” “St”  “Sl”  “Sb”  “Sch” “H”   “P”   “Rw”  “M”   “St”  “Nr”  “Sl”  “M”   “Rw”  “M”   “P”   “P”
 “Rw”  “M”   “Rw”  “M”   “Rw”  “M”   “Sb”  “Sl”  “H”   “H”   “P”   “Rw”  “M”   “Rw”  “Nr”  “M”   “St”  “Sb”  “Sch”
 “M”   “Sb”  “Pt”  “H”   “P”   “Rw”  “Nr”  “Rw”  “Nr”  “M”   “St”  “Sl”  “Nr”  “Sl”  “Sch” “Nr”  “Sl”  “Pt”  “St”
 “Sch” “Nr”  “M”   “St”  “Sl”  “Sb”  “Sch” “St”  “Sch” “P”   “P”   “H”   “P”   “H”   “Sb”  “Nr”  “Sl”  “Pt”  “Sl”
 “Nr”  “Sl”  “Nr”  “Rw”  “P”   “Rw”

table(simlist)/100
simlist
H    M   Nr    P   Pt   Rw   Sb  Sch   Sl   St
0.09 0.13 0.12 0.11 0.04 0.12 0.08 0.09 0.12 0.11

steps <- 1000000
simlist <- markov(init,P,steps,states)
table(simlist)/steps

This methodology helps the autonomous vehicle make decisions early by predicting the upcoming traffic signs on the road. It is easier to make a decision based on information that has already been identified.

How can the AVS also recognize road signs (images) to further achieve Level 4?
Autonomous vehicles use cameras and OpenCV to recognize road signs with the help of machine learning algorithms. Here, we use the k-Nearest Neighbors (kNN) algorithm in R to predict traffic signs. The following steps explain how kNN helps an AV recognize road-sign images.

Recognizing a road sign with kNN
After several trips with a human behind the wheel, it is time for the Autonomous Vehicle to attempt the test course alone. As it begins to drive away, its camera captures the following image:

Can you apply a kNN classifier to help the car recognize this sign?

R Code:
library(class)       # provides knn()
library(tidyverse)   # provides read_csv()
traffic_signs <- read_csv("E:/internship in stepup analytics/extened stochastic model for autonomous vehicle system/traffic_sign3.csv")
signtype <- traffic_signs$sign_type
nextsign <- traffic_signs[343, 3:50]   # the 48 colour features of the newly observed sign

# Classify the next sign observed
knn(train = traffic_signs[-c(1:2)], test = nextsign, cl = signtype)

Output:
knn(train = traffic_signs[-c(1:2)], test = nextsign, cl = signtype)
 stop

Levels: hospital man_at_work narrow_road pedestrian petrol road_wideness school speed speed_break stop

We’ve trained our first nearest neighbor classifier! The AV successfully identified the sign and stopped safely at the intersection. So how did the knn() function correctly classify the stop sign? The answer is that the sign was in some way similar to another stop sign. kNN isn’t really learning anything; it simply looks for the most similar example.

Exploring the traffic sign dataset
To better understand how the knn() function was able to classify the stop sign, it may help to examine the training dataset used. Each previously observed street sign was divided into a 4×4 grid, and the red, green, and blue levels of each of the 16 center pixels were recorded, as illustrated here.

The result is a dataset that records the sign_type as well as 16 x 3 = 48 color properties of each sign.
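A sketch of how such a 48-value feature row could be produced (the pixel data here are invented; the article does not show the real extraction pipeline):

```r
set.seed(3)
# One hypothetical 4x4 sign crop: a matrix per colour channel, values 0-255.
r <- matrix(sample(0:255, 16, replace = TRUE), 4, 4)
g <- matrix(sample(0:255, 16, replace = TRUE), 4, 4)
b <- matrix(sample(0:255, 16, replace = TRUE), 4, 4)

# Flatten the 16 pixels of each channel into a single 48-value feature row.
features <- c(r = as.vector(r), g = as.vector(g), b = as.vector(b))
length(features)   # 48 colour properties, matching the training data
```

Each sign then becomes one row of 48 numbers, which is what knn() compares by distance.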

R Code:

# Examine the structure of the signs dataset
str(traffic_signs)

# Count the number of signs of each type
table(traffic_signs$sign_type)

# Check r10’s average red level by sign type
aggregate(r10 ~ sign_type, data = traffic_signs, mean)

Output:
aggregate(r10 ~ sign_type, data = traffic_signs, mean)

As you might have expected, stop signs tend to have a higher average red value. This is how kNN identifies similar signs.

Classifying a collection of road signs
Now that the autonomous vehicle has successfully stopped on its own, we feel confident in allowing the car to continue the test course. The test course includes 59 additional road signs divided into ten types:

At the conclusion of the trial, you are asked to measure the car’s overall performance at recognizing these signs.

R code:
# Use kNN to identify the test road signs
trainsigns <- traffic_signs[traffic_signs$sample == "train", 3:50]      # training features
testsigns  <- traffic_signs[traffic_signs$sample == "test",  3:50]      # test features
signtypes  <- traffic_signs$sign_type[traffic_signs$sample == "train"]  # training labels
signspred  <- knn(train = trainsigns, test = testsigns, cl = signtypes)

# Create a confusion matrix of the actual versus predicted values
signsactual <- traffic_signs$sign_type[traffic_signs$sample == "test"]
table(signspred, signsactual)

# Compute the accuracy
mean(signspred == signsactual)

Output:
table(signspred,signsactual)

mean(signspred == signsactual)
 0.9464286

That Autonomous Vehicle is really coming along! The confusion matrix lets you look for patterns in the classifier’s errors.

There is a complex relationship between k and classification accuracy: bigger is not always better. With smaller neighborhoods, kNN can identify subtler patterns in the data. That is the valid reason for keeping k as small as possible (but no smaller): a smaller k may capture subtler patterns.

Testing other ‘k’ values
By default, the knn() function in the class package uses only the single nearest neighbor. Setting a k parameter allows the algorithm to consider additional nearby neighbors. This enlarges the collection of neighbors which will vote on the predicted class.

Compare k values of 1, 7, and 15 to examine the impact on traffic sign classification accuracy.

R Code:
# Compute the accuracy of the baseline model (default k = 1)
k_1 <- knn(train = trainsigns, test = testsigns, cl = signtypes)
mean(signsactual == k_1)

# Modify the above to set k = 7
k_7 <- knn(train = trainsigns, test = testsigns, cl = signtypes, k = 7)
mean(signsactual == k_7)

# Set k = 15 and compare to the above
k_15 <- knn(train = trainsigns, test = testsigns, cl = signtypes, k = 15)
mean(signsactual == k_15)

Output:
k_1 <- knn(train = trainsigns, test = testsigns, cl = signtypes)
mean(signsactual == k_1)
 0.9464286

k_7 <- knn(train = trainsigns, test = testsigns, cl = signtypes, k = 7)
mean(signsactual == k_7)
 0.9375

k_15 <- knn(train = trainsigns, test = testsigns, cl = signtypes, k = 15)
mean(signsactual == k_15)
 0.7410714

Which value of k gave the highest accuracy? k = 1, with an accuracy of 0.9464286.

Seeing how the neighbors voted
When multiple nearest neighbors hold a vote, it can sometimes be useful to examine whether the voters were unanimous or widely separated.

For example, knowing more about the voters’ confidence in the classification could allow an autonomous vehicle to use caution in the case there is any chance at all that a stop sign is ahead.

Here, we will learn how to obtain the voting results from the knn() function.

R Code:
# Use the prob parameter to get the proportion of votes for the winning class
signpred <- knn(train = trainsigns, test = testsigns, cl = signtypes, k = 7, prob = TRUE)

# Get the "prob" attribute from the predicted classes
signprob <- attr(signpred, "prob")

# Examine the first several predictions
head(signpred)

# Examine the proportion of votes for the winning class
head(signprob)

Output:

head(signpred)
 stop stop stop stop stop stop

head(signprob)
 1 1 1 1 1 1

Now you can get an idea of how certain our kNN learner is about its classifications.

Before applying kNN to a classification task, it is common practice to rescale the data with a technique like min-max normalization. The purpose of this step is to ensure that all features contribute equally to the distance calculation; rescaling also reduces the influence of extreme values on kNN’s distance function.
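A minimal min-max normalization in R (applied here to invented red-level values):

```r
# Rescale a numeric vector to the [0, 1] range.
normalize <- function(x) (x - min(x)) / (max(x) - min(x))

red_levels <- c(10, 120, 255, 60)   # hypothetical red values
normalize(red_levels)
# Every value now lies in [0, 1], so no single feature dominates the distance.
```

Applied column-wise to all 48 colour features, this keeps any one channel from overwhelming kNN’s distance computation.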

Conclusion

The Gartner Hype Cycle 2018 for autonomous vehicles shows that AV Level 4 is still under development and may take more than 10 years. Currently, the Audi A8, introduced in 2018, represents AV Level 3. Through this article, I have suggested one way of applying a stochastic model to the autonomous vehicle; it can be applied in various ways depending on the problem. The idea presented here is to predict which traffic sign will appear next on the road, and in future work the AV can be developed to perform decision making in this kind of situation.

References
1. Gartner Inc.
2. Brett Lantz, Data Scientist at the University of Michigan (Datacamp.com)
3. Introduction to Stochastic Processes with R by Robert P. Dobrow
4. https://en.wikipedia.org/wiki/List_of_stochastic_processes_topics
5. https://www.datasciencesociety.net/stochastic-processes-and-applications