Actuarial Science: Markov Jump Process Models

MARKOV JUMP PROCESS

Have you ever wondered how a hospital keeps just the right number of beds available in a ward?

What if too many workers take sick leave and the company falls short of employees? How does a company make sure it has the right number of workers at any point in time?

Well, they use a Markov jump model to calculate the probability of a sick patient occupying a bed for a certain number of days, or the probability of a worker becoming sick, and thus keep everything in order.

Have you heard of the Gambler's Ruin problem? A Markov process is at play there.

Do you want to find out how long you will have to wait in a queue before one of the cash counters at Big Bazaar bills you up? Such queues can be modeled using a Markov jump process.

Do you have a similar problem to solve?

Let me show you how a Markov process can be of help.
A Markov Jump Process is a stochastic process, possessing the Markov property, that moves in continuous time over a discrete state space (finite or countable).

But what is a stochastic process?

A stochastic process is an ordered set of random variables, X_t, one for each time t in a set J.

The process is denoted {X_t : t ∈ J}.

A stochastic process can have a discrete or continuous state space and can move in discrete or continuous time. The state space is the set of values the process can take, and the time set is the set of times at which the process is recorded or observed. The main types of stochastic processes, with standard examples, are:

  • Discrete state space, discrete time: Markov chain, simple random walk
  • Discrete state space, continuous time: Markov jump process, Poisson process, counting process
  • Continuous state space, discrete time: time series, general random walk
  • Continuous state space, continuous time: Brownian motion, compound Poisson process

Now, what is the Markov property?
The Markov property simply means that the future evolution of the process can be determined from the current value of the process, independently of its past values. Mathematically, for a continuous state space this is defined as

P[X_t \in A \mid X_{s_1} = x_1, \ldots, X_{s_n} = x_n, X_s = x] = P[X_t \in A \mid X_s = x]

for all times s_1 < s_2 < \ldots < s_n < s < t, all states x_1, \ldots, x_n, x and all subsets A of the state space.

For a discrete state space, it reads

P[X_t = j \mid X_{s_1} = x_1, \ldots, X_{s_n} = x_n, X_s = i] = P[X_t = j \mid X_s = i]

for all states i and j.

So a Markov jump process is simply a process taking discrete values whose future evolution depends on its current value only, and which moves in continuous time, i.e. it can jump to a new state at any moment.
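To make this concrete, here is a minimal R sketch that simulates a two-state Markov jump process. The jump rates (0.5 and 0.3) are assumed values chosen only for illustration: the process waits in its current state for an exponentially distributed holding time, then jumps to the other state.

rate12 <- 0.5; rate21 <- 0.3     # assumed jump rates: state 1 -> 2 and state 2 -> 1
set.seed(42)
time <- 0; state <- 1
path <- data.frame(time = 0, state = 1)
while (time < 20) {              # simulate over the interval [0, 20]
  rate  <- if (state == 1) rate12 else rate21
  time  <- time + rexp(1, rate)  # exponential holding time in the current state
  state <- 3 - state             # jump to the other state (1 <-> 2)
  path  <- rbind(path, data.frame(time = time, state = state))
}
head(path)                       # jump times and the states entered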

Let's relate to the Markov jump process in a different way. You might have been told by someone at some point in your life, "Forget the past. Focus on the present and the future." They were asking you to have the Markov property.

Take the roll of a die: the number that will appear on your next roll does not depend on how many fours or ones you have rolled before. Every roll is independent of the previous rolls. That is Markov.

Similarly, the tossing of a coin is a Markov process. But let me tell you what is not Markov: Marvel's Infinity Saga. To understand the movie Avengers: Endgame, you need knowledge of all the previous movies released at different times. It depends on past values of the process; hence it does not have the Markov property.

Even when it comes to success, we must be Markov: our future success depends on our effort in the present, not on our failures in the past.

I hope the Markov jump process doesn't seem intimidating anymore.

Let’s get a little more knowledge of the process before proceeding to apply it in R programming.

The Markov process is named after the Russian mathematician Andrey Markov. Markov processes are the basis for the general stochastic simulation methods known as Markov Chain Monte Carlo, which are used for sampling from complex probability distributions and have found extensive application in Bayesian statistics.

A stochastic process, possessing the Markov property, in discrete time with a discrete state space (finite or countable) is a Markov chain. One application of Markov chains is in insurance companies, for determining discounts on premiums for motor insurance policies. Depending on the number of accidents in the previous year, a level of discount on the premium for the current year is determined for the policyholder; this no-claim-discount (NCD) model is an application of the Markov chain. A similar model can be used by a shopkeeper to maintain the stock of items in inventory.
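As a small illustration of the NCD idea, here is a minimal R sketch of a three-level NCD model as a discrete-time Markov chain. The discount levels (0%, 25%, 40%) and the transition rule (a claim-free year moves you up one level, a year with a claim sends you back to the bottom) are assumed purely for illustration:

p <- 0.85                       # assumed probability of a claim-free year
# Rows/columns: 0%, 25%, 40% discount levels; each row sums to 1
P <- matrix(c(1 - p,     p,     0,
              1 - p,     0,     p,
              1 - p,     0,     p),
            nrow = 3, byrow = TRUE)
P %*% P                         # probabilities of each discount level two years ahead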

The Markov jump process finds application in insurance companies for determining premiums for life insurance policies, using the Healthy-Sick-Dead model. It is used in pension funds to track an employee's transitions from the active state to ill-health retirement, normal retirement, withdrawal (leaving the company) or death. The PageRank algorithm, originally proposed for the Google internet search engine, is also based on a Markov process.
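For the Healthy-Sick-Dead model, the generator matrix can be written down directly. The rates below (sickness, recovery and mortality intensities) are assumed values, chosen only to make the sketch concrete:

# States: 1 = Healthy, 2 = Sick, 3 = Dead (absorbing)
sigma <- 0.10   # assumed rate Healthy -> Sick
rho   <- 0.30   # assumed rate Sick -> Healthy
mu    <- 0.02   # assumed rate Healthy -> Dead
nu    <- 0.05   # assumed rate Sick -> Dead
A <- matrix(c(-(sigma + mu), sigma,        mu,
               rho,         -(rho + nu),   nu,
               0,            0,            0),   # Dead row: no exits
            nrow = 3, byrow = TRUE)
rowSums(A)      # every row of a generator matrix sums to zero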

A diagram showing the states of the process with the possible transitions is called a transition diagram. Arrows show the transitions from one state to another, and the probability (or rate) of each transition is written above or below the arrow.

A Markov jump process can be time-homogeneous or time-inhomogeneous. A time-homogeneous process is one whose probability of moving from one state to another over a given duration does not change with time, whereas in a time-inhomogeneous process these probabilities change with time. The probabilities of changing states in a Markov process, called transition probabilities, are linked by the Chapman-Kolmogorov equations.
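In symbols, the Chapman-Kolmogorov equations state that a transition from time s to time t can be split at any intermediate time u by summing over the intermediate states:

P_{ij}(s,t) = \sum_{k} P_{ik}(s,u) \, P_{kj}(u,t), \qquad s \le u \le t,

or, in matrix form, P(s,t) = P(s,u) \, P(u,t).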

Transition probabilities are of major use to companies. Starting from the Chapman-Kolmogorov equations, there are three ways of determining them:

  • Differential equations (forward and backward) – they can be solved to get the probabilities.
  • The integrated form of the Kolmogorov backward equation.
  • The integrated form of the Kolmogorov forward equation.

The two differential equations are written out below.
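For a process with generator matrix A(t), the forward and backward differential equations read, in matrix form,

\frac{\partial}{\partial t} P(s,t) = P(s,t) \, A(t) \qquad \text{(Kolmogorov forward equation)}

\frac{\partial}{\partial s} P(s,t) = -A(s) \, P(s,t) \qquad \text{(Kolmogorov backward equation)}

with boundary condition P(s,s) = I. For a time-homogeneous process with constant generator A, these have the well-known solution P(t) = \exp(tA).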

In continuous time, we consider the transition probability over a very short interval of time h. Dividing by h expresses this as a probability of transition per unit time, and taking the limit as h tends to 0 leads to the concept of a transition rate, also called a transition intensity. A matrix of transition intensities is called the generator matrix, and a matrix of transition probabilities is called the transition probability matrix (TPM). The sum of the values in each row of a generator matrix must equal zero, while the sum of the values in each row of a TPM must equal one.
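Written out, for two distinct states i \ne j the transition rate is

\mu_{ij} = \lim_{h \to 0^+} \frac{P(X_{t+h} = j \mid X_t = i)}{h},

and the diagonal entries of the generator matrix are \mu_{ii} = -\sum_{j \ne i} \mu_{ij}, which is precisely why each row of the generator matrix sums to zero.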

Markov models are studied in the actuarial subject CS2: Risk Modelling and Survival Analysis. Curriculum 2019 requires the topic to be covered in R, and R has become vital in actuarial work. So we will see the working of the Markov model in R.

We can use RStudio, which provides a friendlier interface than the plain R console. Let's start with the basics: creating a transition intensity matrix and a transition probability matrix.

Over a small time period h, the transition probability matrix (TPM) is given by
P(h) = I + h*A + o(h), where
I is the identity matrix,
A is the generator matrix, and
o(h) stands for terms that become negligible faster than h itself (o(h)/h → 0 as h → 0).

For small h, we can therefore approximate P(h) as
P(h) ≈ I + h*A

Type the following code in your R script:

m11 <- -0.5; m21 <- 0.3   # assumed example transition rates
h <- 1/365                # a short time step (one day, as an example)
A <- matrix(c(m11, m21, -m11, -m21), nrow = 2, ncol = 2)  # generator: rows sum to 0
I <- diag(2)              # 2x2 identity matrix
P <- I + h * A            # approximate TPM over the short interval h

Here m11 and m21 are transition rates; since m11 sits on the diagonal of the generator matrix it must be non-positive, while the off-diagonal rate m21 must be non-negative.
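The approximation P(h) ≈ I + h*A is only reliable when h is small. For a longer horizon t, one option is to split t into n short steps and multiply the one-step matrices together, since for a time-homogeneous process P(t) = exp(tA). A minimal sketch, again with assumed example rates:

A <- matrix(c(-0.5, 0.5,
               0.3, -0.3), nrow = 2, byrow = TRUE)  # assumed generator matrix
t <- 2; n <- 1000; h <- t / n
P <- diag(2)
for (i in 1:n) P <- P %*% (diag(2) + h * A)   # (I + hA)^n -> exp(tA) as n grows
P             # approximate TPM over time t
rowSums(P)    # each row should be numerically close to 1

If the expm package is installed, expm::expm(t * A) computes the matrix exponential directly. With that in hand, let's solve a question in R.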

