Friday, January 24, 2014

More Compaction for the Cat and Mouse Problem


  If one looks at the numbers for P in the 2nd compacted state diagram in the last post, one sees that there is even more symmetry present since P12=P21 and P32=P31. This allows us to combine states 1 and 2 to get the P'11 and P'21 of a simpler transition matrix. The probabilities of the two original states add to give the probability of state 1 in the new diagram.
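
Here is a rough numpy sketch of this last compaction; the 3x3 matrix is reconstructed from the description of the second compacted diagram (state 1 for the cat in an end room, state 2 for the cat in the middle room, state 3 for capture) rather than copied from the worksheet, so treat the numbers as an assumption.

    import numpy as np

    # Reconstructed second compacted matrix (an assumption): state 1 = cat in
    # room 1 or 3, state 2 = cat in room 2, state 3 = capture.
    # Columns are the current state, rows the next state.
    P = np.array([[0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.0],
                  [0.5, 0.5, 1.0]])

    print(P[0, 1] == P[1, 0], P[2, 1] == P[2, 0])   # P12 = P21 and P32 = P31

    # Each survival state sends probability 1/2 to the other survival state and
    # 1/2 to the exit, so states 1 and 2 can be lumped into a single state 1.
    stay = P[:2, :2].sum(axis=0)    # survival to survival from each state: [1/2, 1/2]
    gone = P[2, :2]                 # survival to capture from each state:  [1/2, 1/2]
    P2 = np.array([[stay[0], 0.0],
                   [gone[0], 1.0]])
    print(P2)                       # P'11 = 1/2 and P'21 = 1/2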


The corresponding transition matrix is 2x2 and the calculation again produces the same results as before.


Summing just over the survival states gives an infinite geometric series with r=1/2 and a sum equal to 2, as already noted.
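
Continuing with the lumped 2x2 matrix from the sketch above, a few lines check the series numerically; the truncated matrix sum and the geometric formula 1/(1-r) agree.

    import numpy as np

    P2 = np.array([[0.5, 0.0],      # lumped matrix: state 1 = survival, state 2 = capture
                   [0.5, 1.0]])
    p0 = np.array([1.0, 0.0])       # start in the survival state
    s = np.array([1.0, 0.0])        # survival indicator

    # Survival probability after k steps is s . P^k . p0 = (1/2)^k.
    print(sum(s @ np.linalg.matrix_power(P2, k) @ p0 for k in range(101)))  # ~2
    print(1 / (1 - 0.5))                                                    # 2 = 1/(1-r)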

Compacting the Cat & Mouse State Diagram and Transition Matrix


  The symmetry in the Cat & Mouse state diagram allows us to combine states with similar transitions. If we fold the original state diagram along a horizontal line through the center, we can combine some of the (cat, mouse) positions into states which represent sets of the positions. So (1,2) and (1,3) become state 1, (2,1) and (2,3) become state 2, and (3,1) and (3,2) become state 3. The "exit" state is relabeled state 4. The rules for determining survival after a change in positions remain unchanged.


Using the new state diagram and assuming random moves, we can fill in the components of the 4x4 transition matrix, P. The initial state and survival vector each have four entries. The calculations give similar results to those previously found and the expected survival time is 2 states following the initial state.
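
As a stand-in for the worksheet, here is a rough numpy sketch with the 4x4 matrix reconstructed from the folding just described (states 1-3 for the cat's room, state 4 for the exit); the entries are my reading of the diagram, not a copy of it.

    import numpy as np

    # Reconstructed 4x4 transition matrix (an assumption): states 1-3 give the
    # cat's room, state 4 is the exit.  Columns = current state, rows = next state.
    P = np.array([[0.0, 0.25, 0.0, 0.0],
                  [0.5, 0.0,  0.5, 0.0],
                  [0.0, 0.25, 0.0, 0.0],
                  [0.5, 0.5,  0.5, 1.0]])

    p0 = np.array([1/3, 1/3, 1/3, 0.0])   # two of the six positions in each survival state
    s = np.array([1.0, 1.0, 1.0, 0.0])    # survival vector

    # Truncated sum of s . P^k . p0 approximates the expected survival time.
    print(sum(s @ np.linalg.matrix_power(P, k) @ p0 for k in range(101)))   # ~2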


Looking again at the transitions between states in the diagram above, we see that we can compact the diagram a little more due to the symmetry between states 1 and 3. Folding the diagram along a vertical line through the center gives a diagram with just 3 states.


The new state diagram can be used to fill in the transition matrix and we get similar results for the calculations.
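
The same sketch for the twice-folded diagram, again with the 3x3 matrix reconstructed from the description (state 1 for the cat in room 1 or 3, state 2 for the cat in room 2, state 3 for the exit).

    import numpy as np

    # Reconstructed 3x3 matrix (an assumption): state 1 = cat in room 1 or 3,
    # state 2 = cat in room 2, state 3 = exit.
    P = np.array([[0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.0],
                  [0.5, 0.5, 1.0]])

    p0 = np.array([2/3, 1/3, 0.0])        # four of the six positions fall in state 1
    s = np.array([1.0, 1.0, 0.0])

    print(sum(s @ np.linalg.matrix_power(P, k) @ p0 for k in range(101)))   # ~2 again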


By compacting the state diagram and mixing the states we lose information about the relative positions of the cat and mouse but the calculations come out the same. As in the original Cat and Mouse problem we could simplify by summing over just the survival states and replace the infinite sum Σ_k P^k with (I-P)^-1. If the initial positions are equally likely the initial probability for a state is proportional to the number of positions included in the state. So for the first diagram above the probability of starting in state 1 is 2/6 or 1/3 but in the second diagram it is 4/6 or 2/3. In either diagram the probability of starting in state 2 would be 1/3.
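
To make the (I-P)^-1 shortcut concrete, here is a short sketch using the survival blocks of the two reconstructed matrices above together with the initial probabilities just described; both diagrams give the same expected survival time.

    import numpy as np

    # Survival blocks of the reconstructed matrices (exit row and column dropped).
    Q4 = np.array([[0.0, 0.25, 0.0],
                   [0.5, 0.0,  0.5],
                   [0.0, 0.25, 0.0]])     # first (4-state) diagram
    Q3 = np.array([[0.0, 0.5],
                   [0.5, 0.0]])           # second (3-state) diagram

    p4 = np.array([1/3, 1/3, 1/3])        # 2/6 of the positions in each state
    p3 = np.array([2/3, 1/3])             # 4/6 in state 1 and 2/6 in state 2

    # Over the survival states the series sum_k Q^k equals (I - Q)^-1.
    for Q, p in ((Q4, p4), (Q3, p3)):
        n = len(p)
        print(np.ones(n) @ np.linalg.inv(np.eye(n) - Q) @ p)   # 2.0 for both diagrams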

Tuesday, January 14, 2014

Cat & Mouse


  I found a cat and mouse problem in a Wikipedia article, where it was used to illustrate the concept of a stochastic matrix, and thought I would do a similar problem of my own and show how it can be solved using the transition matrix and initial state.

  In a 3-room apartment a cat is stalking a mouse. Neither is aware of the other's location, and both move to neighboring rooms at random. The mouse has an advantage in that it can go through a mouse hole connecting rooms 1 and 3 while the cat cannot. The cat captures the mouse if they end up in the same room or go through the same door. What is the expected survival time of the mouse if they start out in different rooms? Here's a floor plan.
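
The sketches below assume the doors connect rooms 1-2 and 2-3 and that the mouse hole joins rooms 1 and 3; in Python the allowed moves are just two small tables.

    # Assumed layout: doors between rooms 1-2 and 2-3, mouse hole between 1 and 3.
    cat_moves = {1: [2], 2: [1, 3], 3: [2]}          # the cat can only use the doors
    mouse_moves = {1: [2, 3], 2: [1, 3], 3: [1, 2]}  # the mouse can also use the hole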


  As in the article one can specify the survival states by the pair of positions (cat, mouse) with the values being 1, 2 and 3. There are 9 possible combinations, 6 of which are survival states. The state diagram shows the allowed movements and the events that end the game.


The cat and mouse move randomly so each path leaving a state has equal probability. In the transition matrix P the columns correspond to the current state and the rows to the next state. From the diagram and the first column we see that state 1 can change to states 2 and 7, each with probability 1/2. The second column shows that the second state has four paths leaving it, and each is assigned a probability of 1/4. One can use the state diagram to fill in the rest of the transition matrix. There is no way of leaving the final state so it just goes back to itself.
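
Here is a rough Python sketch of how P could be filled in programmatically from the movement rules. The ordering of the six survival states is a guess chosen to be consistent with the description above (state 1 leads to states 2 and 7, state 2 has four exit paths), with state 7 as the capture state.

    import numpy as np

    # Assumed layout, as above: doors 1-2 and 2-3, mouse hole 1-3.
    cat_moves = {1: [2], 2: [1, 3], 3: [2]}
    mouse_moves = {1: [2, 3], 2: [1, 3], 3: [1, 2]}

    # Guessed ordering of the six survival states (cat, mouse); state 7 = capture.
    states = [(1, 2), (2, 3), (3, 1), (1, 3), (2, 1), (3, 2)]

    P = np.zeros((7, 7))
    P[6, 6] = 1.0                               # the final state returns to itself
    for j, (cat, mouse) in enumerate(states):   # columns = current state
        for c in cat_moves[cat]:
            for m in mouse_moves[mouse]:
                prob = 1.0 / (len(cat_moves[cat]) * len(mouse_moves[mouse]))
                caught = (c == m) or (c == mouse and m == cat)  # same room or same door
                row = 6 if caught else states.index((c, m))
                P[row, j] += prob

    print(P)   # column 1: 1/2 each to states 2 and 7; column 2: 1/4, 1/4 and 1/2 to state 7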


We can use 1 to indicate survival and 0 to indicate that the mouse has perished. The first calculation shows that if the mouse is in any survival state it can expect to survive two states on average. The zero at the end is for the terminal state. The sum is truncated at k=100, which approximates survival over an infinite number of steps. Powers of P are used to find the relative probabilities of a state after k steps. We see that the probability of survival after k steps is 1/2^k. This is an infinite geometric series and its sum with r=1/2 is 1/(1-r)=2. This checks with the formula used for the expected value of survival. The infinite sum has to be approximated since we can't use the inverse of I-P to evaluate the sum of the matrix series. I-P doesn't have an inverse because each column of P sums to one, so the rows of I-P add to zero and one row is a linear combination of the others.
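
And a sketch of the calculations just described, hard-coding the matrix produced by the previous sketch (same guessed state ordering) so it stands on its own.

    import numpy as np

    # 7x7 matrix from the sketch above (guessed ordering, state 7 = capture).
    P = np.array([[0.0, 0.25, 0.0, 0.0, 0.0,  0.0, 0.0],
                  [0.5, 0.0,  0.5, 0.0, 0.0,  0.0, 0.0],
                  [0.0, 0.25, 0.0, 0.0, 0.0,  0.0, 0.0],
                  [0.0, 0.0,  0.0, 0.0, 0.25, 0.0, 0.0],
                  [0.0, 0.0,  0.0, 0.5, 0.0,  0.5, 0.0],
                  [0.0, 0.0,  0.0, 0.0, 0.25, 0.0, 0.0],
                  [0.5, 0.5,  0.5, 0.5, 0.5,  0.5, 1.0]])

    p0 = np.full(7, 1/6); p0[6] = 0.0     # equally likely over the six survival states
    s = np.ones(7); s[6] = 0.0            # 1 = survival, 0 = perished

    # Truncated sum of s . P^k . p0 for k = 0..100 approximates the infinite series.
    print(sum(s @ np.linalg.matrix_power(P, k) @ p0 for k in range(101)))         # ~2

    # Survival probability after k steps is 1/2^k.
    print([float(s @ np.linalg.matrix_power(P, k) @ p0) for k in range(4)])       # 1, 0.5, 0.25, 0.125

    # I - P is singular for the full matrix, but restricted to the survival
    # states the series can be replaced by an inverse.
    Q = P[:6, :6]
    print(np.ones(6) @ np.linalg.inv(np.eye(6) - Q) @ p0[:6])                     # 2.0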

Sunday, January 12, 2014

Transition Matrix Parameters for n States


  The solution for a transition matrix given in the last post can be extended to an arbitrary number of dimensions. The constraints make some of the components redundant, allowing us to solve a reduced set of equations and variables.


We find that the number of parameters is one less than the number of elements in P. This appears to be due to an extra constraint on the selection of the partial derivatives which results from p and p' lying on a plane normal to the vector Σ.



Friday, January 10, 2014

Finding a Transition Matrix to Fit Given Data


  The transition matrix is used to find changes in the states of a stochastic system. The reverse problem, using the changes in the states to determine the transition matrix, is a little more difficult. The sum constraints on the probabilities simplify the problem to one equation with two unknowns. To solve for the unknown components of P we need two data points, represented by α and α', and their images, β and β'.
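
As a rough two-state illustration with made-up numbers (not the example worked below): each column of P sums to 1, so one observed pair gives a single equation in the two unknown entries and the second pair pins them down. The sketch solves the small linear system directly rather than using the parametrization described below.

    import numpy as np

    # Made-up data for illustration: two probability vectors and their images
    # under some unknown two-state transition matrix P.
    alpha  = np.array([0.6, 0.4]);  beta  = np.array([0.56, 0.44])
    alphap = np.array([0.3, 0.7]);  betap = np.array([0.38, 0.62])

    # Columns of P sum to 1, so P = [[a, b], [1-a, 1-b]] and each pair gives
    # one equation:  a*alpha[0] + b*alpha[1] = beta[0].
    A = np.array([alpha, alphap])
    a, b = np.linalg.solve(A, [beta[0], betap[0]])
    P = np.array([[a, b], [1.0 - a, 1.0 - b]])

    print(P)                 # [[0.8, 0.2], [0.2, 0.8]] for this data
    print(P @ alpha, beta)   # the fitted P reproduces the images
    print(P @ alphap, betap)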


We find that the transition matrix is dependent on three parameters corresponding to the initial point, its image and the relative change in the probabilities of the states. A simple example helps to illustrate the process and clarify the definitions.


We see that the solution for the transition matrix gives the correct image, p<1>, of p<0>, and that the calculated relative change ρ' = ρ.

Monday, January 6, 2014

Bayes' Theorem


  Given an initial probability distribution for a stochastic process one can use the transition matrix to compute a change in the probability distribution. If one tries to go backwards one finds that the inverse matrix has numbers outside the interval [0,1] and so cannot be interpreted as a probability matrix. Bayes' Theorem provides a solution to this problem. Suppose we start with a simple transition diagram.


Focusing on the states marked in red we see that both initial states contribute to the final state s'0. We can view the transition matrix as mixing events from the two sources. Suppose N0 events come from s0 and N1 come from s1, making a total for s'0 of N'0=N0+N1. Bayes' theorem tells us that the probability that s0 was responsible for some event in s'0 is P'0,0=N0/N'0=P0,0 p0/p'0 and likewise the probability that s1 was responsible is P'1,0=N1/N'0=P0,1 p1/p'0. Doing this for all the paths in the diagram gives a set of inverse probabilities, P'j,k. Factoring out the components of p and p' into separate matrices gives a matrix expression for P' which can be generalized to n states.


D(p) is a matrix whose non-zero components along the diagonal are the components of p. We can show how this works with a sample calculation. A check shows that P' has the properties of a probability matrix. The sum of each column is 1 and we see that p=P'p'.
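
Reading the element formula above together with the definition of D(p), the matrix expression is presumably P' = D(p) P^T D(p')^-1. Here is a small numpy sketch with a made-up two-state example (not the sample calculation above) that performs the same checks.

    import numpy as np

    # A made-up two-state example (not the sample calculation above).
    P = np.array([[0.9, 0.3],
                  [0.1, 0.7]])        # columns sum to 1
    p = np.array([0.4, 0.6])          # initial distribution
    pp = P @ p                        # p' = P p

    # Matrix form of P'jk = Pkj * pj / p'k, i.e. P' = D(p) P^T D(p')^-1.
    Pp = np.diag(p) @ P.T @ np.linalg.inv(np.diag(pp))

    print(Pp.sum(axis=0))             # each column of P' sums to 1
    print(Pp @ pp, p)                 # and p = P' p'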


Given two sets of data for p and p' we see from the argument above that P = p'p^-1, and a check of the column vectors in p and p' shows p'=Pp. One needs to be careful about the inverse probabilities P' since they are dependent on both p and p', so the P' for the first columns of p and p' is not the same as that for the second columns. The inverse probabilities are not determined by P alone.
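
A short sketch of that caveat with made-up columns: the same P recovered from the data gives a different P' for each column of p and p'.

    import numpy as np

    P = np.array([[0.9, 0.3],         # a made-up transition matrix
                  [0.1, 0.7]])
    p = np.array([[0.4, 0.8],         # two initial distributions as columns
                  [0.6, 0.2]])
    pp = P @ p                        # their images

    print(pp @ np.linalg.inv(p))      # P = p' p^-1 recovers the matrix

    def reverse(P, pcol, ppcol):      # P' = D(p) P^T D(p')^-1 for one column pair
        return np.diag(pcol) @ P.T @ np.linalg.inv(np.diag(ppcol))

    print(reverse(P, p[:, 0], pp[:, 0]))
    print(reverse(P, p[:, 1], pp[:, 1]))   # different from the first column's P'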

Wednesday, January 1, 2014

Lady Luck


May Lady Luck smile upon you in 2014.