Gambler's Ruin

This notebook explains how to use Scientific Python to solve the Gambler's Ruin problem as an example of an absorbing Markov chain.

We start by importing pylab, which pulls the NumPy and Matplotlib functions into a single namespace. Then we define the transition matrix $P$ as in the lecture notes: in each round the gambler moves one step down (toward ruin) with probability $0.6$ and one step up (toward the goal) with probability $0.4$.

In [1]:
from pylab import *
P = array([[ 1, 0, 0, 0, 0, 0],   # state 0: ruined, bankroll $0 (absorbing)
           [ 0, 1, 0, 0, 0, 0],   # state 1: reached the goal of $5000 (absorbing)
           [.6, 0, 0,.4, 0, 0],   # state 2: bankroll $1000
           [ 0, 0,.6, 0,.4, 0],   # state 3: bankroll $2000
           [ 0, 0, 0,.6, 0,.4],   # state 4: bankroll $3000
           [ 0,.4, 0, 0,.6, 0]])  # state 5: bankroll $4000

The brute-force way to find the absorbing probabilities is to simply raise the matrix $P$ to a high power, e.g., $P^{100}$: the entries of $P^n$ are the $n$-step transition probabilities, so for large $n$ they approach the probabilities of eventually being absorbed. We find (rounded to 4 digits after the decimal point):

In [2]:
matrix_power(P,100).round(4)
Out[2]:
array([[1.    , 0.    , 0.    , 0.    , 0.    , 0.    ],
       [0.    , 1.    , 0.    , 0.    , 0.    , 0.    ],
       [0.9242, 0.0758, 0.    , 0.    , 0.    , 0.    ],
       [0.8104, 0.1896, 0.    , 0.    , 0.    , 0.    ],
       [0.6398, 0.3602, 0.    , 0.    , 0.    , 0.    ],
       [0.3839, 0.6161, 0.    , 0.    , 0.    , 0.    ]])

We see that in the long run the chain ends up in one of the two absorbing states (first two columns) with probability $1$, while the probability of still being in a non-absorbing state tends to $0$.
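
A quick numerical check (an extra cell, not part of the original argument) confirms this: after $100$ steps essentially no probability mass is left in the non-absorbing columns, while every row of $P^{100}$ still sums to $1$.

In [ ]:
P100 = matrix_power(P, 100)
print(P100[:, 2:].max())   # largest probability of still being in a non-absorbing state (~0)
print(P100.sum(axis=1))    # every row of a stochastic matrix sums to 1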

Let us now solve this problem using a direct linear algebra approach. Writing $$ P = \begin{pmatrix} I & 0 \\ R & Q \end{pmatrix} , $$ where the dimension of $I$ corresponds to the number of absorbing states, the dimension of $Q$ to the number of non-absorbing states, and the dimensions of $R$ and $0$ are chosen to fit, we know (see lecture notes) that the matrix of absorbing probabilities is given by $$ A = N R , $$ where $$ N = (I-Q)^{-1} $$ is the so-called fundamental matrix. (The entry of $N$ with row index $i$ and column index $j$ is the expected number of visits to the $j$-th non-absorbing state when the chain starts in the $i$-th non-absorbing state.)

From our given matrix $P$, we pick the respective submatrices as follows:

In [3]:
R = P[2:, :2]; R   # transitions from the non-absorbing states into the absorbing states
Out[3]:
array([[0.6, 0. ],
       [0. , 0. ],
       [0. , 0. ],
       [0. , 0.4]])
In [4]:
Q = P[2:, 2:]; Q   # transitions among the non-absorbing states
Out[4]:
array([[0. , 0.4, 0. , 0. ],
       [0.6, 0. , 0.4, 0. ],
       [0. , 0.6, 0. , 0.4],
       [0. , 0. , 0.6, 0. ]])
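
As a quick consistency check (an extra cell that is not needed for the solution), the four blocks should reassemble to the original matrix; numpy's block function puts them back together:

In [ ]:
from numpy import block   # assembles a matrix from a nested list of blocks
allclose(block([[eye(2), zeros((2, 4))],
                [R, Q]]), P)   # True: the blocks recover P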

Now compute $N=(I-Q)^{-1}$:

In [5]:
N = inv(eye(4) - Q); N   # fundamental matrix
Out[5]:
array([[1.54028436, 0.90047393, 0.47393365, 0.18957346],
       [1.3507109 , 2.25118483, 1.18483412, 0.47393365],
       [1.06635071, 1.77725118, 2.25118483, 0.90047393],
       [0.63981043, 1.06635071, 1.3507109 , 1.54028436]])
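
The formula $N = (I-Q)^{-1}$ is the matrix version of the geometric series $N = I + Q + Q^2 + \dots$, which is where the visit-count interpretation comes from: the entries of $Q^k$ are the probabilities of being in a given non-absorbing state after $k$ steps. As a quick numerical illustration (the cut-off of $200$ terms is an arbitrary choice), a truncated partial sum already reproduces $N$:

In [ ]:
N_series = zeros((4, 4))
for k in range(200):                  # partial sum I + Q + Q^2 + ... + Q^199
    N_series += matrix_power(Q, k)
allclose(N_series, N)                 # True: the truncated series matches the inverse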

Thus, the expected total number of steps that the chain spends in non-absorbing states before being absorbed (that is, the expected number of rounds played) is the sum of the entries in each row of $N$, which we can compute by right-multiplication with the vector whose entries are all $1$:

In [6]:
N @ ones(4)
Out[6]:
array([3.1042654 , 5.26066351, 5.99526066, 4.5971564 ])

For example, if the gambler starts with $\$3000$ (the third entry, $5.995\ldots$), he will play on average about $6$ rounds of blackjack before the game ends.
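
We can back this number up with a small simulation (a sketch, not part of the analytic solution; the sample size of 10,000 games and the seed are arbitrary choices). Starting from state 4, i.e. a bankroll of $\$3000$, the average number of rounds should come out close to $5.995$:

In [ ]:
from numpy.random import default_rng

def rounds_until_absorbed(start, rng):
    """Simulate one game and return the number of rounds played."""
    state, rounds = start, 0
    while state > 1:                        # states 0 and 1 are absorbing
        state = rng.choice(6, p=P[state])   # draw the next state from row `state` of P
        rounds += 1
    return rounds

rng = default_rng(0)
mean([rounds_until_absorbed(4, rng) for _ in range(10_000)])   # empirical mean, should be close to 5.995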

Now, let's compute the matrix of absorbing probabilities $A=NR$:

In [7]:
A = N @ R;A
Out[7]:
array([[0.92417062, 0.07582938],
       [0.81042654, 0.18957346],
       [0.63981043, 0.36018957],
       [0.38388626, 0.61611374]])
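
This agrees with the brute-force computation from the beginning: the lower-left block of $P^{100}$ is, up to rounding, exactly $A$:

In [ ]:
allclose(A, matrix_power(P, 100)[2:, :2])   # True: both approaches give the same answer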

$A$ really is a probability matrix: its row sums are all equal to $1$:

In [8]:
A @ ones(2)
Out[8]:
array([1., 1., 1., 1.])

We finally compute the expected payoff from the Gambler's Ruin chain. First, we need the matrix $B$ of payoffs corresponding to the absorbing probabilities $A$: a gambler who starts with $\$1000$ either loses that $\$1000$ or ends up with $\$5000$, a net gain of $\$4000$, and similarly for the other starting bankrolls.

In [9]:
B = array([[-1000, 4000],   # start with $1000: lose it, or end with $5000 (net gain $4000)
           [-2000, 3000],   # start with $2000
           [-3000, 2000],   # start with $3000
           [-4000, 1000]])  # start with $4000

Then the expected payoff corresponding to initial state $i$ is given by $$ \mathbb{E}[\text{payoff}_i] = \sum_{j=1}^{2} a_{ij} \, b_{ij}. $$ In Python, this can be expressed as follows (note that * denotes element-wise multiplication and @ denotes matrix multiplication):

In [10]:
(A * B) @ ones(2)
Out[10]:
array([ -620.85308057, -1052.13270142, -1199.0521327 ,  -919.43127962])
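
The same kind of simulation as above (again only a sketch, with arbitrary seed and sample size) can be used to cross-check this result. Starting with $\$1000$, i.e. in state 2, the empirical average payoff should land near the first entry above, roughly $-621$:

In [ ]:
from numpy.random import default_rng

def simulated_payoff(start, rng):
    """Play one game to the end and return the net payoff in dollars."""
    state = start
    while state > 1:                        # 0 = ruined, 1 = reached $5000
        state = rng.choice(6, p=P[state])
    stake = (start - 1) * 1000              # states 2..5 correspond to bankrolls $1000..$4000
    return -stake if state == 0 else 5000 - stake

rng = default_rng(1)
mean([simulated_payoff(2, rng) for _ in range(10_000)])   # empirical mean, roughly -620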

All four expected payoffs are negative. Quite clearly, the house always wins...