Tuesday, December 31, 2013

St Petersburg Game: 1M Trials


  The last post gave the rules for a down-sized version of the St Petersburg Lottery with a maximum of n=4 tosses of a coin. Instead of stopping on "tails," one can flip the coin 4 times each game and ignore the tosses after the first tail. With 4 tosses there are 16 possible outcomes, which we can number 0 through 15, and we can use the digits of the corresponding binary numbers to represent the flips, with "0" being a "tail" and "1" a "head". Reading the tosses from right to left, the winnings can be determined as shown in the following image.
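A short sketch (in Python, not the original worksheet) that reproduces the table; the payout rule assumed here, following the post below, is 2^(k-1) coins when the first tail occurs on toss k, with a bonus of 2^4 = 16 coins for four heads.

    # Enumerate the 16 outcomes; tosses are read right to left (least
    # significant bit first), with 0 = tail and 1 = head.
    for outcome in range(16):
        run = 0
        while run < 4 and (outcome >> run) & 1:
            run += 1                       # length of the initial run of heads
        win = 2**run if run < 4 else 2**4  # 2^(k-1) with k = run + 1; bonus for 4 heads
        print(f"{outcome:2d}  {outcome:04b}  pays {win}")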


The relative frequencies of an initial run of 0, 1, 2, 3 and 4 heads are seen to be 8, 4, 2, 1 and 1 respectively, so the probabilities, 1/2, 1/4, 1/8, 1/16 and 1/16, come out right for the game. N=2^20 (1M) trials were run, and the results shown below are typical.
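A minimal version of such a run, assuming the payouts above and the wager w = 3 (a sketch, not the original calculation):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 2**20                              # 1M trials
    spins = rng.integers(0, 16, size=N)    # one outcome stands in for 4 tosses

    def payout(outcome):
        run = 0
        while run < 4 and (outcome >> run) & 1:
            run += 1
        return 2**run if run < 4 else 16

    wins = np.array([payout(s) for s in spins])
    print(wins.mean())        # mean win, approximately 3
    print((wins > 3).mean())  # fraction of net wins, approximately 1/4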


The mean value for the wins, μ, is about 3 and is approximately equal to the wager, w=3, as expected for a net gain of zero. The statistics for wins and losses show that the odds against winning anything are 3:1, as predicted. This allows money to be collected to cover the winnings.

  So instead of tossing coins we can spin a wheel with the numbers 0-15 on it once per game and pay out the winnings in the table each time, but we cannot watch them snowball as in la boule de neige ("the snowball") in Roulette. A French song, La Boule, is about the boule de neige system for trying to beat the odds in Roulette.

The St Petersburg Game's Win-Loss Ratio


  It is interesting to look at the win-loss ratio for the n-toss St Petersburg game. We start by determining the value of k for which the pot equals the wager, that is, when 2^(k-1) = w, and then define κ as this value of k rounded down to the nearest integer. The determination of the win-loss ratio is as follows.
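A sketch of the steps, assuming the fair wager w = n/2 + 1 and bonus b = 2^n from the post below, and counting the all-heads bonus as a win:

    2^{k-1} = w \;\Rightarrow\; k = \log_2 w + 1, \qquad \kappa = \lfloor k \rfloor

    P(\mathrm{win}) = \sum_{k=\kappa+1}^{n} \frac{1}{2^k} + \frac{1}{2^n} = \frac{1}{2^{\kappa}}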


The key formulas, with 2^κ approximately equal to n, are:
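    P(\mathrm{win}) = \frac{1}{2^{\kappa}}, \qquad
    \frac{P(\mathrm{win})}{P(\mathrm{lose})} = \frac{1}{2^{\kappa} - 1} \approx \frac{1}{n - 1}

These follow from the probabilities worked out above. For n = 4 they give 2^κ = 4 and a win-loss ratio of 1/3, matching the 3:1 odds against winning seen in the 1M-trial post.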


So as the number of tosses becomes indefinitely large, as in the St Petersburg paradox, the win-loss ratio approaches zero. The contradiction is that one can expect to win an infinitely large amount while at the same time having virtually no chance of winning. The win-loss ratio needs to be considered in addition to the expected value in order to determine a game's fairness. Games with higher win-loss ratios are more alluring to prospective players.

Friday, December 27, 2013

The St Petersburg Paradox


  The St. Petersburg paradox is presented in Daniel Bernoulli, Exposition of a New Theory on the Measurement of Risk (1738), translated in Econometrica, Jan 1954 (see §17 on p. 31). The problem was not first formulated by him and is similar to the wheat and chessboard problem. A coin is repeatedly tossed until a tail first appears. If this occurs on the first toss the player gets 1 coin, on the second he gets 2 coins, on the third 4 coins, etc., with the number of coins increasing exponentially. What is the expected number of coins that he will get for playing this game? Surprisingly, the number is infinite.


Instead of allowing the number of tosses to go on indefinitely, suppose that it is agreed before the start that the player stops on the nth toss. The expected value for the game is then,
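    E = \sum_{k=1}^{n} \frac{2^{k-1}}{2^k} = \frac{n}{2}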


Additionally, one can offer a bonus, b, for getting n heads in a row. If each player has to make a wager, w, to play, then the net gain for a first tail on the kth toss is 2^(k-1) - w and for n heads in a row it is b - w. The expected value then becomes,
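    E = \sum_{k=1}^{n} \frac{2^{k-1} - w}{2^k} + \frac{b - w}{2^n}
      = \frac{n}{2} + \frac{b}{2^n} - w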


For a fair game the expected value is zero and the expression for the wager is simplified if the bonus is 2^n so,
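    w = \frac{n}{2} + \frac{2^n}{2^n} = \frac{n}{2} + 1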


If one sets the limit at 8 tosses of the coin, the wager for a fair game is w = 8/2 + 1 = 5 coins, so one does not win anything until the first tail comes on the fourth toss, which pays 2^3 = 8 coins. The maximum that one can win is the bonus less the wager, 2^8 - 5 = 251 coins, for 8 heads in a row.


The paradox is that the expected winnings are infinite for an unlimited number of tosses, while experience shows that the usual outcome of playing is quite low. When the limit on the number of tosses is not too large the wager is affordable, but with larger wagers one can expect to lose more often. The game is one of survival.

Suppose we consider a game of losses instead of winnings. A limited amount of experience might suggest that something is profitable, but there are cases where the risks can snowball.

Monday, December 23, 2013

Finding the Stationary Distribution for a Transition Matrix


  By a stationary distribution for a random process with transition matrix P we mean a system state vector π which remains unchanged after operating on it with P, i.e., Pπ=π. Given P, how can we determine π? A simple method is to compare the definition of the stationary state with the definition of an eigenvector and deduce a relation connecting the state vector with the eigenvector.
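In symbols, the two definitions being compared are

    P\pi = \pi \qquad \text{and} \qquad P e = \mu e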


Setting the eigenvalue μ=1 makes the two equations nearly identical. So if we can find an eigenvector e of P which has an eigenvalue equal to 1 then we have the direction of π. To find the magnitude in the given direction we use the fact that the sum of the components of π is one to determine the multiplier for e, i.e., π = e / Σ_i e_i. We can illustrate how this works by determining the stationary state for a simple random walk problem. In the state diagram below, q = 1 - p:


The determination of the stationary state then proceeds as follows:
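The state diagram isn't reproduced here, so as a stand-in assume a three-state cyclic walk that steps one way with probability p and the other with q = 1 - p; a sketch of the computation in Python/NumPy:

    import numpy as np

    p = 0.7
    q = 1.0 - p
    # Column-stochastic convention: P[i, j] = probability of moving j -> i
    P = np.array([[0, q, p],
                  [p, 0, q],
                  [q, p, 0]])

    vals, vecs = np.linalg.eig(P)
    k = np.argmin(np.abs(vals - 1.0))   # pick the eigenvector with eigenvalue 1
    e = np.real(vecs[:, k])
    pi = e / e.sum()                    # scale so the components sum to one
    print(pi)                           # [1/3, 1/3, 1/3]: every state equally probable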


There is only one stationary state for the problem above, and we find that each state is equally probable. One can see that more than one stationary state is possible for a given transition matrix by considering a simple cascade problem which starts in state 0 and can end up in either state 1 or state 2.


Looking at the eigenvectors for the transition matrix we see that there are two with eigenvalues equal to 1.


Here each column of π is a stationary state corresponding to final states 1 and 2.
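A numerical check, assuming the cascade moves from state 0 to state 1 or state 2 with equal probability and that states 1 and 2 are absorbing (the split probability doesn't change the eigenvectors):

    import numpy as np

    P = np.array([[0.0, 0.0, 0.0],
                  [0.5, 1.0, 0.0],      # states 1 and 2 are absorbing
                  [0.5, 0.0, 1.0]])

    vals, vecs = np.linalg.eig(P)
    for lam, v in zip(vals, vecs.T):
        if np.isclose(lam, 1.0):
            print(v / v.sum())          # [0, 1, 0] and [0, 0, 1]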

Thursday, December 19, 2013

Deviation of the Monthly Global Land Anomalies from the Annual Means


  Last month I posted some information on the rate of change for seasonal anomalies and thought I'd give a little more detail about the calculations involved. One starts with the NOAA Global Land Anomaly and computes the annual averages for the years for which the data is available (1880 to 2012).


One then computes the differences of the monthly anomalies from the annual averages and sorts the results by month. One can then do a linear fit for each month to get the coefficients of the lines of best fit.
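In outline the computation might look like the following sketch, with a random placeholder standing in for the NOAA monthly anomaly table (in practice the table would be loaded from the NOAA archive):

    import numpy as np

    years = np.arange(1880, 2013)                       # 133 years
    rng = np.random.default_rng(0)
    anomalies = rng.normal(0.0, 0.5, (len(years), 12))  # placeholder for the real data

    annual_means = anomalies.mean(axis=1)               # one average per year
    deviations = anomalies - annual_means[:, None]      # monthly deviation from annual mean

    # slope and intercept of the line of best fit for each month
    fits = [np.polyfit(years, deviations[:, m], 1) for m in range(12)]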


If one looks at the fit for Aug, one sees a steady rate of decrease over the 133 years that were used.


As shown in the previous post, the rates for the year roughly follow a sinusoidal curve. The maximum rate of change appears to be 0.24 °C per century.


The peak rate of increase in Feb and the rate of decrease in Aug are of the same magnitude for the sinusoidal curve, and the average of the curve is 0. The sums of the α and β fit coefficients over the months also come to zero since the yearly average is by definition equal to zero. The steady rates of change are evidence of a long-term decrease in the difference between the Aug and Feb anomalies.

Thursday, December 12, 2013

General Distribution for a Function of a Normal Variate


  It's fairly easy to show that in general the distribution for a function, f, of a normal variate, x, is not a normal distribution. We only get a normal distribution if the derivative of f is a constant.
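This rests on the standard change-of-variables relation: if y = f(x) with f monotonic and x has density φ_x, then

    \varphi_y(y) = \varphi_x(x)\left|\frac{dx}{dy}\right|
                 = \frac{\varphi_x\!\left(f^{-1}(y)\right)}{\left|f'\!\left(f^{-1}(y)\right)\right|}

which reduces to a normal density in y only when f' is constant.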


We can also find a function that has some of the characteristics of the distribution that we found for the global land anomaly.

g(x) = f'(x) = df/dx



Note that the chosen function changes the rate of change of x for a given change in y at the center of the plot. This results in a compression of the distribution near y = 0.
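The same kind of behavior can be demonstrated with a hypothetical choice such as f(x) = x^3 (not necessarily the function used above), whose derivative vanishes at the origin so that probability piles up near y = 0:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 100_000)   # standard normal variate
    y = x**3                            # transformed variate

    density, edges = np.histogram(y, bins=201, range=(-3, 3), density=True)
    print(density[100])                 # the tall central bin: a sharp, non-normal peak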

Tuesday, December 3, 2013

The Normal Sum Theorem for a Linear Combination


  The normal sum theorem also works for a linear combination z = ax + by of normal variates x and y. Multiplying a normal variate by a positive constant produces another normal variate with proportional changes to the mean and standard deviation, so a change of scale reduces the problem to the two cases z = x ± y, and the minus sign doesn't affect the joint probability density.


Consequently the probability density for z in both cases above will be the same after the integration over w, regardless of the sign of y, and the variance of z will remain the sum of the two individual variances. For the linear combination, var(z) = a^2 var(x) + b^2 var(y).
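A quick Monte Carlo check of the variance relation, with arbitrarily chosen constants and standard deviations:

    import numpy as np

    rng = np.random.default_rng(0)
    a, b = 2.0, -3.0
    x = rng.normal(0.0, 1.5, 1_000_000)
    y = rng.normal(0.0, 0.5, 1_000_000)
    z = a*x + b*y

    print(z.var())                          # approximately 11.25
    print(a**2 * 1.5**2 + b**2 * 0.5**2)    # 11.25 exactly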

Monday, December 2, 2013

Proof of the Normal Sum Theorem


  The reason that I haven't been working on posts lately is that I've been doing random walk simulations in order to determine the relative contribution to the global temperature anomaly. A good introduction to the subject of random walks is Don S. Lemons, An Introduction to Stochastic Processes in Physics. In the book Lemons discusses probability distributions and their properties and gives the normal sum theorem, which states that a variable z = x + y which is the sum of two variables with normal distributions also has a normal distribution. His justification for the theorem is that the mean and variance for z are the sums of the means and variances of the two distributions. This lacks some rigor since he doesn't prove that the resulting distribution is actually a normal one. It is not very difficult to do so.

  To simplify the proof we set the means of the two distributions to zero and let x and y vary independently over their possible values. The density for the joint distribution, P(x,y), is defined for an element of area with sides dx and dy and is the product of the two individual normal probability densities. Instead of working with x and y it is easier to use z and w = x - y as our variables and express P(x,y)dxdy in terms of them. The directions of the z and w axes are perpendicular to each other but at 45° to the x and y axes, so we find that the element of area has a factor of 1/2 associated with it.
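In outline, with σ and s the two standard deviations:

    x = \frac{z+w}{2}, \qquad y = \frac{z-w}{2}, \qquad dx\,dy = \frac{1}{2}\,dz\,dw

    P(x,y)\,dx\,dy = \frac{1}{2\pi\sigma s}
        \exp\!\left(-\frac{x^2}{2\sigma^2} - \frac{y^2}{2s^2}\right)\frac{dz\,dw}{2}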


  We next replace x and y in the exponential with their formulas in terms of z and w and simplify the result, defining the new constants λ, μ and α.


  To get the probability distribution for z we integrate the joint distribution over all values of w. We can simplify the integral by factoring out the z^2 term in the exponential, leaving an integral, I(λ,α,z), which can be further simplified by completing the square and evaluated using the standard formula for the Gaussian integral.
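The standard formula in question, after completing the square, is

    \int_{-\infty}^{\infty} e^{-\lambda w^2 - \beta w}\,dw
        = \sqrt{\frac{\pi}{\lambda}}\; e^{\beta^2/4\lambda}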


  It is fairly easy to show that φ(z) is a normal distribution with variance σ_eff^2 = σ^2 + s^2.
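Written out, the result is

    \varphi(z) = \frac{1}{\sqrt{2\pi\left(\sigma^2 + s^2\right)}}
        \exp\!\left(-\frac{z^2}{2\left(\sigma^2 + s^2\right)}\right)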