Monte Carlo Calculation of pi

Monte Carlo: Random Numbers to Improve Calculations

When people hear “Monte Carlo”, most think of something like this:

[Image: My Mom and I touring Monte Carlo, Monaco.]

Monte Carlo, Monaco: known for extremely large amounts of money, car racing, no income taxes, and copious gambling.

In addition to the original in Monaco, Las Vegas decided to host a Monte Carlo-themed casino as well. So during the Manhattan Project, when the best minds in the United States were camped out in the New Mexican desert, they had plenty of inspiration from Las Vegas, and plenty of difficult problems to work on in the form of quantifying the inner workings of nuclei. Enrico Fermi first played with these ideas, but Stanislaw Ulam later invented the modern Monte Carlo method.

At the same time, these scientists now had computers at their disposal. John von Neumann programmed Ulam’s algorithm onto ENIAC, the Electronic Numerical Integrator and Computer, the very first general-purpose electronic computer, even though it still ran on vacuum tubes.

That still doesn’t answer the question: why do random numbers actually help us solve problems?

Imagine you are visiting a new city for the first time, maybe Monte Carlo. You only have a day or two, and you want to really get to know the city. You have two options for your visit:

  • Hit the tourist sites you researched online
  • Wander around. Try and communicate with the locals. Find an out-of-the-way restaurant and sample food not tamed for foreigners. Watch people interact. Get lost.

Both are legitimate ways to see the city, but depending on what you want, you might choose a different option. The same goes for exploring physics problems. Sometimes you want to go in and study exactly what you knew about beforehand, but sometimes you need to wander around: not hitting everything, but getting a better feel for what everything might be like.

Buffon’s Needle: Calculation of π

Back in the 18th century, Georges-Louis Leclerc, Comte de Buffon posed a problem in geometric probability. Nowadays, we use a slightly different version of that problem to calculate π and illustrate Monte Carlo simulations.
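For reference, in Buffon’s original problem a needle of length $l$ is dropped onto a floor ruled with parallel lines a distance $d \ge l$ apart; the needle crosses a line with probability $P = \frac{2l}{\pi d}$, so counting crossings also lets you estimate $\pi$.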

Suppose we have a square dartboard and someone with really bad, completely random aim, who nevertheless always hits somewhere inside the board. We then inscribe a circle inside that dartboard. After an infinite number of hits, what is the ratio between hits in the circle and hits in the square? Since the circle has area $\pi r^2$ and the square has area $(2r)^2$, the ratio approaches $\pi/4$, so multiplying the measured ratio by $4$ gives an estimate of $\pi$.

[Image: Randomly thrown darts that can either be in the circle or not.]

Onto the Code!

# using Pkg; Pkg.update(); Pkg.add("PyPlot")
using PyPlot
using Random, Statistics    # for rand! and std

We will generate our random numbers on the unit interval. Thus the radius in our case is $0.5$.

Write a function incircle(r2) such that if r2 is in the circle it returns true; otherwise it returns false. We will use this with the Julia function filter. Assume r2 is the squared distance from the center of the unit square.

function incircle(r2)
    return r2 < 0.25
end
#The number of darts we will throw at the board.  We will see how accurate different numbers are
N=[10,25,50,75,100,250,500,750,1000,2500,5000,7500,10000];
# We will perform each number multiple times in order to calculate error bars
M=15;
πapprox=zeros(Float64,length(N),M);

for ii in 1:length(N)
    #initialize an array of the proper size
    X=zeros(Float64,N[ii],2);
    for jj in 1:M

        #populate our array with random numbers on the unit interval
        rand!(X);

        #calculate their squared distance from the center
        R2=(X[:,1] .- 0.5).^2 .+ (X[:,2] .- 0.5).^2;

        # 4 * (number in circle) / (total number)
        πapprox[ii,jj]=4*length(filter(incircle,R2))/N[ii];

    end
end
# Get our averages and standard deviations
πave=sum(πapprox, dims=2)/M;
πstd=std(πapprox, dims=2);

Analysis

So that was a nice, short little piece of code. Let’s plot it now to see the means.

title("Monte Carlo Estimation of π")
ylabel("π estimate")
xlabel("N points")
plot(N, fill(π, length(N)));

for j in 1:M
    scatter(N,πapprox[:,j],marker="o",color="green");
end
ax=gca()
errorbar(N, fill(π, length(N)), yerr=vec(πstd), color="red", fmt="o")
ax.set_xscale("log");
ax.set_xlim([5, 5*10^4]);

Result

[Image: The π estimate for each run versus the number of points, with error bars around the true value (color scale inverted).]

When we have fewer points, our estimates vary much more wildly, and stray much further from 3.1415926.
But, at least, the guesses from our different runs seem evenly distributed around the correct value, so we have no systematic error.

As we get up to $10^4$ points, our estimate becomes much more accurate and consistent.

title("Dependence of Monte Carlo Error on Number of Points")
ylabel("standard deviation")
xlabel("N points")
semilogx(N, vec(πstd), marker="o");

Result

[Image: Standard deviation of the π estimate versus the number of points (colors tweaked).]

So what we guessed about the dispersion of the estimate from the first plot, we quantify here in this plot. When we only have 10 darts, the guesses vary by up to .3, but by 1,000 trials we are starting to be consistent to .0002. This is the expected Monte Carlo behavior: since the darts are independent trials, the error shrinks roughly as $1/\sqrt{N}$.
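As a quick check (a sketch not in the original post, assuming the N and πstd arrays computed above), we can compare the measured spread against the binomial prediction $4\sqrt{p(1-p)/N}$ with $p = \pi/4$:

# Each dart is a Bernoulli trial with success probability p = π/4,
# so the estimate 4k/N has standard deviation 4*sqrt(p*(1-p)/N).
p = π/4;
predicted = 4 .* sqrt.(p*(1-p) ./ N);
semilogx(N, predicted, marker="s", label="binomial prediction");
semilogx(N, vec(πstd), marker="o", label="measured");
legend();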

title("Overall Averages")
xlabel("N steps")
ylabel("Average of 15 runs")
semilogx(N, fill(π, length(N)));
semilogx(N, vec(πave), marker="o");

Result

[Image: The averages of the 15 runs versus the number of points, compared with the true value of π (colors tweaked).]

Now let’s make a graphical representation of what we’ve been doing this whole time: plot our points on the unit square, and color the ones inside the circle differently.

X=zeros(Float64,1000);
Y=zeros(Float64,1000);
rand!(X);
rand!(Y);
R2=(X .- 0.5).^2 .+ (Y .- 0.5).^2;
Xc=Float64[];
Yc=Float64[];
for ii in 1:length(X)
    if R2[ii]<.25
        push!(Xc,X[ii]);
        push!(Yc,Y[ii]);
    end
end

title("The dartboard")
xlim(0,1)
ylim(0,1)
scatter(X,Y);
scatter(Xc,Yc,color="red");

Result

[Image: The dartboard of 1,000 random points, with the points inside the circle in red (colors tweaked).]

That’s all folks!
Now here’s a picture of some pie to congratulate you on calculating π.

[Image: A pie. By Scott Bauer, USDA ARS, public domain, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=264106]

Monte Carlo Markov Chain

Prerequisites: Monte Carlo Calculation of pi

Author: Christina Lee

Intro

If you didn’t check it out already, take a look at the post that introduces using random numbers in calculations. Any simulation that uses random numbers is a Monte Carlo simulation. The most used kind of Monte Carlo simulation is the Markov Chain, also known as a random walk or drunkard’s walk; simulations built on one are called Markov Chain Monte Carlo (MCMC). A Markov Chain is a series of steps where

  • each new state is chosen probabilistically

  • the probabilities depend only on the current state; there is no memory.

Imagine a drunkard trying to walk. At any one point, they could stagger either left or right rather randomly. And just because they have been traveling in a straight line so far does not guarantee they will continue to do so; they’ve just had extremely good luck.

We use Markov Chains to approximate probability distributions, as the sketch below illustrates.
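To make the idea concrete, here is a minimal sketch (not part of the grid example below) of a one-dimensional drunkard’s walk in Julia; the function name drunkard is just an illustrative choice:

# One-dimensional drunkard's walk: each step is ±1 with equal probability,
# depending only on the current position (no memory).
function drunkard(nsteps)
    x = 0
    path = zeros(Int, nsteps)
    for t in 1:nsteps
        x += rand(Bool) ? 1 : -1
        path[t] = x
    end
    return path
end

path = drunkard(100);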

To create a good Markov Chain, we need

  • Ergodicity: All states can be reached

  • Global Balance: A condition that ensures the proper equilibrium distribution

The Balances

Let $\pi_i$ be the probability that a particle is at site $i$, and $p_{ij}$ be the probability that a particle moves from $i$ to $j$. Then Global Balance can be written as

$$\sum_j \pi_i p_{ij} = \sum_j \pi_j p_{ji} \qquad \forall i.$$

In non-equation terms, this says the amount of “chain” leaving site $i$ is the same as the amount of “chain” entering site $i$, for every site, in equilibrium. There is no net flow.

Usually, though, we want to work with a stricter rule than Global Balance, called Detailed Balance, written as

$$\pi_i p_{ij} = \pi_j p_{ji}.$$

Detailed Balance further constrains the transition probabilities we can assign and makes it easier to design an algorithm. Almost all MCMC algorithms out there use detailed balance; only lately have certain applied mathematicians begun looking at breaking detailed balance to increase efficiency in certain classes of problems.

Today’s Test Problem

I will let you know now; this might be one of the most powerful numerical methods you could ever learn. I was going to put down a list of applications, but the only limit to such a list is your imagination.

Today, though, we will not be trying to predict stock market crashes, calculate the PageRank of a webpage, or calculate the entropy of a quantum spin liquid at zero temperature. We just want to sample a uniform probability distribution and look at how Monte Carlo Markov Chains behave.

  • We will start with an $l \times l$ grid

  • Our chain starts somewhere in that grid

  • We can then move up, down, left, or right with equal probability

  • If we hit an edge, we come out the opposite side


First question! Is this ergodic?

Yes! Nothing stops us from reaching any location.

Second question! Does this obey detailed balance?

Yes! In equilibrium, each block has a probability of $\pi_i = \frac{1}{l^2}$, and can travel to any of its 4 neighbors with probability $p_{ij} = \frac{1}{4}$. For any two neighbors,

$$\pi_i p_{ij} = \frac{1}{4 l^2} = \pi_j p_{ji},$$

and if they are not neighbors,

$$\pi_i p_{ij} = 0 = \pi_j p_{ji}.$$
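We can also verify both balance conditions numerically. Below is a self-contained sketch (not from the original post) that builds the full transition matrix for a small periodic grid and checks that the uniform distribution satisfies global and detailed balance:

# Build the n×n transition matrix for an l×l grid with periodic boundaries.
l = 3; n = l^2;
P = zeros(n, n);
for i in 1:n
    col, row = divrem(i-1, l)                  # zero-based grid coordinates
    neighbors = (col*l + mod(row+1, l) + 1,    # up
                 col*l + mod(row-1, l) + 1,    # down
                 mod(col-1, l)*l + row + 1,    # left
                 mod(col+1, l)*l + row + 1)    # right
    for j in neighbors
        P[i, j] += 1/4
    end
end
πuniform = fill(1/n, n);
# Global balance: the uniform distribution is stationary, πP = π
println(maximum(abs.(πuniform' * P .- πuniform')))   # ≈ 0
# Detailed balance: π_i p_ij = π_j p_ji, i.e. P is symmetric here
println(maximum(abs.(P .- P')))                      # ≈ 0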

# using Pkg; Pkg.add("PyPlot")
using PyPlot
using Statistics    # for std

This is just the equivalent of mod for use with arrays that index from 1.

function armod(i,j)
    # like mod(i,j), but maps onto 1:j instead of 0:j-1
    return (mod(i-1+j,j)+1)
end
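A quick check of the wrap-around behavior:

armod(0, 5)   # returns 5: stepping below 1 wraps to the top
armod(6, 5)   # returns 1: stepping past 5 wraps to the bottom
armod(3, 5)   # returns 3: indices already in 1:5 are unchanged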

Input the size of the grid

l=3;
n=l^2;
function Transition(i)
    #randomly choose up, down, left, or right
    d=rand(1:4);
    if d==1 #if down
        return armod(i-l,n);

    elseif d==2 #if left
        row=div(i-1,l);
        return armod(i-1,l)+l*row;

    elseif d==3 #if right
        row=div(i-1,l);
        return armod(i+1,l)+l*row;

    else  # otherwise up
        return armod(i+l,n);
    end
end
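As a quick sanity check (not in the original post), we can tabulate many transitions out of a single site and confirm that each of its four neighbors is reached about a quarter of the time:

# Tally 10^5 moves from site 5 (the center of the 3×3 grid).
counts = Dict{Int,Int}();
for _ in 1:10^5
    j = Transition(5)
    counts[j] = get(counts, j, 0) + 1
end
println(counts)   # four neighbor sites, each with ≈ 25000 counts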

The centers of blocks will be used for pictorial purposes.

pos=zeros(Float64,2,n);
pos[1,:]=[floor((i-1)/l) for i in 1:n] .+ 0.5;
pos[2,:]=[mod(i-1,l) for i in 1:n] .+ 0.5;
# How many timesteps
tn=1000;

# Array of timesteps
ti=Array{Int64,1}()
# Array of errors
err=Array{Float64,1}()

# Stores current location, initialized randomly
current=rand(1:n);
# Stores last location, used for pictorial purposes
last=current;

#Keeps track of where chain went
Naccumulated=zeros(Int64,l,l);

# put in our first point
# can index 2d array as 1d
Naccumulated[current]+=1;

# A graph will open up where we can watch the chain
pygui(true)
w, h = plt.figaspect(.35)
figure(figsize=(w,h))

for ii in 1:tn
    global current, last   # these globals are reassigned inside the loop

    last=current;
    # Determine the new point
    current=Transition(current);
    Naccumulated[current]+=1;

    # add new time steps and error points
    push!(ti,ii)
    push!(err,round(std(Naccumulated/ii), digits=5))


    clf() #clean previous figure

    subplot(121) #subplot 1

    title("t: $ii std: $(err[end])")
    pcolor(Naccumulated/ii .- 1/n,cmap="RdBu",vmin=-.1,vmax=.1)
    colorbar()

    # An arrow from the previous point to the current point
    annotate("",
    xy=pos[:,current],
    xytext=pos[:,last],
    xycoords="data",
    arrowprops=Dict("facecolor"=>"green"))

    subplot(122) #subplot 2
    title("Error")
    xlabel("Step")
    ylabel("Log std")
    scatter(ti,log10.(err))

    #how long to wait between showing steps (in seconds)
    sleep(.5)
end
Naccumulated=Naccumulated/tn;
pygui(false)
pcolor(Naccumulated .- 1/n,cmap="RdBu",vmin=-.02,vmax=.02)
colorbar()
colorbar()

[Image: A 5×5 grid after 2,000 steps.]

title("Error for a 5x5")
xlabel("Step")
ylabel("Log std")
scatter(ti,log10.(err))

[Image: Error over time; the decreasing standard deviation from 1/25, shown in log base 10.]

So, after running the above code and trying to figure out how it works, go back and study some properties of the system.

  • How long does it take to forget its initial position?

  • How does the behaviour change with system size?

  • How long would you have to run to reach a certain accuracy, especially if you didn’t know what distribution you were looking for?

So hopefully you enjoyed this tiny introduction to an incredibly rich subject. Feel free to explore all the nooks and crannies to really understand the basics of this kind of simulation, so you can gain more control over the more complex simulations.

Monte Carlo simulations are as much of an art as a science. You need to live them, love them, and breathe them till you find out exactly why they are behaving like little kittens that can finally jump on top of your countertops, or open your bedroom door at 1am.

For all their misbehaving, you love the kittens anyway.