
Ordinary Differential Equation Solvers: Runge-Kutta Methods

Christina Lee

So what’s an Ordinary Differential Equation?

Differential Equation means we have some equation (or equations) that have derivatives in them.

The ordinary part differentiates them from partial differential equations, the ones with curly $\partial$ derivatives. Here, we only have one independent variable, let’s call it $t$, and one or more dependent variables, let’s call them $x_1, x_2, …$. In partial differential equations, we can have more than one independent variable.

This ODE can either be written as a system of the form

$$\frac{dx_i}{dt} = f_i(t, x_1, \dots, x_n), \qquad i = 1, \dots, n,$$

or as a single $n$th-order ODE of the form

$$\frac{d^n x}{dt^n} = f\left(t, x, \frac{dx}{dt}, \dots, \frac{d^{n-1}x}{dt^{n-1}}\right),$$

which can be rewritten in terms of a system of first-order equations by performing variable substitutions such as

$$x_1 = x, \qquad x_2 = \frac{dx}{dt}, \qquad \dots, \qquad x_n = \frac{d^{n-1}x}{dt^{n-1}}.$$

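To make the substitution concrete, here is a sketch using the same Function-array pattern as the code below (the names osc1, osc2, and fosc are mine, not from the original post): the harmonic oscillator $\ddot{x} = -x$ rewritten as the first-order system $\dot{x}_1 = x_2$, $\dot{x}_2 = -x_1$.

function osc1(t::Float64,x::Array{Float64,1})
    return x[2]    # dx1/dt = x2, the velocity
end
function osc2(t::Float64,x::Array{Float64,1})
    return -x[1]   # dx2/dt = -x1, the acceleration
end
fosc=Function[]
push!(fosc,osc1)
push!(fosc,osc2)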
Though STEM students such as myself have probably spent thousands of hours poring over ways to analytically solve both ordinary and partial differential equations, the real world is unfortunately rarely so kind as to provide us with an exactly solvable differential equation. At least for interesting problems.

We can sometimes approximate the real world as an exactly solvable situation, but for most situations we are interested in, we have to turn to numerics. I'm not saying those thousand different analytic methods are for nothing: we need an idea ahead of time of what the differential equation should be doing, to tell whether it's misbehaving or not. We can't just blindly plug and chug.

Today I will introduce four different methods based on Taylor expansion to a specific order, also known as Runge-Kutta methods. We can improve these methods with adaptive stepsize control, but that is a topic for another time, just like the other modern types of solvers such as Richardson extrapolation and predictor-corrector.

Nonetheless, to work with ANY computational differential equation solver, you need to understand the fundamentals of routines like Euler and Runge-Kutta, their error propagation, and where they can go wrong. Otherwise, you might misinterpret the results of more advanced methods.

WARNING: If you need to solve a troublesome differential equation for a research problem, use a package, like DifferentialEquations.jl. These packages have much better error handling and optimization.

Let’s first add our plotting package and colors.

Pkg.update()
Pkg.add("Gadfly")
Pkg.add("Colors")
Pkg.add("Lazy")
using Gadfly
# Solarized Colors that I like working with
# http://ethanschoonover.com/solarized
using Colors
base03=parse(Colorant,"#002b36");
base02=parse(Colorant,"#073642");
base01=parse(Colorant,"#586e75");
base00=parse(Colorant,"#657b83");
base0=parse(Colorant,"#839496");
base1=parse(Colorant,"#93a1a1");
base2=parse(Colorant,"#eee8d5");
base3=parse(Colorant,"#fdf6e3");

yellow=parse(Colorant,"#b58900");
orange=parse(Colorant,"#cb4b16");
red=parse(Colorant,"#dc322f");
magenta=parse(Colorant,"#d33682");
violet=parse(Colorant,"#6c71c4");
blue=parse(Colorant,"#268bd2");
cyan=parse(Colorant,"#2aa198");
green=parse(Colorant,"#859900");

We will be benchmarking our solvers on one of the simplest and most common ODEs,

$$\frac{dx}{dt} = x, \qquad x(0) = 1,$$

whose exact solution is $x(t) = e^{t}$.

Though this only has one dependent variable, we want to structure our code so that we can accommodate a series of dependent variables, $y_1,y_2,…,y_n$, and their associated derivative functions. Therefore, we create a function for each dependent variable, and then push it into an array declared as type Function.

function f1(t::Float64,x::Array{Float64,1})
    return x[1]    # dx/dt = x
end
f=Function[]
push!(f,f1)

Euler’s Method

Stepping

Approximating the slope each step.

First published in Euler's Institutionum calculi integralis in 1768, this method gets a lot of mileage, and if you are to understand any solver, this is the one.

We march along with step size $h$, and at each new point, calculate the slope. The slope gives us our new direction to travel for the next $h$.

We can determine the error from the Taylor expansion of the function,

$$x_{n+1} = x_n + h f(t_n, x_n) + \mathcal{O}(h^2).$$

In case you haven't seen it before, the notation $\mathcal{O}(x)$ stands for "errors of the order $x$".
Summing over the entire interval, we accumulate error according to

$$N \, \mathcal{O}(h^2) = \frac{t_f - t_0}{h} \mathcal{O}(h^2) = \mathcal{O}(h),$$

making this a first-order method. Generally, if a technique is $n$th order in the Taylor expansion for a single step, it is $(n-1)$th order over the whole interval.

function Euler(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64)
    d=length(f)
    xp=copy(x)
    for ii in 1:d
        xp[ii]+=h*f[ii](t0,x)  # step each variable along its slope at t0
    end

    return t0+h,xp
end

Implicit Method or Backward Euler

If $f(t,x)$ has a form that is invertible, we can form a specific expression for the next step. The backward Euler method evaluates the slope at the end of the step instead of the beginning,

$$x_{n+1} = x_n + h f(t_{n+1}, x_{n+1}).$$

For example, if we use our exponential, $f(t,x) = x$,

$$x_{n+1} = x_n + h\, x_{n+1} \quad \Rightarrow \quad x_{n+1} = \frac{x_n}{1-h}.$$

This expression varies for each differential equation and only exists if the function is invertible.

function Implicit(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64)
    # this inversion is hard-coded for f(t,x)=x; rederive it for a different ODE
    return t0+h,x/(1-h)
end

2nd Order Runge-Kutta

So in the Euler method, we could just take more, tinier steps to achieve more precise results. Here, we take better steps. Each step itself takes more work than a step in the first-order methods, but we win by having to perform far fewer steps.

This time, we are going to work with the Taylor expansion up to second order,

$$x_{n+1} = x_n + h \frac{dx}{dt}\bigg|_{t_n} + \frac{h^2}{2} \frac{d^2 x}{dt^2}\bigg|_{t_n} + \mathcal{O}(h^3).$$

Define

$$k_1 = h f(t_n, x_n),$$

so that we can write down the derivative of our $f$ function, and the second derivative (curvature) of our solution,

$$\frac{d^2 x}{dt^2}\bigg|_{t_n} = \frac{df}{dt}\bigg|_{t_n} = \frac{\partial f}{\partial t}\bigg|_{t_n} + f(t_n, x_n)\,\frac{\partial f}{\partial x}\bigg|_{t_n}.$$

Plugging this expression back into our Taylor expansion, we get a new expression for $x_{n+1}$,

$$x_{n+1} = x_n + h f\left(t_n + \frac{h}{2},\; x_n + \frac{k_1}{2}\right) + \mathcal{O}(h^3).$$

We can also interpret this technique as using the slope at the center of the interval, instead of the start.

function RK2(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64)
    d=length(f)

    xp=copy(x)
    xk1=copy(x)

    for ii in 1:d
        xk1[ii]+=f[ii](t0,x)*h/2      # Euler half-step to the midpoint
    end
    for ii in 1:d
        xp[ii]+=f[ii](t0+h/2,xk1)*h   # full step using the midpoint slope
    end

    return t0+h,xp
end

4th Order Runge-Kutta

Wait! Where’s 3rd order? There exists a 3rd order method, but I only just heard about it while fact-checking for this post. RK4 is your dependable, multi-purpose workhorse, so we are going to skip right to it.

I’m not going to prove here that the method is fourth order, but we will see numerically that it is.
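For reference, the standard fourth-order update, which the code below implements (with the variables $hk_i$ storing $h\,k_i$), is

$$\begin{aligned}
k_1 &= f(t_n, x_n), \\
k_2 &= f(t_n + h/2,\; x_n + h k_1/2), \\
k_3 &= f(t_n + h/2,\; x_n + h k_2/2), \\
k_4 &= f(t_n + h,\; x_n + h k_3), \\
x_{n+1} &= x_n + \frac{h}{6}\left(k_1 + 2 k_2 + 2 k_3 + k_4\right) + \mathcal{O}(h^5).
\end{aligned}$$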

Note: I premultiply by $h$ in my code (storing $h k_i$ rather than $k_i$) to reduce the number of multiplications.

function RK4(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64)
    d=length(f)

    hk1=zeros(x)
    hk2=zeros(x)
    hk3=zeros(x)
    hk4=zeros(x)

    for ii in 1:d
        hk1[ii]=h*f[ii](t0,x)
    end
    for ii in 1:d
        hk2[ii]=h*f[ii](t0+h/2,x+hk1/2)
    end
    for ii in 1:d
        hk3[ii]=h*f[ii](t0+h/2,x+hk2/2)
    end
    for ii in 1:d
        hk4[ii]=h*f[ii](t0+h,x+hk3)
    end

    return t0+h,x+(hk1+2*hk2+2*hk3+hk4)/6
end

This next function merely iterates over a certain number of steps for a given method.

function Solver(f::Array{Function,1},Method::Function,t0::Float64,
        x0::Array{Float64,1},h::Float64,N::Int64)
    d=length(f)
    ts=zeros(Float64,N+1)
    xs=zeros(Float64,d,N+1)

    ts[1]=t0
    xs[:,1]=x0

    for i in 2:(N+1)
        ts[i],xs[:,i]=Method(f,ts[i-1],xs[:,i-1],h)
    end

    return ts,xs
end
N=1000
tf=10.
t0=0.
x0=[1.]
dt=(tf-t0)/N

tEU,xEU=Solver(f,Euler,t0,x0,dt,N);
tIm,xIm=Solver(f,Implicit,t0,x0,dt,N);
tRK2,xRK2=Solver(f,RK2,t0,x0,dt,N);
tRK4,xRK4=Solver(f,RK4,t0,x0,dt,N);

xi=tEU
yi=exp(xi);

errEU=reshape(xEU[1,:],N+1)-yi
errIm=reshape(xIm[1,:],N+1)-yi
errRK2=reshape(xRK2[1,:],N+1)-yi;
errRK4=reshape(xRK4[1,:],N+1)-yi;
plot(x=tEU,y=xEU[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=green,
default_point_size=3pt))

lEU=layer(x=tEU,y=xEU[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=green,
default_point_size=3pt))

lIm=layer(x=tIm,y=xIm[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=yellow,
default_point_size=3pt))

lRK2=layer(x=tRK2,y=xRK2[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=cyan,
default_point_size=2pt))

lRK4=layer(x=tRK4,y=xRK4[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=violet,
default_point_size=4pt))

lp=layer(x->e^x,-.1,10,Geom.line,Theme(default_color=red))


plot(lp,lEU,lIm,lRK2,lRK4,
Guide.manual_color_key("Legend",["Euler","Implicit","RK2","RK4","Exact"],
[green,yellow,cyan,violet,red]),
Coord.cartesian(xmin=9.5,xmax=10.1))

lEU=layer(x=xi,y=errEU,Geom.point,
Theme(highlight_width=0pt,default_color=green,
default_point_size=1pt))

lIm=layer(x=xi,y=errIm,Geom.point,
Theme(highlight_width=0pt,default_color=yellow,
default_point_size=1pt))

lRK2=layer(x=xi,y=errRK2,Geom.point,
Theme(highlight_width=0pt,default_color=cyan,
default_point_size=1pt))

lRK4=layer(x=xi,y=errRK4,Geom.point,
Theme(highlight_width=0pt,default_color=violet,
default_point_size=1pt))

plot(lEU,lIm,lRK2,lRK4,Scale.y_asinh,
Guide.manual_color_key("Legend",["Euler","Implicit","RK2","RK4"],
[green,yellow,cyan,violet]))

Scaling of the Error

I talked above about the error scaling as either $h$, $h^2$, or $h^4$. I won't just talk; we will numerically demonstrate the relationship as well. For a variety of different step sizes, the code below calculates the final error for each method. Then we will plot the error and see how it scales.

t0=0.
tf=1.
dx=tf-t0
x0=[1.]

dt=collect(.001:.0001:.01)

correctans=exp(tf)
errfEU=zeros(dt)
errfIm=zeros(dt)
errfRK2=zeros(dt)
errfRK4=zeros(dt)



for ii in 1:length(dt)
    N=round(Int,dx/dt[ii])
    dt[ii]=dx/N

    tEU,xEU=Solver(f,Euler,t0,x0,dt[ii],N);
    tIm,xIm=Solver(f,Implicit,t0,x0,dt[ii],N);
    tRK2,xRK2=Solver(f,RK2,t0,x0,dt[ii],N);
    tRK4,xRK4=Solver(f,RK4,t0,x0,dt[ii],N);

    errfEU[ii]=xEU[1,end]-correctans
    errfIm[ii]=xIm[1,end]-correctans
    errfRK2[ii]=xRK2[1,end]-correctans
    errfRK4[ii]=xRK4[1,end]-correctans
end
lEU=layer(x=dt,y=errfEU,Geom.point,
Theme(highlight_width=0pt,default_color=green,
default_point_size=1pt))

lIm=layer(x=dt,y=errfIm,Geom.point,
Theme(highlight_width=0pt,default_color=yellow,
default_point_size=1pt))

lRK2=layer(x=dt,y=errfRK2*10^5,Geom.point,
Theme(highlight_width=0pt,default_color=cyan,
default_point_size=1pt))

lRK4=layer(x=dt,y=errfRK4*10^10,Geom.point,
Theme(highlight_width=0pt,default_color=violet,
default_point_size=1pt))

tempEU(x)=(errfEU[end]*x/.01)
tempIm(x)=(errfIm[end]*x/.01)
#le=layer([tempEU,tempIm],0,.01,Geom.line,Theme(default_color=base01))
le=layer(tempEU,0,.01,Geom.line,Theme(default_color=base01))
lei=layer(tempIm,0,.01,Geom.line,Theme(default_color=base01))


temp2(x)=(errfRK2[end]*(x/.01)^2*10^5)
l2=layer(temp2,0,.01,Geom.line,Theme(default_color=base00))

temp4(x)=(errfRK4[end]*(x/.01)^4*10^10)
l4=layer(temp4,0,.01,Geom.line,Theme(default_color=base00))

xl=Guide.xlabel("h")
ylrk2=Guide.ylabel("Error e-5")
ylrk4=Guide.ylabel("Error e-10")
yl=Guide.ylabel("Error")

pEU=plot(lEU,lIm,le,lei,xl,yl,Guide.title("Euler and Implicit, linear error"))
p2=plot(lRK2,l2,xl,ylrk2,Guide.title("RK2, error h^2"))
p4=plot(lRK4,l4,xl,ylrk4,Guide.title("RK4, error h^4"))
vstack(hstack(p2,p4),pEU)

Arbitrary Order

While I have presented four concrete examples, many more exist. For any choice of coefficients $a_{ij}, b_i, c_i$ that fulfill

$$k_i = h f\left(t_n + c_i h,\; x_n + \sum_{j=1}^{i-1} a_{ij} k_j\right), \qquad i = 1, \dots, s,$$

with

$$x_{n+1} = x_n + \sum_{i=1}^{s} b_i k_i + \mathcal{O}(h^{p+1}),$$

we have a Runge-Kutta method of order $p$, where $p \leq s$ for the explicit methods shown here. A Butcher tableau is the standard compact way of writing down a consistent set of coefficients.
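To make that concrete, here is a sketch of a single step of a generic explicit Runge-Kutta method driven by its tableau. The function and the names GenericRK, a4, b4, c4 are my own illustration, not part of the post's code; the coefficient arrays are RK4's Butcher tableau, so a step should reproduce the RK4 function above.

function GenericRK(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64,
        a::Array{Float64,2},b::Array{Float64,1},c::Array{Float64,1})
    d=length(f)
    s=length(b)
    k=zeros(Float64,d,s)
    for i in 1:s
        # build the intermediate state x_n + sum_j a_ij k_j
        xi=copy(x)
        for j in 1:(i-1)
            xi+=a[i,j]*k[:,j]
        end
        for ii in 1:d
            k[ii,i]=h*f[ii](t0+c[i]*h,xi)
        end
    end
    # combine the stages with the weights b_i
    xp=copy(x)
    for i in 1:s
        xp+=b[i]*k[:,i]
    end
    return t0+h,xp
end

# RK4's Butcher tableau
c4=[0.,0.5,0.5,1.]
b4=[1/6,1/3,1/3,1/6]
a4=[0. 0. 0. 0.;
    0.5 0. 0. 0.;
    0. 0.5 0. 0.;
    0. 0. 1. 0.]
# one step of dx/dt = x from x(0)=1, matching RK4(f,0.,[1.],0.01)
GenericRK(f,0.,[1.],0.01,a4,b4,c4)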

These routines are also present in the M4 folder, in a module named diffeq.jl. For later work, you can simply import that module.
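A minimal usage sketch, assuming the module simply wraps the functions defined above (the actual name and exports of the module may differ):

include("diffeq.jl")
using diffeq   # assuming diffeq.jl defines and exports Solver, Euler, RK2, RK4
ts,xs=Solver(f,RK4,0.,[1.],0.01,1000)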

Stay tuned for when we tune these routines on the stiff van der Pol equations!

Jacobi Transformation of a Symmetric Matrix

Based on Numerical Recipes in C++, Sec 11.1

So you want to diagonalize a matrix, do you?
Well, if you have a tiny symmetric matrix and you REALLY want to write up the algorithm by hand, but don't want to spend much time trying to understand the algorithm, then you have come to the right place.

Otherwise, use LAPACK/BLAS to call a highly optimized routine that can work extremely quickly on large matrices. Julia has those libraries built in already. Even if you do call those libraries, you can make them work better by understanding what's going on under the hood, which is why we are going through this now.

Start with a base rotation matrix of the form

$$P_{pq} =
\begin{pmatrix}
1 & & & & \\
 & c & \cdots & s & \\
 & \vdots & 1 & \vdots & \\
 & -s & \cdots & c & \\
 & & & & 1
\end{pmatrix},$$

which is the identity everywhere except $P_{pp} = P_{qq} = c$, $P_{pq} = s$, and $P_{qp} = -s$.

From our starting arbitrary symmetric matrix $A$, we will run a series of transformations,

$$A' = P_{pq}^{T} \cdot A \cdot P_{pq},$$

where each iteration brings $A$ closer to diagonal form. Thus in implementing our algorithm, we need to determine two things:

  • The values of $c$ and $s$
  • The pattern of sweeping $p$ and $q$

And in the end we will need to determine if this actually converges, and if it has any sort of efficiency.

So let's expand one transformation and see if we can solve for $c$ and $s$. For $r \neq p$ and $r \neq q$,

$$\begin{aligned}
a'_{rp} &= c\, a_{rp} - s\, a_{rq}, \\
a'_{rq} &= c\, a_{rq} + s\, a_{rp}, \\
a'_{pp} &= c^2 a_{pp} + s^2 a_{qq} - 2 s c\, a_{pq}, \\
a'_{qq} &= s^2 a_{pp} + c^2 a_{qq} + 2 s c\, a_{pq}, \\
a'_{pq} &= (c^2 - s^2)\, a_{pq} + s c\, (a_{pp} - a_{qq}).
\end{aligned}$$

Determining $s$ and $c$

Given that we specifically want $a'_{pq}$ to be zero, we rearrange the last equation,

$$\theta \equiv \cot 2\phi = \frac{c^2 - s^2}{2 s c} = \frac{a_{qq} - a_{pp}}{2 a_{pq}}.$$

At first glance, this equation might not look easier to solve for $s$ or $c$. At second glance either. We define a new parameter $t = s/c$, which now makes the equation

$$t^2 + 2 t \theta - 1 = 0,$$

quite easily solvable by our friendly quadratic formula. Though the book does recommend the form that picks out the smaller of the two roots,

$$t = \frac{\mathrm{sgn}(\theta)}{|\theta| + \sqrt{\theta^2 + 1}}.$$

Then we reverse-solve back to

$$c = \frac{1}{\sqrt{t^2 + 1}}, \qquad s = t\, c.$$

Though we could use the update expressions above, if we simplify them analytically with our new expressions for $c$ and $s$, we reduce the computational load and the round-off error. These new expressions are

$$\begin{aligned}
a'_{pq} &= 0, \\
a'_{pp} &= a_{pp} - t\, a_{pq}, \\
a'_{qq} &= a_{qq} + t\, a_{pq}, \\
a'_{rp} &= a_{rp} - s\,(a_{rq} + \tau\, a_{rp}), \\
a'_{rq} &= a_{rq} + s\,(a_{rp} - \tau\, a_{rq}),
\end{aligned}$$

with the new variable

$$\tau = \frac{s}{1 + c}.$$

Convergence

The sum of the squares of the off-diagonal elements, chosen from either the upper or lower triangle arbitrarily,

$$S = \sum_{r < s} |a_{rs}|^2,$$

shrinks with every rotation, so we can use it to monitor how close we are to convergence.

Eigenvectors

By forming a product of every rotation matrix, we also come to approximate the matrix $V$ where

$$D = V^{T} \cdot A \cdot V,$$

and $D$ is the diagonal form of $A$. $V$ is computed iteratively,

$$V' = V \cdot P_{pq}.$$

Enough with the talking! LET'S COMPUTE STUFF

# First, Lets make our nice, helpful functions

## A function to look at the convergence
function convergence(A::Array)
    num=0.0
    l=size(A)[1]
    for ii in 1:(l-1)
        for jj in (ii+1):l ## just looking at the upper triangle
            num+=A[ii,jj]^2
            #println(ii,' ',jj,' ',num,' ',A[ii,jj])
        end
    end
    return num
end

This makes a matrix easier to look at than when it's filled with 1.043848974e-12 everywhere.

function roundmatrix(A::Array,rtol::Real)
    Ap=copy(A)
    for ii in 1:length(A)
        if abs(Ap[ii])<rtol
            Ap[ii]=0
        end
    end
    return Ap;
end
## Here we create a random symmetric matrix
function makeA(n::Int)
    A=randn(n,n);
    for ii in 1:n
        A[ii,1:ii]=transpose(A[1:ii,ii])
    end
    V=eye(n) #initializing the orthogonal transformation
    return A,copy(A),V
end
## One A returned will be stored to compare initial and final

Now on to the rotations!

We don't always want to compute the eigenvectors, so that option lives in the keyword arguments.
Tell the function to compute the vectors with computeV=true
and pass in the current V=V after the semicolon.

function Rotate(A::Array,p::Int,q::Int; computeV=false, V::Array=eye(1))
    θ=(A[q,q]-A[p,p])/(2*A[p,q]);
    # pick the smaller root t = s/c; when θ = 0 the quadratic gives t = 1
    t= θ==0.0 ? 1.0 : sign(θ)/(abs(θ)+sqrt(θ^2+1));

    c=1/sqrt(t^2+1)
    s=t*c
    τ=s/(1+c)

    l=size(A)[1]
    Ap=copy(A[:,p])
    Aq=copy(A[:,q])
    for r in 1:l
        A[r,p]=Ap[r]-s*(Aq[r]+τ*Ap[r])
        A[r,q]=Aq[r]+s*(Ap[r]-τ*Aq[r])

        A[p,r]=A[r,p]
        A[q,r]=A[r,q]
    end
    A[p,q]=0
    A[q,p]=0
    A[p,p]=Ap[p]-t*Aq[p]
    A[q,q]=Aq[q]+t*Aq[p]

    if computeV==true
        Vp=copy(V[:,p])
        Vq=copy(V[:,q])
        for r in 1:l
            V[r,p]=c*Vp[r]-s*Vq[r]
            V[r,q]=s*Vp[r]+c*Vq[r]
        end
        return A,V
    else
        return A;
    end
end

This function performs one sweep


function Sweep(A;compV=false,V=eye(1))
    n=size(A)[1]
    for ii in 2:n
        for jj in 1:(ii-1) ## Just over one triangle
            if compV==false
                A=Rotate(A,ii,jj)
            else
                A,V=Rotate(A,ii,jj;computeV=true,V=V);
            end
        end
    end

    if compV==false
        return A
    else
        return A,V
    end
end

Just creating a matrix of some size

A,A0,V=makeA(5);
## keep evaluating for a couple iterations
## watch how it changes
A,V=Sweep(A;compV=true,V=V);
roundmatrix(A,1e-10),A,V,convergence(A)

This output is after several sweeps

    (
    5x5 Array{Float64,2}:
     -1.59942  0.0       0.0       0.0      0.0
      0.0      1.03678   0.0       0.0      0.0
      0.0      0.0      -0.823094  0.0      0.0
      0.0      0.0       0.0       3.09433  0.0
      0.0      0.0       0.0       0.0      1.3409,

    5x5 Array{Float64,2}:
     -1.59942      5.1314e-30    2.32594e-36  -9.54088e-49  -1.22782e-53
      5.1314e-30   1.03678       2.65014e-38   9.13791e-56   6.64996e-67
      2.32594e-36  2.65014e-38  -0.823094     -9.56652e-61   2.08002e-92
     -9.54088e-49  9.13791e-56  -9.56652e-61   3.09433       0.0
     -1.22782e-53  6.64996e-67   2.08002e-92   0.0           1.3409     ,

    5x5 Array{Float64,2}:
      0.0537334   0.0599494  -0.735228  0.139      0.658511
      0.310018    0.612957   -0.14049   0.611348  -0.367001
      0.759653   -0.475834    0.264118  0.282571   0.216575
     -0.480405   -0.546544   -0.132383  0.644217  -0.194831
     -0.305189    0.30913     0.593648  0.33477    0.588905,

    2.6331310238375346e-59)

Compare the optimized LAPACK routine to your results

eig(A0)
      ([-1.599424470672961,-0.8230937166650976,1.0367806031602211,
      1.3408963512476402,3.0943321944116593],
      5x5 Array{Float64,2}:
       -0.0537334   0.735228   0.0599494  -0.658511  -0.139
       -0.310018    0.14049    0.612957    0.367001  -0.611348
       -0.759653   -0.264118  -0.475834   -0.216575  -0.282571
        0.480405    0.132383  -0.546544    0.194831  -0.644217
        0.305189   -0.593648   0.30913    -0.588905  -0.33477 )
## A good check to make sure V is an orthonormal transformation
roundmatrix(V*A*transpose(V)-A0,1e-12)
      5x5 Array{Float64,2}:
       0.0  0.0  0.0  0.0  0.0
       0.0  0.0  0.0  0.0  0.0
       0.0  0.0  0.0  0.0  0.0
       0.0  0.0  0.0  0.0  0.0
       0.0  0.0  0.0  0.0  0.0

How long does it take to make a Sweep?
How much memory will the computation take?
This is dependent on how large the matrix is, and determines whether or not we
want to use this algorithm.


A,A0,V=makeA(10);
@time Sweep(A);
A,A0,V=makeA(20);
@time Sweep(A);
A,A0,V=makeA(100);
@time Sweep(A);
  0.000028 seconds (320 allocations: 30.469 KB)
  0.000099 seconds (1.33 k allocations: 187.266 KB)
  0.007413 seconds (34.66 k allocations: 17.448 MB, 14.20% gc time)

In addition to time per sweep, we need to know how many sweeps we need to run. So again we run it on a 10×10, 20×20, and 100×100. The efficiency of the algorithm would get a lot worse if we have to sweep the 100×100 a bunch of times.

A10,Ap10,V=makeA(10);
A20,Ap20,V=makeA(20);
A100,Ap100,V=makeA(100);
nsweep=collect(1:7);
conv10=zeros(7)
conv20=zeros(7)
conv100=zeros(7)
for i in nsweep
    A10=Sweep(A10)
    A20=Sweep(A20)
    A100=Sweep(A100)
    conv10[i]=convergence(A10)
    conv20[i]=convergence(A20)
    conv100[i]=convergence(A100)
end

[nsweep conv10/10 conv20/20 conv100/100]
7x4 Array{Float64,2}:
 1.0  1.10944       2.43759      14.6644
 2.0  0.105628      0.312076      2.87182
 3.0  0.000265288   0.017073      0.498082
 4.0  6.64324e-9    0.000119472   0.0390564
 5.0  4.05463e-18   3.56679e-11   0.00133833
 6.0  3.17274e-42   1.96318e-23   6.07661e-7
 7.0  6.76289e-110  4.07871e-49   3.98102e-13

Well, we've now seen one form of exact diagonalization that works, but doesn't scale very well even at 100×100 matrices. So stay tuned for the Householder method, hopefully coming up soon.

Until then, happy computing 🙂

Monte Carlo Calculation of pi

Monte Carlo: Random Numbers to Improve Calculations

When one hears “Monte Carlo”, most people might think of something like this:

Monte Carlo

My Mom and I touring Monte Carlo, Monaco.

Monte Carlo, Monaco: known for extremely large amounts of money, car racing, no income taxes, and copious gambling.

In addition to the original in Monaco, Las Vegas decided to host a Monte Carlo-themed casino as well. So during the Manhattan Project, when the best minds in the United States were camped out in the New Mexican desert, they had plenty of inspiration from Las Vegas, and plenty of difficult problems to work on in the form of quantifying the inner workings of nuclei. Enrico Fermi first played with these ideas, but Stanislaw Ulam later invented the modern Markov chain Monte Carlo method.

At the same time, these scientists now had computers at their disposal. John von Neumann programmed Ulam's algorithm onto the ENIAC (Electronic Numerical Integrator and Computer), the very first electronic general-purpose computer, even though it still ran on vacuum tubes.

That still doesn't answer the question: why do random numbers actually help us solve problems?

Imagine you are visiting a new city for the first time, maybe Monte Carlo. You only have a day or two, and you want to really get to know the city. You have two options for your visit

  • Hit the tourist sites you researched online
  • Wander around. Try and communicate with the locals. Find an out-of-the-way restaurant and sample food not tamed for foreigners. Watch people interact. Get lost.

Both are legitimate ways to see the city. But depending on what you want, you might choose a different option. The same goes for exploring physics problems. Sometimes you want to go in and study just everything you knew about beforehand, but sometimes you need to wander around, not hit everything, but get a better feeling for what everything might be like.

Buffon’s Needle: Calculation of π

Even back in the 18th century, Georges-Louis Leclerc, Comte de Buffon posed a problem in geometric probability. Nowadays, we use a slightly different version of that problem to calculate π and illustrate Monte Carlo simulations.

Suppose we have a square dartboard, and someone with really bad, completely random aim, who nonetheless always at least hits inside the dartboard. We then inscribe a circle in that dartboard. After an infinite number of hits, what is the ratio between hits in the circle and hits in the square?

dartboard

Randomly thrown darts that can either be in the circle or not.
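Since the darts land uniformly, that ratio approaches the ratio of the two areas. For a circle of radius $r$ inscribed in a square of side $2r$,

$$\frac{\text{hits in circle}}{\text{hits in square}} \rightarrow \frac{\pi r^2}{(2r)^2} = \frac{\pi}{4},$$

which is why the code below multiplies the in-circle fraction by 4 to estimate π.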

Onto the Code!

#Pkg.update()
#Pkg.add("PyPlot")
using PyPlot

We will generate our random numbers on the unit interval. Thus the radius in our circumstance is $.5$.

Write a function incircle(r2) such that if r2 is in the circle, it returns true; else, it returns false. We will use this with the Julia function filter. Assume r2 is the radius squared, already measured from the center of the unit square.

function incircle(r2)
    if r2<.25
        return true
    else
        return false
    end
end
#The number of darts we will throw at the board.  We will see how accurate different numbers are
N=[10,25,50,75,100,250,500,750,1000,2500,5000,7500,10000];
# We will perform each number multiple times in order to calculate error bars
M=15;
πapprox=zeros(Float64,length(N),M);

for ii in 1:length(N)
    #initialize an array of proper size
    X=zeros(Float64,N[ii],2);
    for jj in 1:M

        #populate our array with random numbers on the unit interval
        rand!(X);

        #calculate their radius squared
        R2=(X[:,1]-.5).^2+(X[:,2]-.5).^2

        # 4*number in circle / total number
        πapprox[ii,jj]=4*length(filter(incircle,R2))/N[ii];

    end
end
# Get our averages and standard deviations
πave=sum(πapprox,2)/M;
πstd=std(πapprox,2);

Analysis

So that was a nice, short little piece of code. Let's plot it now and see what it tells us.

title("Monte Carlo Estimation of π")
ylabel("π estimate")
xlabel("N points")
plot(N,π*ones(N));

for j in 1:M
    scatter(N,πapprox[:,j],marker="o",color="green");
end
ax=gca()
errorbar(N,π*ones(N),yerr=πstd,color="red",fmt="o")
ax[:set_xscale]("log");
ax[:set_xlim]([5,5*10^4]);

Result

Image result. I inverted the color scale though.

When we have fewer points, our estimates vary much more wildly, and fall much further from 3.1415926.
But, at least, the guesses from our different runs all seem evenly distributed around the correct value, so it seems we have no systematic error.

As we get up to $10^4$, our estimate starts getting much more accurate and consistent.

title("Dependence of Monte Carlo Error on Number of Points")
ylabel("standard deviation")
xlabel("N points")
semilogx(N,πstd,marker="o");

Result

Image result. Colors tweaked.

So what we guessed about the dispersion of the estimates in the first plot, we quantify here in this plot. When we only have 10 darts, the guesses vary by up to .3, but once we get up to 1,000 trials, we are starting to be consistent to .0002.

title("Overall Averages")
xlabel("N steps")
ylabel("Average of 15 runs")
semilogx(N,π*ones(N));
semilogx(N,πave,marker="o");

Result

Image result. Colors tweaked.

Now let's just make a graphical representation of what we've been doing this whole time: plot our points on the unit square, and color the ones inside the circle a different color.

X=zeros(Float64,1000);
Y=zeros(Float64,1000);
rand!(X);
rand!(Y);
R2=(X-.5).^2+(Y-.5).^2;
Xc=[];
Yc=[]
for ii in 1:length(X)
    if R2[ii]<.25
        push!(Xc,X[ii]);
        push!(Yc,Y[ii]);
    end
end

title("The dartboard")
xlim(0,1)
ylim(0,1)
scatter(X,Y);
scatter(Xc,Yc,color="red");

Result

Result. Colors tweaked

That’s all folks!
Now here’s a picture of some pie to congratulate you on calculating π.

pie

By Scott Bauer, USDA ARS – This image was released by the Agricultural Research Service, the research agency of the United States Department of Agriculture, with the ID K7252-47 (next).This tag does not indicate the copyright status of the attached work. A normal copyright tag is still required. See Commons:Licensing for more information.English | français | македонски | +/−, Public Domain, https://commons.wikimedia.org/w/index.php?curid=264106