
Monte Carlo Ferromagnet

Post Prerequisites: Monte Carlo Calculation of pi, Monte Carlo Markov Chain

Prerequisites: Statistical Mechanics

The physics

What process turns liquid water into a solid? As temperature lowers, a material transitions from a regime where maximizing entropy dominates its behavior to one where minimizing energy dominates. In transitions like liquid-solid, the energy arises from atom-atom forces. This problem falls into the category of first-order phase transitions, which are difficult to study.

Instead of looking at the liquid-solid problem to understand phase transitions, we will look at a magnetic material undergoing a second-order phase transition. In our approximation of the problem, the atoms exist at fixed positions on a lattice and interact with their neighbors according to the following Hamiltonian,
\begin{equation}
{\cal H} = -J \sum_{<i,j>} S^z_i S_j^z
\end{equation}
, which is just the total energy of a spin configuration. In this Ising model, only nearest neighbors $<i,j>$ interact, and each spin variable solely points in the $\pm z$ direction.

At a given temperature $T$, with inverse temperature $\beta=1/T$ (setting $k_B=1$), the occupancy of a given configuration $c_i$ follows the Boltzmann probability distribution,
\begin{equation}
P(c_i)=\frac{\mathrm{e}^{-\beta E(c_i)}}{\sum\limits_j \mathrm{e}^{-\beta E(c_j)}}
\end{equation}
We need to determine observables under this probability distribution.

The Numerics

In the post on the Markov Chain, each state was a location on our grid. Now our state is one configuration of all our $N$ spins. That means for an Ising spin we have $2^N$ different configurations. Yeah… We aren’t going to be exploring all of those.
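
To see why not, we can brute-force a tiny system. This sketch (the helper exact_ising is my own, not part of this post's code) enumerates all $2^9=512$ configurations of a $3 \times 3$ periodic lattice and averages $|M|$ with the Boltzmann weights from above. The same loop for a modest $50 \times 50$ lattice would have $2^{2500}$ terms.

function exact_ising(L,beta,J)
    Z=0.0; Msum=0.0
    for c in 0:2^(L*L)-1
        S=reshape([2*((c>>k)&1)-1 for k in 0:L*L-1],L,L) # bits of c → ±1 spins
        E=0.0
        for i in 1:L, j in 1:L  # count each nearest-neighbor bond exactly once
            E-=J*S[i,j]*(S[mod1(i+1,L),j]+S[i,mod1(j+1,L)])
        end
        w=exp(-beta*E)
        Z+=w
        Msum+=abs(sum(S))*w
    end
    return Msum/(Z*L*L)  # ⟨|M|⟩ per site
end

println(exact_ising(3,.7,1.))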

So how do we get from one state to the next?

First, we choose a spin that we could flip. This potential spin is chosen randomly for convenience’s sake.

Then we use the Metropolis-Hastings algorithm to decide whether or not to flip the spin and generate a new configuration.

We split the acceptance decision into two cases based on $\alpha = \frac{\pi_i}{\pi_j}= \frac{P(c_i)}{P(c_j)}$, where $c_j$ is the current configuration and $c_i$ the proposed one:

  • If $\alpha > 1$, always accept the transition: $p_{j \to i} = 1$
  • If $\alpha < 1$, accept the transition with probability $p_{j \to i} = \alpha$

Does this obey detailed balance?

From the Monte Carlo Markov Chain post, we take our condition for detailed balance,

\begin{equation}
\frac{p_{j \to i}}{ p_{i \to j}} = \frac{\pi_i}{\pi_j}
\end{equation}

where $p_{j \to i}$ denotes the probability of transitioning from configuration $c_j$ to $c_i$. Check: if $\alpha > 1$, then $p_{j \to i} = 1$ while the reverse move is accepted with $p_{i \to j} = 1/\alpha$, so the ratio is $\alpha = \pi_i/\pi_j$. The $\alpha < 1$ case works out the same way.

How about ergodicity? Any configuration can be reached from any other by a finite sequence of single spin flips, so the chain is ergodic.

For the Ising Ferromagnet, our $\alpha$ is
\begin{equation}
\alpha= \frac{Z}{Z} \frac{\mathrm{e}^{-\beta E_i}}{\mathrm{e}^{-\beta E_j}}
= \mathrm{e}^{\beta \left(E_j - E_i \right)}
\end{equation}
which is simply a function of the energy difference between the two states; the partition function $Z$ cancels. Therefore we don't need to know the absolute energy at each point in time, just how the spin flip changes its local environment.
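
In code, the entire Metropolis decision reduces to one line. A minimal sketch, with my own name accept_flip and the convention deltaE = E_proposed - E_current:

# accept with probability min(1, exp(-beta*deltaE))
accept_flip(deltaE,beta) = deltaE<=0 || rand()<exp(-beta*deltaE)

The simulation below uses the equivalent test rand()<exp(2*beta*de), because its dE(i) returns the energy of site i's bonds, and flipping that spin changes the total energy by -2*dE(i).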

Tunable Parameters

I will say a little bit about the tunable parameters here, but Monte Carlo simulations are as much an art as a science, so I leave it up to you to play with the numbers and build intuition for what works.

Temperature

On the left I am performing a low-temperature search of a couch store. I found a minimum, and even though I got stuck there, I thoroughly got to know what that minimum had to offer. On the right, though, I'm performing a high-temperature search of the Alexander Hills, Death Valley, California, in pursuit of Neoproterozoic carbonates and diamictites. I needed to be able to explore a lot more of the environment.

While temperature has a physical meaning in a physical system like a magnet, we can create other situations, such as optimization or approximating a probability distribution, where we use temperature to describe how stuck we are in a particular minimum. This intuition also plays a role in interpreting the results of our simulation.

Temperature also provides complications when we are studying critical phenomena, i.e., phase transitions. Near the critical temperature, computations become much more difficult; I will elaborate in a later post.

Size of Lattice

  • A larger lattice gives more precise results.
  • A larger lattice takes more memory and time.
  • Finite-size effects, to be discussed later, display some interesting physics.

Coupling Constant J

Usually normalized out anyway…

Number of time steps

Each Monte Carlo time step is one complete sweep of the lattice, $N$ random flip attempts. The more time steps, the more accurate the results, but the longer you have to wait.

When to Measure

Though a time step does include $N$ spin flip attempts, that doesn't mean we have actually moved to a truly new configuration. If we want to randomly sample our configuration space, we need to wait a bit longer between samples. Too short a time, and our measurements are correlated and our error estimates unreliable. Too long, and we are wasting computations. In a later post, I will look at the autocorrelation time, which is a good way to figure out whether your measurement time is good.
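
As a preview, here is a minimal sketch of that diagnostic. The helper name autocor_at is mine, not from a library; it assumes m is a vector of measurements, such as Ma/lt.N from the code below. The smallest lag tau where this drops near zero is a reasonable choice for nskip.

function autocor_at(m,tau)
    n=length(m)
    mu=sum(m)/n
    c0=sum((m.-mu).^2)/n                               # variance of the series
    ct=sum((m[1:n-tau].-mu).*(m[1+tau:n].-mu))/(n-tau) # covariance at lag tau
    return ct/c0    # 1 at tau=0, near 0 once samples have decorrelated
end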

Other potential Options

Some less important things: periodic versus open boundary conditions, and the shape and symmetries of the simulation cell.

using PyPlot;
push!(LOAD_PATH,".")
using Lattices;

Instead of going into calculating all the lattice parameters again, we will use a type I define in the file Lattices.jl. The file contains

Lattice Types

  • Chain
  • Square
  • Honeycomb

You can edit the file to make your own types.

Once a lattice is created, it contains the following fields:

  • name, a string
  • l, length in number of unit cells
  • dim, dimension of lattice
  • a, array containing the basis vectors by which positions are generated
  • unit, array of positions inside a single unit
  • N, number of total sites
  • X, array of positions
  • nnei, number of nearest neighbors
  • neigh, array of nearest neighbors, indexed [i,j], where i is the site and j runs over 1:nnei

Today, I will just look at the square lattice, since it displays much of the standard phase transition behavior. Some of the lattices I have shown (kagome, triangular, …) are special frustrated lattices, and thus will behave very weirdly in this situation.
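
A quick usage sketch (the printed values are what I would expect from the field list above; treat the snippet as illustrative):

lt=MakeLattice("Square",4);
println(lt.N)          # 16 sites for a 4x4 square lattice
println(lt.nnei)       # 4 nearest neighbors per site on the square lattice
println(lt.neigh[1,:]) # the neighbors of site 1, indexed the same way dE uses below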

## Define l here
l=50;

lt=MakeLattice("Square",l);
S=ones(Int8,l,l);  #Our spins
dt=1/(lt.N);
# The energy contribution of just one site: -J*S[i]*(sum of its neighbors).
# Note: flipping S[i] changes the total energy by -2*dE(i).
function dE(i::Int)
    Eii=0;
    for j in 1:lt.nnei
        Eii+=S[lt.neigh[i,j]];
    end
    Eii*=-J*S[i];  # we are computing -J sz_i sz_j, summed over the neighbors j of this i
    return Eii;
end
# The energy of the entire lattice; the factor of .5 corrects for double-counting each bond
function E()
    Evar=0;
    for k in 1:lt.N
        Evar+=.5*dE(k);
    end
    return Evar;
end
# The magnetization of the entire lattice
function M()
    Mvar=0;
    for k in 1:lt.N
        Mvar+=S[k];
    end
    return Mvar;
end
"defined functions"

Adjustable Parameters

I have set up the simulation so that you can perform two different things. For one, you can set video=true and t to a small value. Then, in a new window, you will see what the configuration looks like each time you measure.

Or you can set video=false and t to a large value, and actually measure the statistics of the system over a bunch of configurations.

beta=.7;
J=1;
t=100000;
video=false;
nskip=10;   # don't measure every sweep= better decorrelation
"Parameters set"
nmeas=Int64(t/nskip); # how many times we will measure
Ma=Array{Int32}(nmeas); # our magnetization measurements
Ea=Array{Int32}(nmeas); # our energy measurements
"done"
tm=1; #Our measurement time step
pygui(true)
for ti in 1:t
    for j in 1:lt.N
        i = rand(1:lt.N); #Choosing a random site
        de=dE(i);
        # Metropolis: the flip changes the energy by -2*de,
        # so accept with probability min(1, exp(2*beta*de))
        if(de>0 || rand()<exp(2*beta*de) )
            S[i]=-S[i]; #Switch the sign
        end
    end
    if mod(ti,nskip)==0  # measure every nskip-th sweep
        Ma[tm]=M();
        Ea[tm]=E();
        tm+=1;

        if video==true
            pygui(true)
            pcolor(S,cmap="winter")
            magnow=round(M()/lt.N,3)
            title("t: $ti  M: $magnow ")
            draw()
            sleep(.5)
            #nm=dec(ti,4)
            #savefig("Magnetimages2/pic_l50_bp7_$nm.png")
        end
    end
end
Mave=mean(Ma/lt.N);
Mstd=std(Ma/lt.N);
Eave=mean(Ea/lt.N);
Estd=std(Ea/lt.N);
Mave, Mstd
pygui(false)
title("Magnetization versus Time, Beta=$beta")
xlabel("Monte Carlo Steps")
ylabel("Average Magnetization")
plot(collect(1:10:5000),Ma[1:500]/lt.N)
annotate(" White Noise = Well Mixing",
xy=[500, Mave ],
xytext=[1000, Mave-.0125],
    xycoords="data",
    arrowprops=Dict("facecolor"=>"green"))
pygui(false)
plt[:hist](Ma/lt.N,bins=30,normed=true,label="Samples Normed");
x=collect(Mave-Mstd*3:.001:Mave+Mstd*3)
gaussian=1/(Mstd*sqrt(2*pi))*exp(-(x-Mave).^2/(2*Mstd^2));
plot(x,gaussian,linewidth=5,label="Gaussian Fit")
title("Magnetization Sample Distribution beta=$beta")
xlabel("Mangetization")
ylabel("Normed Counts")
legend(loc="upper right")
#savefig("Ferromagnet/maghist_bp3.png")
pygui(false)
title("Energy versus Time, Beta=$beta")
xlabel("Monte Carlo Steps")
ylabel("Average Energy")
plot(collect(1:10:5000),Ea[1:500]/lt.N)
annotate(" White Noise = Well Mixing",
    xy=[500, Eave ],
xytext=[1000, Eave-.03],
   xycoords="data",
   arrowprops=Dict("facecolor"=>"green"))
pygui(false)
plt[:hist](Ea/lt.N,bins=30,normed=true,label="Samples Normed");
x=collect(Eave-3*Estd:.001:Eave+3*Estd)
gaussian=1/(Estd*sqrt(2*pi))*exp(-(x-Eave).^2/(2*Estd^2));
plot(x,gaussian,linewidth=5,label="Gaussian Fit")
title("Energy Histogram, beta=$beta")
xlabel("Energy")
ylabel("Normed Counts")
#savefig("Ferromagnet/Ehist_bp3.png")

Example Results

So here are some example results I got.

Paramagnet

States of a Paramagnet at $\beta=0.2$

Samples over Time

Paramagnet magnetization samples. A properly tuned simulation should show white noise, without correlations or getting stuck in particular areas.

Magnetization Histogram

Energy Histogram

Magnet

States of a Magnet at $\beta=0.7$

Magnet Histogram

Histogram of the magnetization at $\beta=0.7$

Energy Histogram

Histogram of energy at $\beta=0.7$

To be covered later:

This is an extremely rich problem. Since this post seems long enough to me already, I'll leave these as exercises for you right now, or you can wait until I hold your hand through them.

  • Plot magnetization and energy as a function of temperature
  • What’s the transition temperature?
  • How does the dispersion change as a function of temperature? Use that to calculate specific heat and susceptibility
  • How do results change with system size?
  • Change dimension?
  • Put it on a different lattice. BE CAREFUL of lattices like triangular, checkerboard, pyrochlore, …
  • Ferromagnetic versus Antiferromagnetic coupling
  • Autocorrelation function
  • Structure Factor, Fourier Transform

Atomic Orbitals

Prerequisites: Quantum Mechanics course

Electrons around a nucleus. Do they look like little well behaved planets orbiting a sun?

NOPE!

We get spread-out blobs in special little patterns called orbitals. Here, we will look at their shapes and properties a bit. Today we will look at graphs in 1D and 2D, but the next post, Atomic Orbitals Pt. 2, uses a fancy but slightly unstable plotting package, GLVisualize, to generate some 3D plots.

The Hamiltonian for our problem is:

\begin{equation}
{\cal H}\Psi(\mathbf{x}) =\left[ -\frac{\hbar^2}{2 m} \nabla^2 - \frac{Z e^2}{4 \pi \epsilon_0 r}\right]\Psi(\mathbf{x}) = E \Psi(\mathbf{x})
\end{equation}
with
\begin{equation}
\nabla^2= \frac{1}{r^2}\frac{\partial}{\partial r} \left(
r^2 \frac{\partial}{\partial r}
\right)+
\frac{1}{r^2 \sin \theta} \frac{\partial}{\partial \theta} \left(
\sin \theta \frac{\partial}{\partial \theta}
\right)+
\frac{1}{r^2 \sin^2 \theta} \frac{\partial^2}{\partial \phi^2}
\end{equation}

To solve this problem, we begin by guessing a solution with separated radial and angular variables,
\begin{equation}
\Psi(\mathbf{x}) = R(r) \Theta ( \theta,\phi)
\end{equation}

Substituting this ansatz and rearranging puts all of the $r$ dependence on one side and all of the angular dependence on the other. Since the two sides depend on different variables, each must equal a constant $C$,

\begin{equation}
\frac{E r^2 R(r)}{2r R^{\prime}(r) + r^2 R^{\prime \prime}(r)}=
\frac{ \frac{1}{\sin \theta} \frac{\partial}{\partial \theta} \left(
\sin \theta \frac{\partial \Theta(\theta,\phi)}{\partial \theta}
\right)+
\frac{1}{\sin^2 \theta} \frac{\partial^2 \Theta(\theta,\phi)}{\partial \phi^2} }{\Theta( \theta, \phi)}
=C
\end{equation}

Instead of going into the precise mechanics of solving those two separate equations here, trust for now that the solutions are standard special functions: the associated Legendre polynomials for the angular part and the generalized Laguerre polynomials for the radial part. Try a standard quantum mechanics textbook for more information.

\begin{equation}
Y^m_l(\theta,\phi) = (-1)^m \mathrm{e}^{i m \phi} P^m_l (\cos\theta)
\end{equation}
where $P^m_l (\cos \theta)$ is the associated Legendre polynomial.

\begin{equation}
R^{n,l} ( \rho ) = \rho ^l \mathrm{e}^{- \rho /2} L^{2 l+1}_{n-l-1} ( \rho )
\end{equation}
where $L^{2 l+1}_{n-l-1}(\rho)$ is the generalized Laguerre polynomial, written in terms of the unitless radial coordinate

\begin{equation}
\rho=\frac{2r}{n a_0}.
\end{equation}

The normalization constant, which depends on this choice of polynomial conventions, is

\begin{equation}
N=\sqrt{\left(\frac{2}{n}\right)^3 \frac{(n-l-1)!}{2n\,(n+l)!}}.
\end{equation}

#Pkg.update();
#Pkg.add("GSL");
#Pkg.add("PyPlot");
using GSL;    #GSL holds the special functions
using PyPlot;

Cell to Evaluate

What's below is a bunch of definitions that will make our calculations easier later on. Here I utilize the GNU Scientific Library, GSL, imported above, to calculate the special functions.

Programming Tip!

Even though it's not necessary, specifying the type of a function's inputs, as in m::Int, helps prevent improper inputs and allows the compiler to perform additional optimizations. Julia also implements abstract types, so we don't have to specify an exact type: Real allows any numerical, non-complex type.
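
For example, this hypothetical two-argument function accepts any non-complex number in its second slot:

g(m::Int,θ::Real)=m*θ
g(2,0.5)     # works, since Float64 <: Real
g(2,1//3)    # works, since Rational <: Real
#g(2,1+2im)  # would throw a MethodError, since Complex is not <: Real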

Type Greek characters in Jupyter notebooks via LaTeX syntax, e.g. \alpha followed by Tab.

The function Orbital throws a DomainError() when l or m does not obey its bounds. Julia supports a wide variety of easy-to-use error types.

a0=1; # for convenience, or 5.2917721092(17)×10^-11 m

# The unitless radial coordinate
ρ(r,n)=2r/(n*a0);

#The θ dependence
function Pmlh(m::Int,l::Int,θ::Real)
    return (-1.0)^m *sf_legendre_Plm(l,m,cos(θ));
end

#The θ and ϕ dependence
function Yml(m::Int,l::Int,θ::Real,ϕ::Real)
    return  (-1.0)^m*sf_legendre_Plm(l,m,cos(θ))*e^(im*m*ϕ)
end

#The Radial dependence
function R(n::Int,l::Int,ρ::Real)
    if isapprox(ρ,0)
        ρ=.01
    end
     return sf_laguerre_n(n-l-1,2*l+1,ρ)*e^(-ρ/2)*ρ^l
end

#A normalization: This is dependent on the choice of polynomial representation
function norm(n::Int,l::Int)
    return sqrt((2/n)^3 * factorial(n-l-1)/(2n*factorial(n+l)))
end

#Generates an Orbital function of (r,θ,ϕ) for a specified n,l,m.
function Orbital(n::Int,l::Int,m::Int)
    if l>n    # we make sure l and m are within proper bounds
        throw(DomainError())
    end
    if abs(m)>l
        throw(DomainError())
    end
    psi(ρ,θ,ϕ)=norm(n, l)*R(n,l,ρ)*Yml(m,l,θ,ϕ);
    return psi
end

#We will calculate in spherical coordinates, but plot in Cartesian coordinates, so we need this array conversion
function SphtoCart(r::Array,θ::Array,ϕ::Array)
    x=r.*sin(θ).*cos(ϕ);
    y=r.*sin(θ).*sin(ϕ);
    z=r.*cos(θ);
    return x,y,z;
end

function CarttoSph(x::Array,y::Array,z::Array)
    r=sqrt(x.^2+y.^2+z.^2);
    θ=acos(z./r);
    ϕ=atan2(y,x);  # atan2 picks the correct quadrant, unlike atan(y./x)
    return r,θ,ϕ;
end

"Defined Helper Functions"

Parameters

Grid parameters:
You might need to change rmax to be able to view higher $n$ orbitals.

Remember that
\begin{equation}
0<n \;\;\;\;\;\;\;\; 0 \leq l < n \;\;\;\;\;\;\;\; -l \leq m \leq l
\;\;\;\;\;\;\;\; n,l,m \in {\cal Z}
\end{equation}

# Grid Parameters
rmin=.05
rmax=10
Nr=100 # number of sample points along each coordinate
Nθ=100
Nϕ=100

# Choose which Orbital to look at
n=3;
l=1;
m=0;
"Defined parameters"
#Linear arrays of spherical coordinates
r=collect(linspace(rmin,rmax,Nr));
θ=collect(linspace(0,π,Nθ));
ϕ=collect(linspace(0,2π,Nϕ));
#3D arrays of spherical coordinates, in order r,θ,ϕ
ra=repeat(r,outer=[1,Nθ,Nϕ]);
θa=repeat(transpose(θ),outer=[Nr,1,Nϕ]);
ϕa=repeat(reshape(ϕ,1,1,Nϕ),outer=[Nr,Nθ,1]);

x,y,z=SphtoCart(ra,θa,ϕa);

Though I could create a wrapped-up function with Orbital(n,l,m) and evaluate it at each point, the evaluation below takes advantage of the separability of the solution in the spherical coordinates. The special functions, especially for higher modes, take time to calculate, and the fewer calls to GSL, the faster the code will run. Therefore, this implementation computes each factor once along its own axis and then copies the radial and angular responses across the rest of the array.

Ψ=zeros(Float64,Nr,Nθ,Nϕ)  # a real array works since m=0 below; use a complex array for m≠0
θd=Int64(round(Nθ/2))  ## gives approximately the equator.  Will be useful later

p1=Pmlh(m,l,θ[1]);
p2=exp(im*m*ϕ[1]);
# fill in the radial dependence along one (θ,ϕ) ray
for i in 1:Nr
    Ψ[i,1,1]=norm(n,l)*R(n,l,ρ(r[i],n))*p1*p2;
end
# copy across θ, rescaling by the θ dependence
for j in 1:Nθ
    Ψ[:,j,1]=Ψ[:,1,1]*Pmlh(m,l,θ[j])/p1;
end
# copy across ϕ, rescaling by the ϕ dependence
for k in 1:Nϕ
    Ψ[:,:,k]=Ψ[:,:,1]*exp(im*m*ϕ[k])/p2;
end
pygui(false)
xlabel("θ")
ylabel("Ψ")
title("Wavefunction for n= $n ,l= $l ,m= $m ")

annotate("l= $l Angular Node",
xy=[π/2;0],
xytext=[π/2+.1;.02],
xycoords="data",
arrowprops=Dict("facecolor"=>"black"))

plot(θ,zeros(θ))
plot(θ,reshape(Ψ[50,:,1],100)) #reshape makes Ψ 1D

2p Angle Slice

A slice along the θ plane showing an angular node for the 2p orbital.

pygui(false)
xlabel("r")
ylabel("Ψ")
title("Wavefunction for n= $n ,l= $l ,m= $m ")

plot(r,zeros(r))
plot(r,reshape(Ψ[:,50,1],100)) #reshape makes Ψ 1D

3p Radial Slice

A slice along the radial plane showing a radial node in the 3p orbital.

#rap=squeeze(ra[:,:,50],3)
#θap=squeeze(θa[:,:,50],3)
#ϕap=squeeze(ϕa[:,:,50],3)
#Ψp=squeeze(Ψ[:,:,50],3)
rap=ra[:,:,50]
θap=θa[:,:,50]
ϕap=ϕa[:,:,50]
Ψp=Ψ[:,:,50]
xp,yp,zp=SphtoCart(rap,θap,ϕap);
pygui(false)
xlabel("x")
ylabel("z")
title("ϕ-slice of Ψ for n=$n, l=$l, m=$m")
pcolor(xp[:,:],zp[:,:],Ψp[:,:],cmap="coolwarm")
colorbar()

3p in 2d

Slice of a 3p orbital in the x and z plane.

3dz2 in 2d

Slice of a 3dz2 orbital in the x and z plane.

Don’t forget to check out Atomic Orbitals Pt. 2!

Quantum Harmonic Oscillator

Prerequisites: Quantum Mechanics course

Slinkies. They started out as toys. I still have one to play with on my desk.

Rubber bands. What was once something useful is now a wonderful projectile weapon.

Swings. I still love them, but people seem to not make them in adult sizes for some reason.

A person's perception of these objects starts to change as they enter their first physics class. Even in that beginning class of classical mechanics, the problems are filled with harmonic oscillators, like slinkies, rubber bands, or swings, which exert a force proportional to their displacement,
\begin{equation}
F = -k x,
\end{equation}
and therefore have a quadratic potential,
\begin{equation}
V(x) = \frac{1}{2} k x^2.
\end{equation}

This is all extremely fun and useful in the classical regime, but add quantum mechanics to the mix, and LO AND BEHOLD! we have one of the few exactly solvable models in quantum mechanics. Moreover, this solution demonstrates some extremely important properties of quantum mechanical systems.

The Hamiltonian

\begin{equation}
{\cal H} = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2
\end{equation}

The Solution

\begin{equation}
\Psi_n(x) = \frac{1}{\sqrt{2^n n!}} \left( \frac{m \omega}{\pi \hbar} \right)^{1/4}
\mathrm{e}^{-\frac{m \omega x^2}{2 \hbar}} H_n\left( \sqrt{\frac{m \omega}{\hbar}}\, x \right)
\end{equation}

Today, I just intend to present the form of the solution, calculate it numerically, and visualize the results. If you wish to know how the solution is derived, consult a standard quantum mechanics textbook, or stay tuned till I manage to write it up.

Physicists’ Hermite Polynomials

Note: These are not the same as the “probabilists' Hermite polynomials”. The two conventions differ by a rescaling: $He_n(x) = 2^{-n/2} H_n\left(x/\sqrt{2}\right)$.

Physicists' Hermite polynomials are defined as the eigenfunctions of the differential equation
\begin{equation}
H_n^{\prime \prime}(x) - 2x H_n^{\prime}(x) + 2n H_n(x) = 0.
\end{equation}

I leave it as an exercise to the reader to:

  • demonstrate orthogonality with respect to the measure $\mathrm{e}^{-x^2}$, i.e.
\begin{equation}
\int_{-\infty}^{\infty} H_m(x) H_n(x) \mathrm{e}^{-x^2} \mathrm{d}x = \sqrt{\pi}\, 2^n n!\, \delta_{mn}
\end{equation}
(a numerical check follows this list)

  • demonstrate completeness. This means we can describe every function by a linear combination of Hermite polynomials, provided it is suitably well behaved.
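
If you would like to check the orthogonality relation numerically first, here is a minimal sketch. The helper hermite_val is my own, evaluating $H_n$ with the recurrence given below, and the integral uses a crude rectangle sum, which is fine since the weight decays so quickly.

function hermite_val(n,x)
    h0,h1=1.0,2.0*x
    n==0 && return h0
    for k in 1:n-1
        h0,h1=h1,2.0*x*h1-2.0*k*h0  # H_{k+1} = 2x H_k - 2k H_{k-1}
    end
    return h1
end

xs=collect(-10:.001:10);
for m in 0:2, n in 0:2
    val=sum(hermite_val(m,x)*hermite_val(n,x)*exp(-x^2) for x in xs)*.001
    println("m=$m n=$n integral ≈ $val") # ≈ √π 2^n n! when m=n, ≈ 0 otherwise
end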

Though a formula exists for calculating a Hermite polynomial at a given $n$ directly, the most efficient method at low $n$ relies on the recurrence relations. These recurrence relations will also be quite handy if you ever need to show orthogonality or compute expectation values.

Recurrence Relations

\begin{equation}
H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x), \;\;\;\; H_0(x) = 1, \;\;\;\; H_1(x) = 2x
\end{equation}

#Pkg.update();
#Pkg.add("PyPlot");
#Pkg.update()
#Pkg.add("Roots")
using Roots;
using PyPlot;

Programming Tip!

Since Hermite polynomials are generated recursively, I wanted to generate and save all the functions up to a designated value at once. In order to do so, I created an array, whose values are anonymous functions.

function GenerateHermite(n)
    Hermite=Function[]

    push!(Hermite,x->1);
    push!(Hermite,x->2*x);

    for ni in 3:n
        # Hermite[k] holds H_{k-1}, so H_{ni-1} = 2x H_{ni-2} - 2(ni-2) H_{ni-3}
        push!(Hermite,x->2.*x.*Hermite[ni-1](x).-2.*(ni-2).*Hermite[ni-2](x))
    end
    return Hermite
end

Let’s generate some Hermite polynomials and look at them.
Make sure you don’t call a Hermite you haven’t generated yet!

Hermite=GenerateHermite(5)

Programming Tip!

Since the Hermite polynomials, and the wavefunctions after them, are composed of anonymous functions, we need to use map(f,x) in order to map the function f onto the array x. Otherwise our polynomials only work on single values.

x=collect(-2:.01:2);
for j in 1:5
    plot(x,map(Hermite[j],x),label="H_$(j-1) (x)")  # Hermite[j] holds H_{j-1}
end
legend()
ylim(-50,50)

Hermite Polynomials

First few Hermite Polynomials

# Lets make our life easy and set all units to 1
m=1
ω=1
ħ=1

#Finally, we define Ψ. Since Hermite[n] holds H_{n-1}, Ψ(n,x) is the (n-1)st eigenstate,
#and the normalization prefactor uses n-1 as well
Ψ(n,x)=1/sqrt(factorial(n-1)*2^(n-1))*(m*ω/(ħ*π))^(1/4)*exp(-m*ω*x^2/(2*ħ))*Hermite[n](sqrt(m*ω/ħ)*x)
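
A quick sanity check of that prefactor (my own addition, assuming the cells above have run): each state should have unit norm, so a crude rectangle sum of |Ψ(1,x)|² should return approximately 1.

xs=collect(-10:.01:10);
println(sum(abs2(Ψ(1,x)) for x in xs)*.01)  # ≈ 1.0 for the ground state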

Finding Zeros

The eigenvalue maps to the number of zeros in the wavefunction. Below, I use the Roots.jl package to identify the zeros on the interval from -3 to 3.

zeds=Array{Array{Float64}}(1)
zeds[1]=[] #ground state has no zeros
for j in 2:4
    push!(zeds,fzeros(y->Ψ(j,y),-3,3))
end
# AHHHHH! So Much code!
# Don't worry; it's all just plotting
x=collect(-3:.01:3)  #Set some good axes

for j in 1:4    #how many do you want to view?
    plot(x,map(y->Ψ(j,y),x)+j-1,label="| $j >")
    plot(x,(j-1)*ones(x),color="black")
    scatter(zeds[j],(j-1)*ones(zeds[j]),marker="o",s=40)
end
plot(x,.5*m*ω^2*x.^2,linestyle="--",label="Potential")

scatter([],[],marker="o",s=40,label="Zeros")
xlabel("x")
ylabel("Ψ+n")
title("Eigenstates of a Harmonic Osscilator")
legend()
xlim(-3,3);
ylim(-.5,4.5);

Example Result

Eigenstates

Eigenstates of the Quantum Harmonic Oscillator

More to come

This barely scratched the surface of the richness of the quantum harmonic oscillator. Here, we have developed a way of calculating the wavefunctions and visualized the results. Stay tuned to hear about the ground state energy, ladder operators, and atomic trapping.