STAC-A2 Benchmark

By: Julia Computing, Inc.

Re-posted from:

Julia Computing has recently completed an initial implementation of the STAC-A2 benchmark. STAC-A2 is an industry-standard benchmark suite for testing compute-intensive analytic workloads in options pricing and risk management. The primary application defined in the STAC-A2 benchmark is the computation of various “Greeks”, derivatives of the calculus variety, for a particular type of exotic option, a derivative of the financial variety. The main benchmark consists of a Monte Carlo simulation of the underlying securities using the Andersen QE formulation of the Heston stochastic volatility model, followed by American-exercise pricing of the option using the Longstaff and Schwartz method. The particular option being evaluated is a lookback, best-of option on a basket of securities. The benchmark tests the scaling of an implementation by discretizing the simulation over a cube of time steps, Monte Carlo paths, and number of assets in the basket.
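The benchmark's own scheme is Andersen QE with Longstaff-Schwartz exercise, which is involved; as a much simpler illustration of the Monte Carlo piece only, here is a hedged sketch using a full-truncation Euler discretization of the Heston model for a single asset. All parameter names and values (`kappa`, `theta`, `sigma`, and so on) are illustrative assumptions, not taken from the STAC-A2 specification.

```julia
# Simplified sketch (NOT the STAC-A2 Andersen QE scheme): full-truncation
# Euler discretization of the Heston stochastic volatility model.
function heston_paths(; S0=100.0, v0=0.04, r=0.01, kappa=1.5, theta=0.04,
                        sigma=0.3, rho=-0.7, T=1.0, nsteps=250, npaths=1000)
    dt = T / nsteps
    S = fill(S0, npaths)   # asset price along each path
    v = fill(v0, npaths)   # instantaneous variance along each path
    for step in 1:nsteps
        for p in 1:npaths
            z1 = randn()
            z2 = rho * z1 + sqrt(1 - rho^2) * randn()  # correlated increments
            vp = max(v[p], 0.0)                        # full truncation of variance
            S[p] *= exp((r - vp / 2) * dt + sqrt(vp * dt) * z1)
            v[p] += kappa * (theta - vp) * dt + sigma * sqrt(vp * dt) * z2
        end
    end
    return S, v
end

S, v = heston_paths()
```

The greeks in the benchmark are then finite-difference sensitivities of the price computed from such simulated paths.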

The initial Julia implementation of the STAC-A2 benchmark is a single-process, single-threaded implementation using Julia 0.4.5 on an Amazon Web Services (AWS) Elastic Compute Cloud (EC2) r3.2xlarge server. It mirrors the sequential algorithms of the example STAC-A2 reference model. The unaudited report, configuration disclosure, and code for the current Julia implementation are available to STAC members with a premium subscription.

Monte Carlo Ferromagnet

By: Metals, Magnets, and Miscellaneous Materials

Re-posted from:

Post Prerequisites: Monte Carlo Calculation of pi, Monte Carlo Markov Chain

Prerequisites: Statistical Mechanics

The physics

What process turns liquid water into a solid? As the temperature drops, a material passes from a regime dominated by the drive to maximize entropy to one dominated by the drive to minimize energy. In transitions like liquid-solid, the energy arises from atom-atom forces. This problem falls into a category called first-order phase transitions, which are difficult to study.

Instead of looking at the liquid-solid problem to understand phase transitions, we will look at a magnetic material undergoing a second-order phase transition. In our approximation of the problem, the atoms sit at fixed positions on a lattice and interact with their neighbors according to the following Hamiltonian, which is just the energy:
{\cal H} = -J \sum_{<i,j>} S^z_i S_j^z
In this Ising model, only nearest neighbors interact, and each spin variable points solely in the $\pm z$ direction.

At a given temperature $T$, with inverse temperature $\beta = 1/T$ and $k_B = 1$, the occupancy of a given configuration $c_i$ follows the Maxwell-Boltzmann probability distribution,
P(c_i)=\frac{\mathrm{e}^{-\beta E(c_i)}}{\sum\limits_j \mathrm{e}^{-\beta E(c_j)}}
We need to determine observables given this probability distribution.
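Before the numerics, a tiny concrete case may help: for just two coupled Ising spins we can enumerate all four configurations and compute their Boltzmann probabilities exactly. This is a toy sketch, not part of the simulation below.

```julia
# Exact Boltzmann distribution for two spins with H = -J*s1*s2 at inverse
# temperature beta: enumerate the four configurations and normalize by Z.
function boltzmann_probs(beta, J=1.0)
    configs = [(s1, s2) for s1 in (-1, 1), s2 in (-1, 1)]
    E = [-J * s1 * s2 for (s1, s2) in configs]
    w = [exp(-beta * e) for e in E]   # unnormalized weights e^{-beta*E}
    return w / sum(w)                 # P(c_i) = e^{-beta*E_i} / Z
end

P = boltzmann_probs(0.5)   # aligned configurations are more probable
```

For $N$ spins this exact enumeration costs $2^N$ evaluations, which is exactly why we need Monte Carlo sampling instead.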

The Numerics

In the post on the Markov Chain, each state was a location on our grid. Now our state is one configuration of all our $N$ spins. That means for an Ising spin we have $2^N$ different configurations. Yeah… We aren’t going to be exploring all of those.

So how do we get from one state to the next?

First, we choose a spin that we could flip. This potential spin is chosen randomly for convenience’s sake.

Then we use the Metropolis-Hastings algorithm to decide whether or not to flip the spin and generate a new configuration.

We split the solution into two cases based on $\alpha = \frac{\pi_i}{\pi_j}= \frac{P(c_i)}{P(c_j)}$:

  • If $\alpha \geq 1$, always accept the transition: $p_{i \to j} = 1$
  • If $\alpha < 1$, accept the transition with probability $\alpha$: $p_{i \to j} = \alpha$
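The two cases above collapse into a single rule: accept with probability $\min(1, \alpha)$. A minimal sketch, assuming $\alpha$ has already been computed:

```julia
# Metropolis acceptance: always take uphill-in-probability moves (alpha >= 1),
# otherwise accept with probability alpha.
metropolis_accept(alpha) = alpha >= 1 || rand() < alpha

# Sanity check: for alpha = 0.3 the long-run acceptance rate should be ~0.3
rate = count(k -> metropolis_accept(0.3), 1:100000) / 100000
```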

Does this obey detailed balance?

From the Monte Carlo Markov Chain post, we take our equation for detailed balance,

\frac{p_{i \to j}}{ p_{j \to i}} = \frac{\pi_i}{\pi_j}

and the rule above satisfies it: if $\pi_i > \pi_j$ then $p_{i \to j} = 1$ while $p_{j \to i} = \pi_j / \pi_i$, so their ratio is $\pi_i / \pi_j$; the case $\pi_i < \pi_j$ works the same way with the roles reversed.

How about ergodicity? We can reach any configuration by flipping spins.

For the Ising Ferromagnet, our $\alpha$ is
\alpha= \frac{Z}{Z} \frac{\mathrm{e}^{-\beta E_i}}{\mathrm{e}^{-\beta E_j}}
= \mathrm{e}^{\beta \left(E_j - E_i \right)}
which is simply a function of difference in energy between two states. Therefore we don’t need to know the absolute energy at each point in time, just how the spin flip changes its local environment.
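To make that locality concrete, here is a small sketch of the energy cost of a single flip. The `spins` vector and `neighbors` index list are hypothetical stand-ins for the lattice class used later in this post.

```julia
# Energy change from flipping spin i: the local energy is -J*s_i*sum(neighbors),
# and flipping s_i negates it, so Delta E = 2*J*s_i*(sum of neighboring spins).
function flip_cost(spins, neighbors, i, J=1.0)
    return 2 * J * spins[i] * sum([spins[j] for j in neighbors])
end

flip_cost([1, 1, 1], [1, 3], 2)   # flipping the middle of an aligned chain costs 4J
```

Only the neighbors of site $i$ enter, so each Metropolis step is $O(1)$ regardless of lattice size.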

Tunable Parameters

I will say a little bit about the tunable parameters here, but Monte Carlo simulations are as much an art as a science, so I leave it up to you to play with the numbers and build intuition for what works.


On the left, I am performing a low-temperature search of a couch store. I found a minimum, and even though I got stuck there, I thoroughly got to know what that minimum had to offer. On the right, I'm performing a high-temperature search of the Alexander Hills, Death Valley, California, in pursuit of Neoproterozoic carbonates and diamictites. I needed to be able to explore a lot more of the environment.

While in a physical system like a magnet temperature has a physical meaning, we can create other situations, like optimization or approximating a probability distribution, where we use temperature to describe how stuck we are in a particular minimum. This intuition also plays a role in interpreting the results of our simulation.
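As a toy illustration of that optimization use of temperature, here is a minimal simulated-annealing sketch. The objective function and cooling schedule are hypothetical, chosen only to show the idea; it is not part of the Ising simulation below.

```julia
# Minimal simulated annealing: temperature (1/beta) controls how willing we
# are to leave the current minimum; cooling makes us "stuck" on purpose.
function anneal(f, x0; beta0=0.1, betafactor=1.05, nsteps=1000)
    x, fx = x0, f(x0)
    beta = beta0
    for step in 1:nsteps
        xn = x + randn()            # propose a nearby point
        fn = f(xn)
        # Metropolis rule with energy = f: always accept downhill moves,
        # accept uphill moves with probability e^{-beta*(fn - fx)}
        if fn < fx || rand() < exp(-beta * (fn - fx))
            x, fx = xn, fn
        end
        beta *= betafactor          # cool down
    end
    return x, fx
end

xmin, fmin = anneal(x -> (x - 3)^2, 0.0)   # should end near the minimum at 3
```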

Temperature also brings complications when we study critical phenomena, i.e. phase transitions. Near the critical temperature, computations get much more difficult; I will elaborate in a later post.

Size of Lattice

  • A larger lattice gives more precise results.
  • A larger lattice takes more memory and time.
  • Finite-size effects, to be discussed later, display some interesting physics.

Coupling Constant J

Usually normalized out anyway…

Number of time steps

Each Monte Carlo time step is one complete sweep of the lattice, $N$ random flip attempts. The more time steps, the more accurate the results, but the longer you have to wait.

When to Measure

Though a time step does include $N$ spin-flip attempts, that doesn't mean we have actually moved to a truly new configuration. If we want to randomly sample our configuration space, we need to wait a bit longer between samples. Too short a time, and our measurements are correlated and not accurate. Too long, and we are wasting computations. In a later post, I will look at the autocorrelation time, which is a good way to figure out whether your measurement interval is good.
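A naive sketch of such an autocorrelation estimate, applied to a series of measurements such as the magnetizations collected below:

```julia
# Naive normalized autocorrelation of a measurement series at a given lag;
# values near zero suggest the measurements are effectively independent.
function autocor_naive(x, lag)
    n = length(x)
    m = sum(x) / n
    num = sum((x[1:n-lag] .- m) .* (x[1+lag:n] .- m))
    den = sum((x .- m) .^ 2)
    return num / den
end
```

A lag at which this has decayed close to zero is a reasonable choice for the measurement interval nskip.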

Other potential Options

Some less important things: periodic versus open boundary conditions, shape and symmetries of simulation cell.

using PyPlot;
using Lattices;

Instead of going into calculating all the lattice parameters again, we will use a class I define in the file Lattices.jl. This class contains:

Lattice Types

  • Chain
  • Square
  • Honeycomb

You can edit the file to make your own types.

Once a lattice is created, it contains the following members:

  • name, a string
  • l, length in number of unit cells
  • dim, dimension of lattice
  • a, array containing the basis vectors by which positions are generated
  • unit, array of positions inside a single unit
  • N, number of total sites
  • X, array of positions
  • nnei, number of nearest neighbors
  • neigh, Array of nearest neighbors [i][j], where i is site and j is 1:nnei

Today, I will just look at the square lattice, since it exhibits the standard phase-transition properties. Some of the lattices I have shown (kagome, triangular, …) are special frustrated lattices, and thus behave very weirdly in this situation.

## Define l here
# (a lattice object `lt` is assumed to have been constructed from Lattices.jl,
#  e.g. a square lattice of side l)

S=ones(Int8,l,l);  #Our spins, all up initially

# The energy contribution of just one site
function dE(i::Int)
    Eii=0;
    for j in 1:lt.nnei
        Eii+=-J*S[i]*S[lt.neigh[i][j]];  # we are computing -J sz_i sz_j for one i
    end
    return Eii;
end

# The energy of the entire lattice
function E()
    Evar=0;
    for k in 1:lt.N
        Evar+=dE(k);
    end
    return Evar/2;  # each bond was counted twice
end

# The magnetization of the entire lattice
function M()
    Mvar=0;
    for k in 1:lt.N
        Mvar+=S[k];
    end
    return Mvar;
end
"defined functions"

Adjustable Parameters

I have set up the simulation so that you can do two different things. First, you can set video=true and t to a small value; then in a new window you will see what the configuration looks like each time you measure.

Or you can set video=false and t to a large value, and actually measure the statistics of the system over a bunch of configurations.

beta=0.7;    # inverse temperature
t=10000;     # number of Monte Carlo sweeps
video=false; # set true (with small t) to watch the configurations
nskip=10;    # don't measure every sweep= better decorrelation
"Parameters set"
nmeas=Int64(t/nskip); # how many times we will measure
Ma=Array{Int32}(nmeas); # our magnetization measurements
Ea=Array{Int32}(nmeas); # our energy measurements
tm=1; #Our measurement time step
for ti in 1:t
    for j in 1:lt.N
        i = rand(1:lt.N); #Choosing a random site
        de = dE(i);       # local energy; flipping changes the total energy by -2*de
        if(de>0 || rand()<exp(2*beta*de) )
            S[i]=-S[i]; #Switch the sign
        end
    end
    if mod(ti,nskip)==0   # time to measure
        Ma[tm]=M();
        Ea[tm]=E();
        if video==true
            pcolor(S);    # PyPlot display of the current configuration
            magnow=Ma[tm]/lt.N;
            title("t: $ti  M: $magnow ")
        end
        tm+=1;
    end
end
Mave=mean(Ma/lt.N);  # average magnetization per site
Mstd=std(Ma/lt.N);
Mave, Mstd
plot(nskip*(1:nmeas),Ma/lt.N)
title("Magnetization versus Time, Beta=$beta")
xlabel("Monte Carlo Steps")
ylabel("Average Magnetization")
annotate(" White Noise = Well Mixing",
    xy=[500, Mave ],
    xytext=[1000, Mave-.0125])
x=linspace(minimum(Ma/lt.N),maximum(Ma/lt.N),100);
gaussian=exp(-(x-Mave).^2/(2*Mstd^2))/(Mstd*sqrt(2*pi));
plt[:hist](Ma/lt.N,bins=30,normed=true,label="Samples Normed");
plot(x,gaussian,linewidth=5,label="Gaussian Fit")
title("Magnetization Sample Distribution beta=$beta")
ylabel("Normed Counts")
legend(loc="upper right")
Eave=mean(Ea/lt.N);  # average energy per site
Estd=std(Ea/lt.N);
plot(nskip*(1:nmeas),Ea/lt.N)
title("Energy versus Time, Beta=$beta")
xlabel("Monte Carlo Steps")
ylabel("Average Energy")
annotate(" White Noise = Well Mixing",
    xy=[500, Eave ],
    xytext=[1000, Eave-.03])
x=linspace(minimum(Ea/lt.N),maximum(Ea/lt.N),100);
gaussian=exp(-(x-Eave).^2/(2*Estd^2))/(Estd*sqrt(2*pi));
plt[:hist](Ea/lt.N,bins=30,normed=true,label="Samples Normed");
plot(x,gaussian,linewidth=5,label="Gaussian Fit")
title("Energy Histogram, beta=$beta")
ylabel("Normed Counts")
legend(loc="upper right")

Example Results

So here are some example results I got.


States of a paramagnet at $\beta=0.2$:

[Figure: sample spin configurations]
[Figure: magnetization samples over time. A properly tuned simulation should show white noise, without correlations or getting stuck in particular regions.]
[Figure: magnetization histogram]
[Figure: energy histogram]

States of a paramagnet at $\beta=0.7$:

[Figure: histogram of magnetization at $\beta=0.7$]
[Figure: histogram of energy at $\beta=0.7$]

To be covered later:

This is an extremely rich problem. Since this post seems long enough already, I'll leave these as exercises for now, or you can wait until I hold your hand through them.

  • Plot magnetization and energy as a function of temperature
  • What’s the transition temperature?
  • How does the dispersion change as a function of temperature? Use that to calculate specific heat and susceptibility
  • How do results change with system size?
  • Change dimension?
  • Put it on a different lattice. BE CAREFUL of lattices like triangular, checkerboard, pyrochlore, …
  • Ferromagnetic versus Antiferromagnetic coupling
  • Autocorrelation function
  • Structure Factor, Fourier Transform

Finalizing Your Julia Package: Documentation, Testing, Coverage, and Publishing

By: Christopher Rackauckas

Re-posted from:

In this tutorial we will go through the steps of finalizing a Julia package. At this point you have some functionality you wish to share with the world… what do you do? You want to have documentation, code testing on each commit (on all the major OSs), a nice badge that shows how much of the code is tested, and registration in METADATA so that people can install your package just by typing Pkg.add("PkgName"). How do you do all of this?

Note: at any time, feel free to check out my package repository DifferentialEquations.jl, which should be a working example.

Generate the Package and Get it on Github

First you will want to generate your package and get it into a GitHub repository. Make sure you have a GitHub account, and then set up the environment variables in the git shell:

$ git config --global "FULL NAME"
$ git config --global "EMAIL"
$ git config --global github.user "USERNAME"

Now you can generate your package via

julia> Pkg.generate("PkgName", "MIT")
For the license, I tend to use MIT since it is quite permissive. This will tell you where your package was generated (usually in your Julia library folder). Take your function files and paste them into the /src folder in the package. In your /src folder, you will have a file PkgName.jl. This file defines your module. Generally you will want it to look something like this:

module PkgName
#Import your packages
using Pkg1, Pkg2, Pkg3
import Base: func1 #Any function you add dispatches to needs to be imported directly
abstract AbType #Define abstract types before the types they abstract!
include("functionsForPackage.jl") #Include all the functionality
export coolfunc, coolfunc2 #Export the functions you want users to use
end # module

Now, on your own computer, try `using PkgName` and try your functions out. Once this is all working, you have your package working locally.

Write the Documentation

For documentation, it’s recommended to use Documenter.jl. The other packages, Docile.jl and Lexicon.jl, have been deprecated in favor of Documenter.jl. Getting your documentation to generate starts with writing docstrings. Docstrings are strings in your source code which are used for generating documentation. It is best to use docstrings because these will also show up in the REPL, i.e. if someone types ?coolfunc, your docstrings will show here.

To do this, you just add strings before your function definitions. For example,

"Defines a cool function. Returns some stuff"
function coolFunc()
Defines an even cooler function. ``LaTeX``.
### Returns
 * Markdown works in here
function coolFunc2()

Once you have your docstrings together, you can use them to generate your documentation. Install Documenter.jl by cloning the repository with Pkg.clone("PkgLocation"). Make a new folder in the top directory of your package named /docs. In this directory, make a file make.jl and add the following lines to it:

using Documenter, PkgName

makedocs()

deploydocs(
    deps   = Deps.pip("mkdocs", "python-markdown-math"),
    repo   = "",
    julia  = "0.4.5",
    osname = "linux")

Don’t forget to change PkgName and repo to match your project. Now make a folder in this directory named /src (i.e. it’s /docs/src). Make a file named; this will be the index of your documentation. You’ll want to make it something like this:

# Documentation Title
Some text describing the package.
## Subtitle
More text
## Tutorials
```@contents
Pages = [
    # the three tutorial pages go here
]
Depth = 2
```
## Another Section
```@contents
Pages = [
    # the three pages of this section go here
]
Depth = 2
```
## Index
```@index
```

At the top we explain the page. The next part adds 3 pages to a “Tutorial” section of the documentation, and then 3 pages to an “Another Section” section. Now inside /docs/src make the directories tutorial and sec2, and add the appropriate pages to them. These are the Markdown files that the documentation will use to build the pages.

To build a page, you can do something like the following:

# Title
Some text describing this section
## Subtitle
```@docs
coolFunc
coolFunc2
```

This builds the page with your added text and titles at the top, and then inserts your docstrings below. Thus most of the information should live in your docstrings, with quick introductions before each page. So if your docstrings are fairly complete, this step will be quick.

Build the Documentation

Now we will build the documentation. cd into the /docs folder and run make.jl with julia. If that’s successful, you will have a folder /docs/build containing Markdown files in which the docstrings have been spliced in. To turn this into a documentation website, first install mkdocs. Then add the following file to your /docs folder as mkdocs.yml:

site_name:           PkgName
site_description:    Description
site_author:         You
theme:               readthedocs

markdown_extensions:
  - codehilite
  - extra
  - tables
  - fenced_code
  - mdx_math # For LaTeX

extra_css:
  - assets/Documenter.css

extra_javascript:
  - assets/mathjaxhelper.js

docs_dir: 'build'

pages:
- Introduction:
- Tutorial:
  - Title 1: tutorials/
  - Title 2: tutorials/
  - Title 3: tutorials/
- Another Section:
  - Title 1: sec2/
  - Title 2: sec2/
  - Title 3: sec2/

Now to build the webpage, cd into /docs and run `mkdocs build`, then `mkdocs serve`. Go to the local web server address it prints and check out your documentation.


Add Tests

Now that we are documented, let's add testing. In the top of your package directory, make a folder /test. In there, make a file runtests.jl. You will want it to say something like this:

#!/usr/bin/env julia
#Start Test Script
using PkgName
using Base.Test
# Run tests
println("Test 1")
@time @test include("test1.jl")
println("Test 2")
@time @test include("test2.jl")

This will run the files /test/test1.jl and /test/test2.jl, and the tests pass if each file returns `true`. So make these test files use some of your package functionality, and at the bottom of each file return a boolean saying whether the tests passed. For example, you can check that some number is close to what it should be, or you can just put `true` at the bottom of the file. Now use

julia> Pkg.test("PkgName")
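To make this concrete, a hypothetical /test/test1.jl might look like the following; the computation here is a stand-in for a real call into your package.

```julia
# Hypothetical contents of /test/test1.jl: exercise some functionality and
# finish with a boolean, which is what `@test include("test1.jl")` checks.
result = sum(1:10)        # stand-in for a real call into PkgName
isapprox(result, 55)      # the file's last expression is the returned boolean
```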


And make sure your tests pass. Now set up accounts at Travis CI (for Linux and OSX testing) and AppVeyor (for Windows testing). Modify .travis.yml to be like the following:

# Documentation:
language: julia
os:
  - linux
  - osx
julia:
  - nightly
  - release
  - 0.4.5
matrix:
  allow_failures:
    - julia: nightly
notifications:
  email: false
script:
#  - if [[ -a .git/shallow ]]; then git fetch --unshallow; fi
  - julia -e 'Pkg.init(); Pkg.clone("")'
  - julia -e 'Pkg.test("PkgName",coverage=true)'
after_success:
  - julia -e 'Pkg.clone("")'
  - julia -e 'cd(Pkg.dir("PkgName")); include(joinpath("docs", "make.jl"))'
  - julia -e 'cd(Pkg.dir("PkgName")); Pkg.add("Coverage"); using Coverage; Codecov.submit(Codecov.process_folder())'
  - julia -e 'cd(Pkg.dir("PkgName")); Pkg.add("Coverage"); using Coverage; Coveralls.submit(process_folder())'

If you are using matplotlib/PyPlot you will want to add

ENV["PYTHON"]="";"PyCall"); using PyPlot;

before Pkg.test("PkgName",coverage=true). Now edit your appveyor.yml to be like the following:

environment:
  matrix:
  - JULIAVERSION: "julialang/bin/winnt/x86/0.4/julia-0.4-latest-win32.exe"
  - JULIAVERSION: "julialang/bin/winnt/x64/0.4/julia-0.4-latest-win64.exe"
  - JULIAVERSION: "julianightlies/bin/winnt/x86/julia-latest-win32.exe"
  - JULIAVERSION: "julianightlies/bin/winnt/x64/julia-latest-win64.exe"

branches:
  only:
    - master
    - /release-.*/

notifications:
  - provider: Email
    on_build_success: false
    on_build_failure: false
    on_build_status_changed: false

install:
# Download most recent Julia Windows binary
  - ps: (new-object net.webclient).DownloadFile(
        $("" + $env:JULIAVERSION),
        "C:\projects\julia-binary.exe")
  - set PATH=C:\Miniconda3;C:\Miniconda3\Scripts;%PATH%
# Run installer silently, output to C:\projects\julia
  - C:\projects\julia-binary.exe /S /D=C:\projects\julia

build_script:
# Need to convert from shallow to complete for Pkg.clone to work
  - IF EXIST .git\shallow (git fetch --unshallow)
  - C:\projects\julia\bin\julia -e "versioninfo();
      Pkg.clone(pwd(), \"PkgName\");\"PkgName\")"

test_script:
  - C:\projects\julia\bin\julia --check-bounds=yes -e "Pkg.test(\"PkgName\")"

Add Coverage

I was sly and already added all of the coverage parts in there! This is done by the commands which add Coverage.jl, the keyword coverage=true in Pkg.test, and then the specific functions for sending the coverage data to the appropriate services. Set up accounts on Codecov and Coveralls.

Fix Up Readme

Now update your readme to match your documentation, and add the badges for testing, coverage, and docs from the appropriate websites.

Update Your Repository

Now push everything into your Git repository. `cd` into your package directory and using the command line do:

git add --all
git commit -m "Commit message"
git push origin master

or something of the like. On Windows you can use their GUI. Check your repository and make sure everything is there. Wait for your tests to pass.

Publish Your Package

Now publish your package. This step is optional, but if you do it then people can add your package just by running `Pkg.add("PkgName")`. To do this, run the following:

julia> Pkg.register("PkgName")
julia> Pkg.tag("PkgName")
julia> Pkg.publish()
This will give you a url. Put this into your browser and write a message with your pull request and submit it. If all goes well, they will merge the changes and your package will be registered with METADATA.jl.

That’s it! Now every time you commit, your package will automatically be tested, coverage will be calculated, and documentation will be updated. Note that for people to get the changes you made to your code, they will need to run `Pkg.checkout("PkgName")` unless you tag and publish a new version.

The post Finalizing Your Julia Package: Documentation, Testing, Coverage, and Publishing appeared first on Stochastic Lifestyle.