STAC-A2 Benchmark

By: Julia Computing, Inc.

Re-posted from: http://juliacomputing.com/blog/2016/05/25/stac-a2-benchmark.html

Julia Computing has recently completed an initial implementation of the STAC-A2 benchmark. STAC-A2 is an industry standard benchmark suite for testing compute-intensive analytic workloads for options pricing and risk management. The primary application defined in the STAC-A2 benchmark consists of the computation of various “Greeks”, derivatives of the calculus variety, for a particular type of exotic option, derivatives of the financial variety. The main benchmark includes a Monte Carlo simulation of the underlying security using the Andersen QE formulation of the Heston stochastic volatility model followed by American-exercise pricing of the option using the Longstaff and Schwartz method. The particular option being evaluated is a lookback, best-of option pricing a basket of securities. The benchmark tests scaling of an implementation by discretizing the simulation over a cube of time-steps, Monte Carlo paths, and number of assets in the current basket.
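The computational core of that workload is simulating many paths of the underlying assets. As a rough illustration only, here is a plain Euler-Maruyama discretization of the Heston dynamics for a single asset in Julia; it is not the Andersen QE scheme (nor the Longstaff and Schwartz pricing step) that the benchmark actually specifies, and all parameter values in the usage line are made up.

# Illustrative Euler-Maruyama simulation of Heston-model terminal prices.
# NOT the Andersen QE discretization used in STAC-A2; a sketch for intuition only.
function heston_paths(S0, v0, kappa, theta, xi, rho, r, T, nsteps, npaths)
    dt = T/nsteps
    S = fill(S0, npaths)                         # asset price along each path
    v = fill(v0, npaths)                         # instantaneous variance along each path
    for t in 1:nsteps, p in 1:npaths
        z1 = randn()
        z2 = rho*z1 + sqrt(1 - rho^2)*randn()    # correlated Brownian increments
        vp = max(v[p], 0.0)                      # crude truncation keeps the variance usable
        S[p] *= exp((r - 0.5*vp)*dt + sqrt(vp*dt)*z1)
        v[p] += kappa*(theta - vp)*dt + xi*sqrt(vp*dt)*z2
    end
    return S
end

# e.g. 10,000 one-year paths with 250 time steps (illustrative parameters)
heston_paths(100.0, 0.04, 1.5, 0.04, 0.3, -0.7, 0.02, 1.0, 250, 10000)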

The initial Julia implementation of the STAC-A2 benchmark is a single-process, single-threaded implementation using Julia 0.4.5 on an Amazon Web Services (AWS) Elastic Compute Cloud (EC2) r3.2xlarge server. It mirrors the sequential algorithms of the example STAC-A2 reference model. The unaudited report, configuration disclosure, and code for the current Julia implementation are available to STAC members with a premium subscription.

Monte Carlo Ferromagnet

By: Metals, Magnets, and Miscellaneous Materials

Re-posted from: https://albi3ro.github.io/M4/prerequisites/Monte-Carlo-Ferromagnet.html

Post Prerequisites: Monte Carlo Calculation of pi, Monte Carlo Markov Chain

Prerequisites: Statistical Mechanics

The physics

What process turns liquid water into a solid? As the temperature drops, a material moves from a regime where maximizing entropy dominates to one where minimizing energy dominates. In transitions like liquid to solid, the energy arises from atom-atom forces. This problem falls into a category called first-order phase transitions, which are difficult to study.

Instead of looking at the liquid-solid problem to understand phase transitions, we will look at a magnetic material undergoing a second order phase transition. In our approximation of the problem, the atoms exist at fixed positions on a lattice, and interact with their neighbor according to the following Hamiltonian,
\begin{equation}
{\cal H} = -J \sum_{<i,j>} S^z_i S_j^z
\end{equation}
which is essentially just the energy. In this Ising model only nearest neighbors interact, and each spin variable points solely in the $\pm z$ direction.

At a given temperature $T$, with inverse temperature $\beta=1/T$ and $k_B=1$, the probability of a given configuration $c_i$ follows the Boltzmann distribution,
\begin{equation}
P(c_i)=\frac{\mathrm{e}^{-\beta E(c_i)}}{\sum\limits_j \mathrm{e}^{-\beta E(c_j)}}
\end{equation}
We need to determine observables given this probability distribution.
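As a toy illustration of what that means, here is a brute-force thermal average over an explicitly enumerated list of configuration energies in Julia; the energies in the usage line are those of two coupled Ising spins with $J=1$. This only works for tiny systems, which is the motivation for the sampling approach below.

# Exact Boltzmann average of the energy over an enumerated list of configurations.
# Only feasible for tiny systems; shown purely for illustration.
function exact_average(energies, beta)
    weights = [exp(-beta*E) for E in energies]   # unnormalized Boltzmann weights
    Z = sum(weights)                             # partition function
    return sum(energies .* weights) / Z          # thermal average <E>
end

# the four configurations of two coupled Ising spins with J = 1
exact_average([-1.0, 1.0, 1.0, -1.0], 0.5)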

The Numerics

In the post on the Markov Chain, each state was a location on our grid. Now our state is one configuration of all our $N$ spins. That means for $N$ Ising spins we have $2^N$ different configurations. Yeah… We aren't going to be exploring all of those.

So how do we get from one state to the next?

First, we choose a spin that we could flip. This potential spin is chosen randomly for convenience’s sake.

Then we use the Metropolis-Hastings Algorithm to decide whether or not to flip the spin and generate a new configuration.

We split the decision into two cases based on $\alpha = \frac{\pi_i}{\pi_j}= \frac{P(c_i)}{P(c_j)}$, where $c_j$ is the current configuration and $c_i$ is the proposed one:

  • If $\alpha \geq 1$, always accept the transition: $p_{j \to i} = 1$
  • If $\alpha < 1$, accept the transition with probability $\alpha$: $p_{j \to i} = \alpha$
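A minimal sketch of this accept/reject step in Julia is below; the ferromagnet code later in this post computes $\alpha$ directly from the local energy change rather than calling a helper like this.

# Metropolis accept/reject: accept a proposed move with probability min(1, alpha).
metropolis_accept(alpha) = alpha >= 1 || rand() < alpha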

Does this obey detailed balance?

From the Monte Carlo Markov Chain post, we take our equation for detailed balance

\begin{equation}
\frac{p_{j \to i}}{ p_{i \to j}} = \frac{\pi_i}{\pi_j}
\end{equation}
The acceptance rule above satisfies this: if $\alpha \geq 1$ the ratio is $1/(1/\alpha)=\alpha$, and if $\alpha < 1$ it is $\alpha/1=\alpha$, which equals $\pi_i/\pi_j$ in both cases.

How about ergodicity? Any configuration can be reached from any other by some sequence of single spin flips, so the chain is ergodic.

For the Ising Ferromagnet, our $\alpha$ is
\begin{equation}
\alpha= \frac{Z}{Z} \frac{\mathrm{e}^{-\beta E_i}}{\mathrm{e}^{-\beta E_j}}
= \mathrm{e}^{\beta \left(E_j - E_i \right)}
\end{equation}
which is simply a function of the difference in energy between the two states. Therefore we don't need to know the absolute energy at each point in time, just how a single spin flip changes its local environment.
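Concretely, flipping a single spin $S^z_n$ only changes the bonds attached to site $n$, so the energy difference between the proposed and current configurations follows directly from the Hamiltonian:
\begin{equation}
\Delta E = 2 J S^z_n \sum_{m \in \mathrm{nn}(n)} S^z_m
\end{equation}
and the acceptance ratio is $\alpha = \mathrm{e}^{-\beta \Delta E}$. The dE function in the code below computes the local energy $-J S^z_n \sum_m S^z_m$ before the flip, so $\Delta E = -2\,\mathrm{dE}(n)$; this is exactly what the accept/reject condition in the main loop uses.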

Tunable Parameters

I will say a little bit about the tunable parameters here, but Monte Carlo simulations are as much an art as a science, so I leave it up to you to play with the numbers and build intuition for what works.

Temperature

On the left I am performing a low temperature search of a couch store. I found a minimum, and even though I got stuck there, I thoroughly got to know what that minimum had to offer. On the right, though, I'm performing a high temperature search of the Alexander Hills, Death Valley, California in pursuit of Neoproterozoic carbonates and diamictites. I needed to be able to explore a lot more of the environment.

While temperature has a physical meaning in a physical system like a magnet, we can set up other kinds of problems, like optimization or approximating a probability distribution, where temperature describes how stuck we are in a particular minimum. This intuition also plays a role in interpreting the results of our simulation.

Temperature also introduces complications when we study critical phenomena, i.e. phase transitions: near the critical temperature, computations become much more difficult. I will elaborate in a later post.

Size of Lattice

  • Larger lattice gives more precise results.
  • Larger lattice takes more memory and time.
  • Finite-size effects, to be discussed later, display some interesting physics.

Coupling Constant J

Usually normalized out anyway…

Number of time steps

Each Monte Carlo time step is one complete sweep of the lattice, $N$ random flip attempts. The more time steps, the more accurate the results, but the longer you have to wait.

When to Measure

Though a time step does include $N$ spin flip attempts, that doesn't mean we have actually moved to a truly new configuration. If we want to randomly sample our configuration space, we need to wait a bit longer between samples. Too short a time, and our measurements are correlated and not accurate. Too long, and we are wasting computations. In a later post, I will look at the autocorrelation time, which is a good way to figure out whether your measurement interval is adequate.
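As a preview (just a sketch, not the careful treatment promised for that later post), the normalized autocorrelation of a measurement series, such as the magnetization samples Ma collected below, can be estimated like this; if it decays to roughly zero within nskip sweeps, successive measurements are effectively independent.

# Normalized autocorrelation of a series of measurements at lags 1..maxlag.
function autocorrelation(x, maxlag)
    mu = mean(x)
    xm = [xi - mu for xi in x]                   # fluctuations about the mean
    c0 = sum(xm.^2)                              # zero-lag value, used for normalization
    return [sum(xm[1:end-k] .* xm[1+k:end])/c0 for k in 1:maxlag]
end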

Other potential Options

Some less important things: periodic versus open boundary conditions, shape and symmetries of simulation cell.

using PyPlot;
push!(LOAD_PATH,".")
using Lattices;

Instead of calculating all the lattice parameters again, we will use a type I define in the file Lattices.jl. This file provides:

Lattice Types

  • Chain
  • Square
  • Honeycomb

You can edit the file to make your own types.

Once a lattice is created, it contains the following members (a rough sketch of such a type appears after this list):

  • name, a string
  • l, length in number of unit cells
  • dim, dimension of lattice
  • a, array containing the basis vectors by which positions are generated
  • unit, array of positions inside a single unit
  • N, number of total sites
  • X, array of positions
  • nnei, number of nearest neighbors
  • neigh, Array of nearest neighbors [i][j], where i is site and j is 1:nnei
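For concreteness, here is a rough sketch of what such a composite type might look like, using Julia 0.4/0.5-era syntax; the actual definition in Lattices.jl may well differ in its details.

type Lattice
    name::AbstractString     # e.g. "Square"
    l::Int                   # length in number of unit cells
    dim::Int                 # dimension of the lattice
    a::Array{Float64,2}      # basis vectors used to generate positions
    unit::Array{Float64,2}   # positions inside a single unit cell
    N::Int                   # total number of sites
    X::Array{Float64,2}      # positions of all sites
    nnei::Int                # number of nearest neighbors
    neigh::Array{Int,2}      # neigh[i,j] is the j-th neighbor of site i
end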

Today, I will just look at the square lattice, since it already exhibits the standard phase transition properties. Some of the lattices I have shown (kagome, triangular, …) are special frustrated lattices, and thus will behave very weirdly in this situation.

## Define l here
l=50;

lt=MakeLattice("Square",l);
S=ones(Int8,l,l);  #Our spins
dt=1/(lt.N);
# The energy contribution of just one site
function dE(i::Int)
    Eii=0;
    for j in 1:lt.nnei
        Eii+=S[lt.neigh[i,j]];
    end
    Eii*=-J*S[i];  # we are computing -J sz_i sz_j summed over the neighbors of site i
    return Eii;
end
# The energy of the entire lattice
function E()
    Evar=0;
    for k in 1:lt.N
        Evar+=.5*dE(k);  # each bond is shared by two sites, hence the factor of 1/2
    end
    return Evar;
end
# The magnetization of the entire lattice
function M()
    Mvar=0;
    for k in 1:lt.N
        Mvar+=S[k];
    end
    return Mvar;
end
"defined functions"

Adjustable Parameters

I have set up the simulation so that you can do two different things. For one, you can set video=true and t to a small value. Then, in a new window, you see what the configuration looks like each time you measure.

Or you can set video=false and t to a large value, and actually measure the statistics of the system over a large number of configurations.

beta=.7;
J=1;
t=100000;
video=false;
nskip=10;   # don't measure every sweep, for better decorrelation
"Parameters set"
nmeas=Int64(t/nskip); # how many times we will measure
Ma=Array{Int32}(nmeas); # our magnetization measurements
Ea=Array{Int32}(nmeas); # our energy measurements
"done"
tm=1; #Our measurement time step
pygui(true)
for ti in 1:t
    for j in 1:lt.N
        i = rand(1:lt.N); #Choosing a random site
        de=dE(i);
        if(de>0 || rand()<exp(2*beta*de) )
            S[i]=-S[i]; #Switch the sign
        end
    end
    if mod(ti,nskip)==0
        Ma[tm]=M();
        Ea[tm]=E();
        tm+=1;

        if video==true
            pygui(true)
            pcolor(S,cmap="winter")
            magnow=round(M()/lt.N,3)
            title("t: $ti  M: $magnow ")
            draw()
            sleep(.5)
            #nm=dec(ti,4)
            #savefig("Magnetimages2/pic_l50_bp7_$nm.png")
        end
    end
end
Mave=mean(Ma/lt.N);
Mstd=std(Ma/lt.N);
Eave=mean(Ea/lt.N);
Estd=std(Ea/lt.N);
Mave, Mstd
pygui(false)
title("Magnetization versus Time, Beta=$beta")
xlabel("Monte Carlo Steps")
ylabel("Average Magnetization")
plot(collect(1:10:5000),Ma[1:500]/lt.N)
annotate(" White Noise = Well Mixing",
xy=[500, Mave ],
xytext=[1000, Mave-.0125],
    xycoords="data",
    arrowprops=Dict("facecolor"=>"green"))
pygui(false)
plt[:hist](Ma/lt.N,bins=30,normed=true,label="Samples Normed");
x=collect(Mave-Mstd*3:.001:Mave+Mstd*3)
gaussian=1/(Mstd*sqrt(2*pi))*exp(-(x-Mave).^2/(2*Mstd^2));
plot(x,gaussian,linewidth=5,label="Gaussian Fit")
title("Magnetization Sample Distribution beta=$beta")
xlabel("Mangetization")
ylabel("Normed Counts")
legend(loc="upper right")
#savefig("Ferromagnet/maghist_bp3.png")
pygui(false)
title("Energy versus Time, Beta=$beta")
xlabel("Monte Carlo Steps")
ylabel("Average Energy")
plot(collect(1:10:5000),Ea[1:500]/lt.N)
annotate(" White Noise = Well Mixing",
    xy=[500, Eave ],
xytext=[1000, Eave-.03],
   xycoords="data",
   arrowprops=Dict("facecolor"=>"green"))
pygui(false)
plt[:hist](Ea/lt.N,bins=30,normed=true,label="Samples Normed");
x=collect(Eave-3*Estd:.001:Eave+3*Estd)
gaussian=1/(Estd*sqrt(2*pi))*exp(-(x-Eave).^2/(2*Estd^2));
plot(x,gaussian,linewidth=5,label="Gaussian Fit")
title("Energy Histogram, beta=$beta")
xlabel("Energy")
ylabel("Normed Counts")
#savefig("Ferromagnet/Ehist_bp3.png")

Example Results

So here are some example results I got.

Paramagnet

[Figure: states of the paramagnet at $\beta=0.2$]

Samples over Time

[Figure: paramagnet magnetization samples over time. A properly tuned simulation should show white noise, without correlations and without getting stuck in particular regions.]

Magnetization Histogram

[Figure: histogram of the paramagnet magnetization samples]

Energy Histogram

[Figure: histogram of the paramagnet energy samples]

Magnet

[Figure: states of the magnet at $\beta=0.7$]

Magnetization Histogram

[Figure: histogram of the magnetization at $\beta=0.7$]

Energy Histogram

[Figure: histogram of the energy at $\beta=0.7$]

To be covered later:

This is an extremely rich problem. Since this post is already long enough, I'll leave these as exercises for you right now, or you can wait until I walk through them in later posts.

  • Plot magnetization and energy as a function of temperature
  • What’s the transition temperature?
  • How does the dispersion change as a function of temperature? Use that to calculate specific heat and susceptibility
  • How do results change with system size?
  • Change dimension?
  • Put it on a different lattice. BE CAREFUL of lattices like triangular, checkerboard, pyrochlore, …
  • Ferromagnetic versus Antiferromagnetic coupling
  • Autocorrelation function
  • Structure factor, Fourier transform

Finalizing Your Julia Package: Documentation, Testing, Coverage, and Publishing

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/finalizing-julia-package-documentation-testing-coverage-publishing/

In this tutorial we will go through the steps to finalize a Julia package. At this point you have some functionality you wish to share with the world… what do you do? You want to have documentation, code testing on every commit (on all the major OSs), a nice badge which shows how much of the code is tested, and to get the package into METADATA so that people can install it just by typing Pkg.add("PkgName"). How do you do all of this?

Note: At any time, feel free to check out my package repository DifferentialEquations.jl, which should be a working example.

Generate the Package and Get it on Github

First you will want to generate your package and get it into a GitHub repository. Make sure you have a GitHub account, and then set up the environment variables in the git shell:

$ git config --global user.name "FULL NAME"
$ git config --global user.email "EMAIL"
$ git config --global github.user "USERNAME"

Now you can generate your package via

Pkg.generate("PkgName","license")

For the license, I tend to use MIT since it is quite permissive. This will tell you where your package was generated (usually in your Julia library folder). Take your function files and paste them into the /src folder in the package. In your /src folder, you will have a file PkgName.jl. This file defines your module. Generally you will want it to look something like this:

module PkgName
 
#Import your packages
using Pkg1, Pkg2, Pkg3
import Base: func1 #Any function you add dispatches to needs to be imported directly
 
abstract AbType #Define abstract types before the types they abstract!
 
include("functionsForPackage.jl") #Include all the functionality
 
export coolfunc, coolfunc2 #Export the functions you want users to use
 
end

Now try `using PkgName` on your computer and try out your functions. Once this all works, you have your package working locally.

Write the Documentation

For documentation, it's recommended to use Documenter.jl. The other packages, Docile.jl and Lexicon.jl, have been deprecated in favor of Documenter.jl. Generating your documentation starts with writing docstrings: strings in your source code which are used for generating documentation. It is best to use docstrings because they also show up in the REPL, i.e. if someone types ?coolfunc, your docstrings will show up there.

To do this, you just add strings before your function definitions. For example,

 
"Defines a cool function. Returns some stuff"
function coolFunc()
  ...
end
 
"""
Defines an even cooler function. ``LaTeX``.
 
```math
SameAs$$LaTeX
```
 
### Returns
 * Markdown works in here
"""
function coolFunc2()
  ...
end

Once you have your docstrings together, you can use them to generate your documentation. Install Documenter.jl by cloning its repository with Pkg.clone("PkgLocation"). Make a new folder in the top directory of your package named /docs. In this directory, make a file make.jl and add the following lines to the file:

using Documenter, PkgName
 
makedocs(modules=[PkgName],
        doctest=true)
 
deploydocs(deps   = Deps.pip("mkdocs", "python-markdown-math"),
    repo = "github.com/GITHUBNAME/GITHUBREPO.git",
    julia  = "0.4.5",
    osname = "linux")

Don’t forget to change PkgName and repo to match your project. Now make a folder in this directory named /src (i.e. it’s /docs/src). Make a file named index.md. This will be the index of your documentation. You’ll want to make it something like this:

# Documentation Title
 
Some text describing the package.
 
## Subtitle
 
More text
 
## Tutorials
 
```
{contents}
Pages = [
    "tutorials/page1.md",
    "tutorials/page2.md",
    "tutorials/page3.md"
    ]
Depth = 2
```
 
## Another Section
```
{contents}
Pages = [
    "sec2/page1.md",
    "sec2/page2.md",
    "sec2/page3.md"
    ]
Depth = 2
```
 
## Index
 
```
{index}
```

At the top we explain the page. The next part adds 3 pages to a "Tutorials" section of the documentation, and then 3 pages to an "Another Section" section of the documentation. Now inside /docs/src make the directories tutorials and sec2, and add the appropriate pages page1.md, page2.md, page3.md. These are the Markdown files that the documentation will use to build the pages.

To build a page, you can do something like the following:

# Title
 
Some text describing this section
 
## Subtitle
 
```
{docs}
PkgName.coolfunc
PkgName.coolfunc2
```

This builds the page with your added text and titles at the top, and then inserts your docstrings below them. Thus most of the information should live in your docstrings, with a quick introduction at the start of each page. If your docstrings are fairly complete, this will be quick.

Build the Documentation

Now we will build the documentation. cd into the /docs folder and run `julia make.jl`. If that's successful, then you will have a folder /docs/build. This contains Markdown files in which the docstrings have been spliced in. To turn this into a documentation site, first install MkDocs. Now add the following file to your /docs folder as mkdocs.yml:

site_name:           PkgName
repo_url:            https://github.com/GITHUBUSER/PkgName
site_description:    Description
site_author:         You
theme:               readthedocs

markdown_extensions:
  - codehilite
  - extra
  - tables
  - fenced_code
  - mdx_math # For LaTeX

extra_css:
  - assets/Documenter.css

extra_javascript:
  - https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML
  - assets/mathjaxhelper.js

docs_dir: 'build'

pages:
- Introduction: index.md
- Tutorial:
  - Title 1: tutorials/page1.md
  - Title 2: tutorials/page2.md
  - Title 3: tutorials/page3.md
- Another Section:
  - Title 1: sec2/page1.md
  - Title 2: sec2/page2.md
  - Title 3: sec2/page3.md

Now to build the webpage, cd into /docs and run `mkdocs build`, and then `mkdocs serve`. Go to the local address it reports and check out your documentation.

Testing

Now that we are documented, let’s add testing. In the top of your package directory, make a folder /test. In there, make a file runtests.jl. You will want to make it say something like this:

#!/usr/bin/env julia
 
#Start Test Script
using PkgName
using Base.Test
 
# Run tests
 
tic()
println("Test 1")
@time @test include("test1.jl")
println("Test 2")
@time @test include("test2.jl")
toc()

This will run the files /test/test1.jl and /test/test2.jl, and it works if each of them returns `true`. So make these test files exercise some of your package's functionality, and make sure the last expression in each file evaluates to a boolean saying whether the tests passed or failed. For example, you can check that some number is close to what it should be, or you can simply put `true` at the bottom of the file.
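For example, a hypothetical /test/test1.jl might look like the following (coolfunc and the expected value 4.0 are made up for illustration):

# test/test1.jl
using PkgName

result = coolfunc(2.0)       # exercise some package functionality
abs(result - 4.0) < 1e-8     # the last expression must evaluate to true

With your test files in place, run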

Pkg.test("PkgName")

And make sure your tests pass. Now set up accounts at Travis CI (for Linux and OSX testing) and AppVeyor (for Windows testing). Modify .travis.yml to be like the following:

# Documentation: http://docs.travis-ci.com/user/languages/julia/
language: julia
os:
  - linux
  - osx
julia:
  - nightly
  - release
  - 0.4.5
matrix:
  allow_failures:
    - julia: nightly
notifications:
  email: false
script:
#  - if [[ -a .git/shallow ]]; then git fetch --unshallow; fi
  - julia -e 'Pkg.init(); Pkg.clone("https://github.com/GITHUBUSER/REPONAME")'
  - julia -e 'Pkg.test("PkgName",coverage=true)'
after_success:
  - julia -e 'Pkg.clone("https://github.com/MichaelHatherly/Documenter.jl")'
  - julia -e 'cd(Pkg.dir("PkgName")); include(joinpath("docs", "make.jl"))'
  - julia -e 'cd(Pkg.dir("PkgName")); Pkg.add("Coverage"); using Coverage; Codecov.submit(Codecov.process_folder())'
  - julia -e 'cd(Pkg.dir("PkgName")); Pkg.add("Coverage"); using Coverage; Coveralls.submit(process_folder())'

If you are using matplotlib/PyPlot you will want to add

ENV["PYTHON"]=""; Pkg.build("PyCall"); using PyPlot;

before Pkg.test("PkgName",coverage=true). Now edit your appveyor.yml to be like the following:

environment:
  matrix:
  - JULIAVERSION: "julialang/bin/winnt/x86/0.4/julia-0.4-latest-win32.exe"
  - JULIAVERSION: "julialang/bin/winnt/x64/0.4/julia-0.4-latest-win64.exe"
matrix:
  allow_failures:
    - JULIAVERSION: "julianightlies/bin/winnt/x86/julia-latest-win32.exe"
    - JULIAVERSION: "julianightlies/bin/winnt/x64/julia-latest-win64.exe"
branches:
  only:
    - master
    - /release-.*/
 
notifications:
  - provider: Email
    on_build_success: false
    on_build_failure: false
    on_build_status_changed: false
 
install:
# Download most recent Julia Windows binary
  - ps: (new-object net.webclient).DownloadFile(
        $("http://s3.amazonaws.com/"+$env:JULIAVERSION),
        "C:\projects\julia-binary.exe")
  - set PATH=C:\Miniconda3;C:\Miniconda3\Scripts;%PATH%
# Run installer silently, output to C:\projects\julia
  - C:\projects\julia-binary.exe /S /D=C:\projects\julia
 
build_script:
# Need to convert from shallow to complete for Pkg.clone to work
  - IF EXIST .git\shallow (git fetch --unshallow)
  - C:\projects\julia\bin\julia -e "versioninfo();
      Pkg.clone(pwd(), \"PkgName\"); Pkg.build(\"PkgName\")"
 
test_script:
  - C:\projects\julia\bin\julia --check-bounds=yes -e "Pkg.test(\"PkgName\")"

Add Coverage

I was sly and already added all of the coverage parts in there! This is done by the commands which add Coverage.jl, the keyword coverage=true in Pkg.test, and then specific functions for sending the coverage data to the appropriate places. Set up accounts on Codecov and Coveralls.
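If you want to check coverage locally before relying on those services, one way is to run Pkg.test("PkgName", coverage=true) and then, from the package's directory, summarize the generated .cov files with Coverage.jl. This is just a sketch using its process_folder and get_summary functions:

using Coverage

results = process_folder()                 # parse the .cov files generated under src/
covered, total = get_summary(results)      # covered lines vs. total relevant lines
println("Coverage: ", round(100*covered/total, 2), "%")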

Fix Up Readme

Now update your readme to match your documentation, and add the badges for testing, coverage, and docs from the appropriate websites.

Update Your Repository

Now push everything into your Git repository. `cd` into your package directory and using the command line do:

git add --all
git commit -m "Commit message"
git push origin master

or something of the like. On Windows you can use their GUI. Check your repository and make sure everything is there. Wait for your tests to pass.

Publish Your Package

Now publish your package. This step is optional, but if you do this then people can add your package by just doing `Pkg.add("PkgName")`. To do this, simply run the following:

Pkg.update()
Pkg.register("PkgName")
Pkg.tag("PkgName")
Pkg.publish()

This will give you a URL. Open it in your browser, write a message with your pull request, and submit it. If all goes well, they will merge the changes and your package will be registered with METADATA.jl.

That's it! Now every time you commit, your package will automatically be tested, coverage will be calculated, and documentation will be updated. Note that for people to get the changes you made to your code, they will need to run `Pkg.checkout("PkgName")` unless you tag and publish a new version.
