GPU Programming in Julia

By: Julia Computing, Inc.

Re-posted from: http://juliacomputing.com/blog/2016/06/09/julia-gpu.html

Scientific problems have been traditionally solved with powerful clusters of homogeneous CPUs connected in a variety of network topologies.
However, the number of supercomputers that employ accelerators has been steadily rising: the latest Top500 list, released at SC ‘15, puts the count at 109.

[Figure: Accelerators in the Top500 list, SC15]

Accelerators that are employed in practice are mostly graphics processing units (GPUs), Xeon Phis and FPGAs.
These accelerators feature many-core architectures that can exploit both coarse- and fine-grained parallelism.
However, the traditional problem with GPUs and other accelerators has been the ease (or lack thereof) of programming them.
To this end, NVIDIA Corporation designed the now-pervasive Compute Unified Device Architecture (CUDA) to provide a C-like interface for scientific and general purpose programming.
This was a considerable improvement over previous frameworks such as DirectX or OpenGL, which required advanced skills in graphics programming.
Even so, CUDA sits low on the productivity curve, with programmers having to fine-tune their applications for different devices and algorithms.
In this context, interactive programming on the GPU would provide tremendous benefits to scientists and programmers who not only wish to prototype their applications, but to deploy them with little or no code changes.

Julia on GPUs

Julia offers programmers the ability to code interactively on the GPU.
There are several GPU libraries with Julia wrappers, giving Julia users access to accelerated BLAS, FFTs, sparse routines and solvers, and deep learning.
With a combination of these packages, programmers can interactively develop custom GPU kernels.
One such example is the conjugate gradient method, which has been benchmarked below:

[Figure: Conjugate gradient benchmark]
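
For a sense of what such a kernel looks like, here is a minimal textbook conjugate gradient in plain Julia. This is an illustrative sketch, not the benchmarked implementation; the point is that every operation it uses (matrix-vector products, dot products, vector updates) is one that the accelerated libraries provide:

#Solve A*x = b for symmetric positive definite A
function cg(A, b; tol = 1e-8, maxiter = length(b))
  x = zeros(b)
  r = b - A*x
  p = copy(r)
  rsold = dot(r, r)
  for i = 1:maxiter
    Ap = A*p
    alpha = rsold / dot(p, Ap)
    x += alpha*p
    r -= alpha*Ap
    rsnew = dot(r, r)
    sqrt(rsnew) < tol && break
    p = r + (rsnew/rsold)*p
    rsold = rsnew
  end
  x
end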

However, one might argue that low-level wrapper libraries do little to increase programmer productivity, since they involve working with obscure function interfaces. Ideally, one would have a clean interface for arrays on the GPU, together with a convenient standard library that operates on them, where each operation is tuned to the device in question for the best performance. The folks over at ArrayFire have put together a high-quality open source library that does exactly this for scientific problems on GPUs.

ArrayFire.jl

ArrayFire.jl is a set of Julia bindings to that library. It is designed to mimic the Julia standard library in its versatility and ease of use, providing an easy-yet-powerful array interface that points to locations in GPU memory.

Julia’s multiple dispatch and generic programming capabilities make it possible for users to write natural mathematical code and transparently leverage GPUs for performance. This is done by defining a type AFArray as a subtype of AbstractArray.
An AFArray acts as an interface to an array in device memory. A set of functions is imported from Base Julia and dispatched on the new AFArray type, so users can write Julia code that runs on the CPU and port it to the GPU with very few changes. In addition to functions that mimic Julia’s standard library, ArrayFire.jl provides powerful functions for image processing and computer vision, amongst others.
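
As a small illustration (a hypothetical snippet, not taken from the package’s documentation), the same generic function can be written once and run on either array type, with dispatch selecting CPU or GPU methods:

using ArrayFire

#A generic function written against the AbstractArray interface
f(x) = sum(sin(x) .* x + 2x)

f(rand(100, 100))           #Executes on the CPU with a plain Array
f(AFArray(rand(100, 100)))  #Executes on the GPU with an AFArray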

Usage

The following examples illustrate high level usage:

using ArrayFire

#Random number generation
a = rand(AFArray{Float64}, 100, 100)
b = randn(AFArray{Float64}, 100, 100)

#Transfer to device from the CPU
host_to_device = AFArray(rand(100,100))

#Transfer back to CPU
device_to_host = Array(host_to_device)

#Basic arithmetic operations
c = sin(a) + 0.5
d = a * 5

#Logical operations
c = a .> b
any_trues = any(c)

#Reduction operations
total_max = maximum(a)
colwise_min = min(a,2)

#Matrix operations
determinant = det(a)
b_positive = abs(b)
product = a * b
dot_product = a .* b
transposer = a'

#Linear Algebra
lu_fact = lu(a)
cholesky_fact = chol(a*a') #Multiplied to create a positive definite matrix
qr_fact = qr(a)
svd_fact = svd(a)

#FFT
fast_fourier = fft(a)

Benchmarks

ArrayFire.jl has also been benchmarked for common operations (Note that Julia’s default RNG and the one that ArrayFire uses are not directly comparable):

[Figure: Benchmarks of common operations]

The benefits of accelerated code can be seen in real-world applications.
Consider the following image segmentation demo on satellite footage of Hurricane Katrina.
Image segmentation is an important step in weather forecasting and must be performed on many high-definition images every day.
In such a use case, interactive GPU programming lets the application designer leverage powerful graphics processing on the GPU with little or no change to the original prototype.
The application uses the K-means algorithm, which is easily expressed in Julia and accelerated by ArrayFire.jl.
It initializes some random clusters and then reassigns the clusters according to Manhattan distances.
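
A rough sketch of the reassignment step (illustrative names, not the demo’s actual source) shows how naturally it maps onto the array operations ArrayFire.jl provides:

#Assign each pixel (a column of the d x n matrix pixels) to the nearest
#of k centers under the Manhattan (L1) distance
function assign_clusters(pixels, centers)
  k = size(centers, 2)
  n = size(pixels, 2)
  dists = zeros(k, n)
  for j = 1:k
    dists[j, :] = sum(abs(pixels .- centers[:, j]), 1)  #L1 distances to center j
  end
  vec(mapslices(indmin, dists, 1))  #index of the nearest center for each pixel
end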

Another interesting example is non-negative matrix factorization, which is often used in linear algebra
and multivariate analysis. It is applied in fields such as computer vision,
document clustering, chemometrics, audio signal processing,
and recommender systems.
The following application implements the Lee-Seung algorithm:

[Figure: NMF benchmark]
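
The Lee-Seung multiplicative updates themselves take only a few lines of Julia. The following is a minimal sketch using plain arrays and a hypothetical function name; the accelerated version performs the same matrix operations on the GPU:

#Minimal sketch of the Lee-Seung multiplicative updates for A ≈ W*H
function nmf(A, k; iters = 200, delta = 1e-9)
  m, n = size(A)
  W = rand(m, k)
  H = rand(k, n)
  for i = 1:iters
    H = H .* (W' * A) ./ (W' * W * H .+ delta)   #update H; delta avoids division by zero
    W = W .* (A * H') ./ (W * (H * H') .+ delta) #update W
  end
  W, H
end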

Changing Backends

ArrayFire.jl has the added advantage that it can switch backends at runtime, allowing a user to choose the appropriate backend according to hardware availability.

setBackend(AF_BACKEND_CUDA)
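
The other backend constants follow the same pattern, mirroring the upstream ArrayFire backend enum (consult the package README for the exact names in your version):

setBackend(AF_BACKEND_CPU)     #fall back to the CPU backend
setBackend(AF_BACKEND_OPENCL)  #use the OpenCL backend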

Future Work

  • Letting ArrayFire.jl users easily interface with other packages in the JuliaGPU ecosystem
    would give them access to accelerated and possibly more memory-efficient kernels
    (for signal processing or deep learning, for example).

  • Currently, only dense linear algebra is supported. It would be worthwhile to wrap sparse linear algebra
    libraries and interface with them seamlessly.

  • Allow users to interface with packages such as GLVisualize.jl
    for 3D visualizations on the GPU using OpenGL (or Vulkan, its recently released successor).

Using Julia’s Type System For Hidden Performance Gains

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/using-julias-type-system-hidden-performance-chunkedarrays-growablearrays-ellipsisnotation/

What I want to share today is how you can use Julia’s type system to hide performance gains in your code. What I mean is this: in many cases you may find that the optimal way to do some calculation is not a “clean” solution. What do you do? I want to show how you can define special arrays which are wrappers such that these special “speedups” happen in the background, without having to keep all of that muck in your main algorithms. This is easiest to show by example.

The examples I will be building towards are useful for solving ODEs and SDEs. Indeed, these tricks have all been implemented as part of DifferentialEquations.jl, so these examples come from a real use case! They really highlight a main feature of Julia: since Julia code is fast (as long as you have type stability!), you don’t need to worry about writing code outside of Julia, and you can take advantage of higher-level features (with caution). In Python, one would normally only get speed benefits by going down to C, so utilizing these complex objects would not beat simply using numpy arrays in the most vectorized fashion. The same holds for R. And in MATLAB, it would be really tough to implement most of this at all!

Taking Random Numbers in Chunks: ChunkedArrays

Let’s say we need to take random numbers in a loop like the following:

for i = 1:N
  dW = randn(size(u))
  #Do some things and add dW to u
end

While this is the “intuitive” code to write, it’s not necessarily the best. While there have been some improvements since early Julia, in principle it’s just slower to generate 1000 random numbers via 1000 separate calls to randn() than via a single call to randn(1000). This is because of internal speedups due to caching, SIMD, etc., and you can find mentions of this fact all over the web, especially where people discuss fast random number generators like those in the VSL library.
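
You can check the claim on your own machine with a quick (and unscientific) comparison:

function many_scalar_calls()
  for i = 1:10^6
    randn()
  end
end
one_vectorized_call() = randn(10^6)

many_scalar_calls(); one_vectorized_call()  #compile first
@time many_scalar_calls()
@time one_vectorized_call()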

So okay, what we really want to do is the following: every bufferSize steps, create a new buffer of random numbers dW of size size(u)*bufferSize, work through the buffer until it is all used up, and then grab another one.

for i = 1:N
  if (i-1) % bufferSize == 0
    dW = randn(size(u)..., bufferSize)  #grab a fresh buffer of bufferSize chunks
  end
  #Do some things and add dW[.., (i-1)%bufferSize + 1] to u
end

But wait: what if we don’t always use one random number per step? Sometimes the algorithm may need to use more than one! So you can make an integer k which tracks the current position in the buffer, and then at each point where it is incremented, add the conditional to grab a new buffer, etc. Also, what if you want the buffer generated in parallel? As you can see: code complexity explosion, just to go a little faster.

This is where ChunkedArrays come in. I defined an array type which does the chunking/buffering in the background, so that the code in the algorithm stays clean. A ChunkedArray is a wrapper over an array which uses a next command to hide all of this complexity. Thus, to generate random numbers in chunks and get this speed improvement, you can use code like this:

rands = ChunkedArray(u)
for i = 1:N
  dW = next(rands)
  #Do some things and add dW to u
end

Any time another random number is needed, you just call next. The ChunkedArray internally stores an array and the state of the buffer, and the next function automatically checks and replenishes the buffer, launching another process to do so in parallel if the user wants. Thus we get the optimal solution without sacrificing cleanliness. I chopped about 10% off the runtime of the Euler-Maruyama code in DifferentialEquations.jl by switching to ChunkedArrays, and haven’t thought about it since.
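
For intuition, here is a rough sketch of the machinery a ChunkedArray hides. The type, field and helper names here are hypothetical; see ChunkedArrays.jl for the real implementation:

using EllipsisNotation
import Base: next

type SketchChunkedArray{T,N}
  buffer::Array{T,N}  #holds bufferSize pre-generated random chunks
  state::Int          #index of the next unused chunk
end
chunkedrandn(u, bufferSize = 100) =
  SketchChunkedArray(randn(size(u)..., bufferSize), 1)

function next(c::SketchChunkedArray)
  if c.state > size(c.buffer)[end]
    randn!(c.buffer)  #replenish; a parallel refill would happen here
    c.state = 1
  end
  dW = c.buffer[.., c.state]  #EllipsisNotation grabs one chunk
  c.state += 1
  dW
end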

Safe Vectors of Arrays and Conversion: GrowableArrays

First let’s look at the performance difference between Vectors of Arrays and higher-dimensional contiguous arrays when using them in a loop. Julia’s arrays can take a parametric type that makes them hold other arrays, which makes such an array essentially an array of pointers. The issue is that this adds an extra dereferencing cost every time an element is accessed. On the other hand, for a higher-dimensional contiguous array, referencing with : has to generate a slice each time. Which way is more performant?

function test1()
  u = Array{Int}(4,4,3)
  u[:,:,1] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  u[:,:,2] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  u[:,:,3] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  j = 1
  for i = 1:100000
    j += sum(u[:,:,1] + u[:,:,2] + 3u[:,:,3] + u[:,:,i%3+1] -  u[:,:,(i%j)%2+1])
  end
end
 
 
 
function test2()
  u = Vector{Matrix{Int}}(3)
  u[1] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  u[2] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  u[3] = [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  j = 1
  for i = 1:100000
    j += sum(u[1] + u[2] + 3u[3] + u[i%3+1] - u[(i%j)%2+1])
  end
end
 
 
function test3()
  u = Array{Int}(3,4,4)
  u[1,:,:] = reshape([1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1],(1,4,4))
 
  u[2,:,:] = reshape([1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1],(1,4,4))
 
  u[3,:,:] = reshape([1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1],(1,4,4))
 
  j = 1
  for i = 1:100000
    j += sum(u[1,:,:] + u[2,:,:] + 3u[3,:,:] + u[i%3+1,:,:] - u[(i%j)%2+1,:,:])
  end
end
 
#Pre-compile
test1()
test2()
test3()
 
t1 = @elapsed for i=1:10 test1() end
t2 = @elapsed for i=1:10 test2() end
t3 = @elapsed for i=1:10 test3() end
 
println("Test results: t1=$t1, t2=$t2, t3=$t3")
#Test results: t1=1.239379946, t2=0.576053075, t3=1.533462129

So using Vectors of Arrays is fastest for dereferencing.

Now think about adding to an array. If you have a Vector of pointers and need to resize the array, it’s much easier to resize and copy over some pointers than it is to copy over all of the arrays. So, if you’re going to grow an array in a loop, the Vector of Arrays is the fastest implementation! Here’s a quick benchmark from GrowableArrays.jl:

using GrowableArrays, EllipsisNotation

const NUM_RUNS = 100
const PROBLEM_SIZE = 1000
function test1()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = u
  for i = 1:PROBLEM_SIZE
    uFull = hcat(uFull,u)
  end
  uFull
end
 
function test2()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = u
 
  for i = 1:PROBLEM_SIZE
    uFull = vcat(uFull,u)
  end
  uFull
end
 
function test3()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = Vector{Int}(0)
  sizehint!(uFull,PROBLEM_SIZE*16)
  append!(uFull,vec(u))
 
  for i = 1:PROBLEM_SIZE
    append!(uFull,vec(u))
  end
  return reshape(uFull,4,4,PROBLEM_SIZE+1)  #reshape is not in-place, so return its result
end
 
function test4()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = Vector{Array{Int}}(0)
  push!(uFull,copy(u))
 
  for i = 1:PROBLEM_SIZE
    push!(uFull,copy(u))
  end
  uFull
end
 
function test5()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = Vector{Array{Int,2}}(0)
  push!(uFull,copy(u))
 
  for i = 1:PROBLEM_SIZE
    push!(uFull,copy(u))
  end
  uFull
end
 
function test6()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = Vector{typeof(u)}(0)
  push!(uFull,u)
 
  for i = 1:PROBLEM_SIZE
    push!(uFull,copy(u))
  end
  uFull
end
 
function test7()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = GrowableArray(u)
  for i = 1:PROBLEM_SIZE
    push!(uFull,u)
  end
  uFull
end
 
function test8()
  u =    [1 2 3 4
          1 3 3 4
          1 5 6 3
          5 2 3 1]
 
  uFull = GrowableArray(u)
  sizehint!(uFull,PROBLEM_SIZE)
  for i = 1:PROBLEM_SIZE
    push!(uFull,u)
  end
  uFull
end
 
println("Run Benchmarks")
println("Pre-Compile")
#Compile Test Functions
test1()
test2()
test3()
test4()
test5()
test6()
test7()
test8()
 
println("Running Benchmarks")
t1 = @elapsed for i=1:NUM_RUNS test1() end
t2 = @elapsed for i=1:NUM_RUNS test2() end
t3 = @elapsed for i=1:NUM_RUNS test3() end
t4 = @elapsed for i=1:NUM_RUNS test4() end
t5 = @elapsed for i=1:NUM_RUNS test5() end
t6 = @elapsed for i=1:NUM_RUNS test6() end
t7 = @elapsed for i=1:NUM_RUNS test7() end
t8 = @elapsed for i=1:NUM_RUNS test8() end
 
println("Benchmark results: $t1 $t2 $t3 $t4 $t5 $t6 $t7 $t8")
 
#Benchmark results: 1.923640854 2.131108443 0.012493308 0.00866045 0.005246504 0.00532613 0.00773568 0.00819909

As you can see in test7 and test8, I created a GrowableArray, which is an array that acts like a Vector of Arrays. However, it has added functionality: if you copy(G), what you get is the contiguous array. Therefore in the loop you can grow the array in the quickest manner, and at any time (say, to plot the result after the loop) you can copy it to a contiguous array, which is better suited for interop with C and other goodies.

It also hides a few painful things. Notice that in the code we pushed a copy of u (copy(u)). This is because when u is an array, only a reference to the array gets pushed, so if we simply push!(uFull,u), every element of uFull is actually the same item! This benchmark won’t catch the issue, but try changing u and you will see that every element of uFull changes if you don’t use copy. This can be a nasty bug, so instead we build copy() into the push!() command for the GrowableArray. That creates another issue: since copying a GrowableArray produces a contiguous array, push! must not copy arguments which are themselves GrowableArrays, or you could never create GrowableArrays of GrowableArrays. However, this is easily managed via dispatch.
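
A toy version of this dispatch trick (a simplified stand-in, not the actual GrowableArrays.jl source) looks like this:

import Base: push!

#Toy stand-in for GrowableArray: stores snapshots of u in a Vector
type ToyGrowable{T}
  data::Vector{T}
end
ToyGrowable(u) = ToyGrowable{typeof(u)}([copy(u)])

#push! copies plain arrays so later mutation of u cannot alias the history...
push!(G::ToyGrowable, u::AbstractArray) = (push!(G.data, copy(u)); G)
#...while a separate method skips the copy for ToyGrowable arguments
#(a ToyGrowable is not an AbstractArray), so nesting them works
push!(G::ToyGrowable, u::ToyGrowable) = (push!(G.data, u); G)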

Helping Yourself and the Julia Community

Simple projects like these lead to re-usable solutions that improve performance while allowing for ease of use. I have just detailed some projects I have personally done (and there is more to do!), but there are others that should be pointed out. I am fond of projects like VML.jl, which speeds up standard functions, and DoubleDouble.jl, which implements efficient quad-precision numbers that you can use in place of other number types.

I think Julia will succeed not by the “killer packages” that are built in Julia, but by a rich type ecosystem that will make everyone want to build their “killer package” in Julia.


STAC-A2 Benchmark

By: Julia Computing, Inc.

Re-posted from: http://juliacomputing.com/blog/2016/05/25/stac-a2-benchmark.html

Julia Computing has recently completed an initial implementation of the STAC-A2 benchmark. STAC-A2 is an industry-standard benchmark suite for testing compute-intensive analytic workloads in options pricing and risk management. The primary application defined in the STAC-A2 benchmark is the computation of various “Greeks” (derivatives of the calculus variety) for a particular type of exotic option (derivatives of the financial variety). The main benchmark includes a Monte Carlo simulation of the underlying securities using the Andersen QE formulation of the Heston stochastic volatility model, followed by American-exercise pricing of the option using the Longstaff-Schwartz method. The particular option being evaluated is a lookback, best-of option on a basket of securities. The benchmark tests how an implementation scales by discretizing the simulation over a cube of time steps, Monte Carlo paths, and number of assets in the basket.
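
To give a flavor of the Monte Carlo portion of the workload, here is a minimal illustrative sketch: a plain Euler discretization of the Heston model with full truncation. This is not the Andersen QE scheme the benchmark specifies, and the parameter names are just the conventional ones:

#Simulate npaths terminal prices of one asset under the Heston model:
#  dS = mu*S*dt + sqrt(v)*S*dW1
#  dv = kappa*(theta - v)*dt + xi*sqrt(v)*dW2,  corr(dW1, dW2) = rho
function heston_paths(S0, v0, mu, kappa, theta, xi, rho, T, nsteps, npaths)
  dt = T / nsteps
  S = fill(S0, npaths)
  v = fill(v0, npaths)
  for i = 1:nsteps
    z1 = randn(npaths)
    z2 = rho*z1 + sqrt(1 - rho^2)*randn(npaths)
    vp = max(v, 0)  #full truncation keeps the variance nonnegative
    S = S .* exp((mu - vp/2)*dt + sqrt(vp*dt) .* z1)
    v = v + kappa*(theta - vp)*dt + xi*sqrt(vp*dt) .* z2
  end
  S
end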

The initial Julia implementation of the STAC-A2 benchmark is a single-process, single-threaded implementation using Julia 0.4.5 on an Amazon Web Services (AWS) Elastic Compute Cloud (EC2) r3.2xlarge server. It mirrors the sequential algorithms of the example STAC-A2 reference model. The unaudited report, configuration disclosure, and code for the current Julia implementation are available to STAC members with a premium subscription.