GPU Programming in Julia

Scientific problems have traditionally been solved on powerful clusters of homogeneous CPUs connected in a variety of network topologies.
However, the number of supercomputers that employ accelerators has been steadily rising: the latest Top500 list, released at SC '15, puts that number at 109.

[Figure: Accelerators in the Top500 (SC '15)]

The accelerators employed in practice are mostly graphics processing units (GPUs), Xeon Phis, and FPGAs.
These accelerators feature many-core architectures that can exploit both coarse- and fine-grained parallelism.
However, the traditional problem with GPUs and other accelerators has been the ease (or lack thereof) of programming them.
To this end, NVIDIA designed the now-pervasive Compute Unified Device Architecture (CUDA) to provide a C-like interface for scientific and general-purpose programming.
This was a considerable improvement over previous frameworks such as DirectX or OpenGL, which required advanced skills in graphics programming.
Even so, CUDA still ranks low on the productivity curve, since programmers have to fine-tune their applications for each device and algorithm.
In this context, interactive programming on the GPU would provide tremendous benefits to scientists and programmers who wish not only to prototype their applications, but to deploy them with little or no code change.

Julia on GPUs

Julia offers programmers the ability to code interactively on the GPU.
Several GPU libraries are wrapped in Julia, giving users access to accelerated BLAS, FFTs, sparse routines and solvers, and deep learning.
With a combination of these packages, programmers can interactively develop custom GPU kernels.
One such example is the conjugate gradient method, which is benchmarked below:

[Figure: Conjugate gradient benchmark]
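
For reference, such a kernel can be written entirely with generic array operations. The following is a minimal sketch of the standard method (assuming A is symmetric positive definite), not the exact benchmarked code:

# Conjugate gradient for A * x = b, written against generic array
# operations (*, +, -, elementwise .*) so that Arrays run on the CPU
# and AFArrays run on the GPU with no change to the function body.
function conjgrad(A, b, x; maxiter = 100, tol = 1e-8)
    r = b - A * x
    p = r
    rsold = sum(r .* r)
    for _ in 1:maxiter
        Ap = A * p
        alpha = rsold / sum(p .* Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rsnew = sum(r .* r)
        sqrt(rsnew) < tol && break    # converged
        p = r + (rsnew / rsold) * p
        rsold = rsnew
    end
    return x
end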

However, one might argue that low-level wrapper libraries do little to increase programmer productivity, since they involve working with obscure function interfaces.
Ideally, one would have a clean interface to arrays on the GPU, together with a convenient standard library that operates on those arrays, where each operation is tuned for the device in question to achieve high performance.
The folks over at ArrayFire have put together a high-quality open source library that does exactly this for scientific problems on GPUs.

ArrayFire.jl

ArrayFire.jl is a set of Julia bindings to the ArrayFire library.
It is designed to mimic the Julia standard library in its versatility and ease of use, providing an easy yet powerful array interface that points to locations in GPU memory.

Julia’s multiple dispatch and generic programming capabilities make it possible for users to write natural mathematical code and transparently leverage GPUs for performance. This is done by defining a type AFArray as a subtype of AbstractArray.
An AFArray acts as an interface to an array in device memory. A set of functions is imported from Base Julia and dispatched across the new AFArray type, so users can write Julia code that runs on the CPU and port it to the GPU with very few code changes. In addition to functions that mimic Julia’s standard library, ArrayFire.jl provides powerful functions for image processing and computer vision, amongst others.
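
To make that concrete, here is a hypothetical sketch (the function name score is illustrative, not part of ArrayFire.jl); it uses only operations shown in the usage examples below, so the same definition runs on either device:

# A generic function, written once in ordinary Julia
score(x, y) = maximum(sin(x) .* y + 0.5)

a = rand(100, 100); b = rand(100, 100)
score(a, b)                      # executes on the CPU
score(AFArray(a), AFArray(b))    # same code, executes on the GPU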

Usage

The following examples illustrate high level usage:

using ArrayFire

# Random number generation
a = rand(AFArray{Float64}, 100, 100)
b = randn(AFArray{Float64}, 100, 100)

# Transfer to the device from the CPU
host_to_device = AFArray(rand(100, 100))

# Transfer back to the CPU
device_to_host = Array(host_to_device)

# Basic arithmetic operations
c = sin(a) + 0.5
d = a * 5

# Logical operations
c = a .> b
any_trues = any(c)

# Reduction operations
total_max = maximum(a)
colwise_min = minimum(a, 1)

# Matrix operations
determinant = det(a)
b_positive = abs(b)
product = a * b
elementwise_product = a .* b
transposer = a'

# Linear algebra
lu_fact = lu(a)
cholesky_fact = chol(a * a')  # multiplied to create a positive definite matrix
qr_fact = qr(a)
svd_fact = svd(a)

# FFT
fast_fourier = fft(a)

Benchmarks

ArrayFire.jl has also been benchmarked on common operations (note that Julia's default RNG and the one ArrayFire uses are not directly comparable):

[Figure: Benchmarks of common operations]

The benefits of accelerated code can be seen in real-world applications.
Consider the following image segmentation demo on satellite footage of Hurricane Katrina.
Image segmentation is an important step in weather forecasting,
and must be performed on many high-definition images daily. In such a use case,
interactive GPU programming allows the application designer to
leverage powerful graphics processing with little or no change to the original prototype.
The application uses the k-means algorithm, which can easily be expressed in Julia
and accelerated by ArrayFire.jl.
It initializes a set of random cluster centroids
and then iteratively reassigns points to clusters according to Manhattan distance.
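
A minimal sketch of that loop might look as follows (illustrative code, not the demo's actual source; kmeans_l1 is a hypothetical name, and a production GPU version would vectorize the distance computations rather than loop over rows):

# k-means with the Manhattan (L1) distance. `points` is an n-by-d
# matrix of samples; returns cluster assignments and centroids.
function kmeans_l1(points, k; iters = 10)
    n = size(points, 1)
    centroids = points[rand(1:n, k), :]    # k random rows as initial centroids
    assignment = zeros(Int, n)
    for _ in 1:iters
        # Assignment step: nearest centroid under the L1 distance
        for i in 1:n
            dists = [sum(abs(points[i, :] - centroids[j, :])) for j in 1:k]
            assignment[i] = indmin(dists)
        end
        # Update step: move each centroid to the mean of its members
        for j in 1:k
            members = points[assignment .== j, :]
            size(members, 1) > 0 && (centroids[j, :] = mean(members, 1))
        end
    end
    return assignment, centroids
end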

Another interesting example is non-negative matrix factorization (NMF), which is often used in linear algebra
and multivariate analysis. It is applied in fields such as computer vision,
document clustering, chemometrics, audio signal processing,
and recommender systems.
The following application implements the Lee-Seung multiplicative update algorithm:
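
In outline, the Lee-Seung updates for V ≈ W*H under the Frobenius norm look like this (a minimal sketch, not the benchmarked application itself; the small eps guarding against division by zero is an added assumption):

# Lee-Seung multiplicative updates for V ≈ W * H. W and H are passed
# in so the caller picks the array type: plain Arrays run on the CPU,
# AFArrays run on the GPU; the body only uses *, .*, ./ and '.
function nmf(V, W, H; iters = 100, eps = 1e-9)
    for _ in 1:iters
        H = H .* (W' * V) ./ (W' * W * H + eps)   # update coefficients
        W = W .* (V * H') ./ (W * H * H' + eps)   # update basis
    end
    return W, H
end

# CPU: nmf(V, rand(m, k), rand(k, n))
# GPU: nmf(AFArray(V), rand(AFArray{Float64}, m, k), rand(AFArray{Float64}, k, n))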

[Figure: NMF benchmark]

Changing Backends

ArrayFire.jl has the added advantage that it can switch backends at runtime,
which allows a user to choose the appropriate backend according to hardware availability.

setBackend(AF_BACKEND_CUDA)
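
The upstream ArrayFire library exposes CPU, CUDA, and OpenCL backends. A fallback chain might then look like the following sketch (assuming setBackend throws when the requested backend is unavailable):

try
    setBackend(AF_BACKEND_CUDA)    # prefer CUDA when an NVIDIA GPU is present
catch
    setBackend(AF_BACKEND_OPENCL)  # otherwise fall back to OpenCL
end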

Future Work

  • Allowing ArrayFire.jl users to easily interface with other packages in the JuliaGPU ecosystem
    would give them access to accelerated and possibly more memory-efficient kernels
    (for signal processing or deep learning, for example).

  • Currently, only dense linear algebra is supported. It would be worthwhile to wrap sparse linear algebra
    libraries and interface with them seamlessly.

  • Allow users to interface with packages such as GLVisualize.jl
    for 3D visualizations on the GPU using OpenGL (or Vulkan, its recently released successor).