
Switching from Common Lisp to Julia

By: Tamás K. Papp


I have written this post for developers in the Common Lisp community
who asked why I am switching to Julia. It may only be relevant for the
small set of people who use Common Lisp for scientific computing.

I used Common Lisp for scientific computing for a while, from 2008 to
about 2015, in combination with R and C++. This choice may surprise
people who don’t know about projects like
Maxima or
FEMLISP, but Common Lisp is not a bad
language for scientific computing: it has a great FFI, compilers like
SBCL can generate very fast code with a few hints, and the language
itself is composed of convenient features that interact nicely.

However, around 2012 I started to become very frustrated with Common
Lisp. Despite various attempts, it became very clear that libraries
for scientific computing were not going to take off: there were many
one-person efforts (including mine), but very few of
them evolved into general tools.

Initially, I was puzzled by this: Common Lisp is an extremely
convenient and productive language. Experienced Lisp hackers can write
very complex, fast, and elegant libraries in reasonably short
time. Why did this not happen for numerical code?

The problem with Common Lisp

Now I think that one of the main reasons for this is that while you
can write scientific code in CL that will be (1) fast, (2) portable,
and (3) convenient, you cannot do all three at the same
time. Arrays provide a convenient example of this.


Consider the following array constructor:

(make-array 5 :element-type 'double-float)

The standard does not
guarantee that this gives you an array of element type double-float:
you may get one (if the implementation provides specialized arrays),
otherwise an array of element type T. This turned out to be a major
difficulty for
implementing portable scientific code in Common Lisp.

However, this gets worse: while you can tell a function that operates
on arrays that these arrays have element type double-float, you
cannot dispatch on this, as Common Lisp does not have parametric
types. For example, if you want to write a sum as

(defmethod mysum ((vec vector))
  (let ((s 0))
    (loop for elt across vec
          do (incf s elt))
    s))

you can dispatch on the argument being a vector, but not on the
element type, so the compiled code has to remain generic.

You can of course branch on the array element types and maybe
even paper over the whole mess with sufficient macrology (which is
what LLA ended up doing), but this
approach is not very extensible, as eventually you end up hardcoding a
few special types for which your functions will be "fast", otherwise
they have to fall back to a generic, boxed type. With multiple
arguments, the number of combinations explodes very quickly.

How Julia solves this problem

A comparable native implementation in Julia would be1

function mysum(vec::AbstractVector{T}) where T
    s = zero(T)
    for elt in vec
        s += elt
    end
    s
end

This is still generic: it works for all subtypes of AbstractVector
(including vectors and vector-like objects), but notice how the generated code
is conditional on the element type:

julia> @code_warntype mysum([1, 2, 3])

      s::Int64 = 0 # line 3:
      #temp#::Int64 = 1
      unless (Base.not_int)((#temp#::Int64 === (Base.add_int)((Base.arraylen)(vec::Array{Int64,1})::Int64, 1)::Int64)::Bool)::Bool goto 14
      SSAValue(2) = (Base.arrayref)(vec::Array{Int64,1}, #temp#::Int64)::Int64
      SSAValue(3) = (Base.add_int)(#temp#::Int64, 1)::Int64
      elt::Int64 = SSAValue(2)
      #temp#::Int64 = SSAValue(3) # line 4:
      s::Int64 = (Base.add_int)(s::Int64, elt::Int64)::Int64
      goto 4
      14:  # line 6:
      return s::Int64

julia> @code_warntype mysum([1.0, 2.0, 3.0])

      s::Float64 = (Base.sitofp)(Float64, 0)::Float64 # line 3:
      #temp#::Int64 = 1
      unless (Base.not_int)((#temp#::Int64 === (Base.add_int)((Base.arraylen)(vec::Array{Float64,1})::Int64, 1)::Int64)::Bool)::Bool goto 14
      SSAValue(2) = (Base.arrayref)(vec::Array{Float64,1}, #temp#::Int64)::Float64
      SSAValue(3) = (Base.add_int)(#temp#::Int64, 1)::Int64
      elt::Float64 = SSAValue(2)
      #temp#::Int64 = SSAValue(3) # line 4:
      s::Float64 = (Base.add_float)(s::Float64, elt::Float64)::Float64
      goto 4
      14:  # line 6:
      return s::Float64

I mentioned "vector-like objects" above, since I can choose different
representations for special objects. For example, to do calculations
with a vector of 1s, I can define

struct Ones{T <: Number} <: AbstractVector{T}
    len::Int
end

At this point, in order to calculate the sum above, I have two choices:

  1. Implement the relevant interface with functions like

    Base.length(x::Ones) = x.len

    and similarly for element access, etc. This would let the method above compile to specialized, reasonably efficient code, but it would still iterate over the "elements".

  2. In addition, I can define

    mysum(vec::Ones{T}) where T = vec.len * one(T)

    which would provide a method for mysum.
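Putting the two options together gives a minimal runnable sketch. The `len` field name follows the `vec.len` access above; the `Base.size`/`Base.getindex` definitions are my assumption about how the elided interface methods would look:

```julia
# minimal sketch of the Ones example; the AbstractArray interface
# methods (size, getindex) are filled in by me
struct Ones{T <: Number} <: AbstractVector{T}
    len::Int
end

# option 1: implement the array interface, so generic code
# (including the generic mysum) just works, element by element
Base.size(x::Ones) = (x.len,)
Base.getindex(x::Ones{T}, i::Int) where T = one(T)

# the generic method from the text
function mysum(vec::AbstractVector{T}) where T
    s = zero(T)
    for elt in vec
        s += elt
    end
    s
end

# option 2: a specialized method that skips iteration entirely
mysum(vec::Ones{T}) where T = vec.len * one(T)
```

With both in place, `mysum(Ones{Float64}(5))` hits the specialized method, while plain vectors still use the generic loop.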

A sufficiently rich parametric type system with multiple dispatch
integrated into the language and supported by a JIT compiler is the
secret weapon of Julia. Most of the time, you don’t have to do
anything special, as specialization happens automatically for concrete types. Sometimes,
you have to help the compiler a bit, by writing code where the result
is type stable,
ie the result type just depends on the type (not the value) of the
arguments and can be inferred by the compiler. Sometimes you have to
nudge the compiler a bit, and sometimes you have to be careful not to
mess up type inference: for example, the zero(T) above gives a 0
of type T, always ensuring a correct type that does not change
during the summation.
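The zero(T) point can be illustrated directly; this is a small sketch of my own, and mysum_unstable/mysum_stable are hypothetical names:

```julia
# starting from the literal 0 makes the accumulator an Int first, then
# a Float64 once Float64 elements are added: the compiler cannot infer
# a single concrete type for s
function mysum_unstable(vec)
    s = 0
    for elt in vec
        s += elt
    end
    s
end

# zero(eltype(vec)) fixes the accumulator's type from the start,
# so the loop body is type stable
function mysum_stable(vec)
    s = zero(eltype(vec))
    for elt in vec
        s += elt
    end
    s
end
```

Running @code_warntype on the first version with a Float64 vector would flag s as a Union of Int64 and Float64, while the stable version infers Float64 throughout.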

Comparison of other language features

While I would say that multiple dispatch with parametric types
designed into the language from the ground up is the most important
feature of Julia, there are other language features worth comparing to
Common Lisp.

Macros are supported. Because of infix syntax, the AST is not as simple as
S-expressions, but the tools to work with it are evolving fast. That
said, I don’t write as many macros as I did in Common Lisp. Parametric
types are so powerful that I rarely need macros for performance
reasons, and instead of syntax extensions, I often go for zero-cost
abstraction via functions and wrapper types. An interesting
metaprogramming tool in Julia is generated functions,
which allow code generation based on argument types; I use this
frequently. The equivalent of reader macros are called non-standard
string literals in Julia.

The foreign function interface of Julia is seamlessly integrated into
the language and very convenient to use. Docstrings are almost the
same as in Common Lisp, but they support Markdown. Strings are UTF8 by
default, and very fast. The
community is very vibrant, open,
and helpful. Simple questions get an answer within minutes,
complicated ones (eg compiler internals) may take a bit longer, but
are usually answered within a few hours or a day at most. If you are
coming from the Common Lisp community, you will see quite a few
familiar faces.

The library ecosystem already surpasses that of Common Lisp, at least
for scientific programming. High-quality, well-tested code is
available for linear algebra including sparse matrices (most of it in
the standard library), optimization, differential equations, and
automatic differentiation. The
latter is simply
amazing: by providing a type for dual numbers and a few operations,
forward-mode AD can be used without any implementation
overhead. Plotting libraries are available (mostly using foreign
function backends), and R-like "dataframes" are under development.
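The dual-number idea behind forward-mode AD can be sketched in a few lines. This is a toy version of my own, not the package's actual implementation:

```julia
# toy dual numbers: val carries the value, eps the derivative part
struct Dual{T} <: Number
    val::T
    eps::T
end

Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.eps + b.eps)
Base.:-(a::Dual, b::Dual) = Dual(a.val - b.val, a.eps - b.eps)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val,
                                 a.val * b.eps + a.eps * b.val)

# differentiate f at x by seeding the derivative part with 1
derivative(f, x) = f(Dual(x, one(x))).eps

f(x) = x * x * x - x      # f'(x) = 3x^2 - 1
derivative(f, 2.0)        # 11.0
```

Code written for generic Number arguments then differentiates itself: this is why the implementation overhead is so low.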


Common Lisp has been and remains a great language, with many excellent
features that preceded other languages by decades. It has an ANSI
standard, which means that portable code written decades ago
will run on a recent implementation. This is a great advantage, but at
the same time this freezes the language development at the point the
standard was finalized. No matter how flexible and forward-looking it
is, it cannot predict and accommodate all possible advances for
decades to come.
In contrast, Julia is rapidly evolving. At this stage, code that was
written half a year ago is very likely to be broken with the most
recent release.2 The pace of change will most likely slow down a bit after 1.0 is released, but for now, expect a lot of churning.

On the other hand, programmers who used Common Lisp for scientific
computing have always expected to get their hands dirty, since so
little existing code was available. This is a good time to consider
investing into Julia instead: you will get more done with less work,
and you still get to program in a very elegant language that traces a
lot of its roots to the Lisp family.

  1. This is not the fastest, nor the most precise implementation, just a comparable example. [return]
  2. An elegant deprecation mechanism is available, but that can’t deal with some fundamental language changes. [return]

Branch prediction: yet another example

By: Tamás K. Papp


Tomas Lycken linked a very nice discussion on StackOverflow about branch prediction as a comment on the previous post. It has an intuitive explanation (read it if you like good metaphors) and some Java benchmarks. I was curious about how it looks in Julia.

The exercise is to sum elements in a vector only if they are greater than or equal to 128.

function sumabove_if(x)
    s = zero(eltype(x))
    for elt in x
        if elt ≥ 128
            s += elt
        end
    end
    s
end

This calculation naturally has a branch in it, while the branchless
version, using ifelse, does not:

function sumabove_ifelse(x)
    s = zero(eltype(x))
    for elt in x
        s += ifelse(elt ≥ 128, elt, zero(eltype(x)))
    end
    s
end

The actual example has something different: using tricky bit-twiddling
to calculate the same value. I generally like to leave this sort of
thing up to the compiler, because it is much, much better at it than I
am, and I make mistakes all the time; worse, I don’t know what I
actually did when I reread the code 6 months later. But I included it
here for comparison:

function sumabove_tricky(x::Vector{Int64})
    s = Int64(0)
    for elt in x
        s += ~((elt - 128) >> 63) & elt
    end
    s
end

Following the original example on StackOverflow, we sum 2^15 random integers in 1:256. For this, we don’t need to worry about overflow. We also sum the sorted vector: this will facilitate branch prediction, since the various branches will be contiguous.

I also benchmark a simple version using generators:

sumabove_generator(x) = sum(y for y in x if y ≥ 128)
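The setup for these timings can be sketched as follows; the seed is my addition for reproducibility, not from the original post:

```julia
using Random

Random.seed!(42)          # reproducibility (my assumption)
x  = rand(1:256, 2^15)    # 2^15 random integers in 1:256
xs = sort(x)              # sorted copy: contiguous branches help prediction

# reference implementation, matching the generator version above
sumabove(v) = sum(y for y in v if y ≥ 128)

# sorting must not change the result, only the timing
sumabove(x) == sumabove(xs)
```

Each variant is then timed on both x and xs, eg with @btime from BenchmarkTools.jl.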
Benchmarks (μs)

            random  sorted
if             139      28
ifelse          21      21
if & sort       96     n/a
tricky          27      27
generator      219     168

Benchmarks are in the table above. Note that

  1. for the version with if, working on sorted vectors is dramatically faster (about 5x).

  2. the non-branching ifelse version beats them hands down, and naturally it does not care about sorting.

  3. if you have to use if, then you are better off sorting first, even when the time spent sorting is taken into account.

  4. generators are surprisingly bad.

  5. the tricky bit-twiddling version is actually worse than ifelse (which reinforces my aversion to it).

Self-contained code for everything is available below.

download code as code.jl

CPU pipelines: when more is less

By: Tamás K. Papp


I have been working on micro-optimizations for some simulation
code, and was reminded of a counter-intuitive artifact of modern CPU
architecture, which is worth a short post.

Consider (just for the sake of example) a very simple (if not
particularly meaningful) function,

f(x) = \begin{cases}
(x+2)^2 & \text{if } x \ge 0,\\
1-x & \text{otherwise}
\end{cases}

with implementations

f1(x) = ifelse(x ≥ 0, abs2(x+2), 1-x)
f2(x) = x ≥ 0 ? abs2(x+2) : 1-x

f1 calculates both possibilities before choosing between them with
ifelse, while f2 only evaluates the branch that is taken. So,
intuitively, f2 should be faster.

But it isn’t…

julia> x = randn(1_000_000);

julia> using BenchmarkTools

julia> @btime f1.($x);
  664.228 μs (2 allocations: 7.63 MiB)

julia> @btime f2.($x);
  6.519 ms (2 allocations: 7.63 MiB)

…it is about 10x slower.

This can be understood as an artifact of the instruction
pipeline: your
x86 CPU likes to perform similar operations in a staggered manner, and
it does not like branches (jumps) because they break the flow.

Comparing the native code reveals that while f1 is jump-free, the if in f2 results in a jump (jae):

julia> @code_native f1(1.0)
Filename: REPL[2]
        pushq   %rbp
        movq    %rsp, %rbp
        movabsq $139862498743472, %rax  # imm = 0x7F34468E14B0
        movsd   (%rax), %xmm2           # xmm2 = mem[0],zero
Source line: 1
        addsd   %xmm0, %xmm2
        mulsd   %xmm2, %xmm2
        movabsq $139862498743480, %rax  # imm = 0x7F34468E14B8
        movsd   (%rax), %xmm3           # xmm3 = mem[0],zero
        subsd   %xmm0, %xmm3
        xorps   %xmm1, %xmm1
        cmpnlesd        %xmm0, %xmm1
        andpd   %xmm1, %xmm3
        andnpd  %xmm2, %xmm1
        orpd    %xmm3, %xmm1
        movapd  %xmm1, %xmm0
        popq    %rbp
        nopw    %cs:(%rax,%rax)

julia> @code_native f2(1.0)
Filename: REPL[3]
        pushq   %rbp
        movq    %rsp, %rbp
Source line: 1
        xorps   %xmm1, %xmm1
        ucomisd %xmm1, %xmm0
        jae     L37
        movabsq $139862498680736, %rax  # imm = 0x7F34468D1FA0
        movsd   (%rax), %xmm1           # xmm1 = mem[0],zero
        subsd   %xmm0, %xmm1
        movapd  %xmm1, %xmm0
        popq    %rbp
        retq
L37:
        movabsq $139862498680728, %rax  # imm = 0x7F34468D1F98
        addsd   (%rax), %xmm0
        mulsd   %xmm0, %xmm0
        popq    %rbp
        nopl    (%rax)

In my application the speed gain was more modest, but still
sizeable. Benchmarking a non-branching version of your code is
sometimes worth it, especially if the change is simple and both
branches of the conditional can be run error-free. If, for example, we
had to calculate

g(x) = x ≥ 0 ? √(x+2) : 1-x

then we could not use ifelse without restricting the domain, since
√(x+2) would fail whenever x < -2.
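In such cases one workaround (my sketch, not from the original) is to make both branches always valid, eg by clamping the argument of the partial operation:

```julia
g(x) = x ≥ 0 ? √(x + 2) : 1 - x

# a naive ifelse would evaluate √(x + 2) even when x < 0, and
# throw a DomainError whenever x < -2:
#     g_bad(x) = ifelse(x ≥ 0, √(x + 2), 1 - x)

# workaround: clamp the argument so the √ branch is always valid;
# its result is discarded whenever x < 0, so clamping cannot
# change the answer
g_branchless(x) = ifelse(x ≥ 0, √(max(x, zero(x)) + 2), 1 - x)
```

Whether the branchless version actually wins still has to be benchmarked: the extra max and √ work is only worth it if the branch was poorly predicted.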

Julia Base contains many optimizations like this: for a particularly
nice example see functions that use Base.null_safe_op.