
Accelerate Your Julia Code with Effective Profiling Methods

By: Great Lakes Consulting

Re-posted from: https://blog.glcs.io/profiling_allocations

This post was written by Steven Whitaker.

In this Julia for Devs post, we will discuss using Julia’s Profile standard library for performance and allocation profiling. We will illustrate these tools with example code and then show how to improve the code.

This post will showcase the techniques we used to significantly accelerate our client’s simulations, resulting in a remarkable 25% reduction in run time for code that was already highly optimized for performance. The impact of these techniques on our client’s work is a testament to their effectiveness and should inspire you in your own projects.

Many new Julia developers face poor code performance, but these techniques can net 10x or even 100x faster run times!

While we won’t be sharing client-specific code, we will provide similar examples that are practical and applicable to a wide range of simulations. This approach should give you the confidence to apply these techniques in your own work.

Here are some of the key ideas we will focus on:

  • Pinpoint areas of improvement with Profile.@profile.
  • Track down allocations with Profile.Allocs.@profile.
  • Improve run time and reduce allocations with @generated functions.

Let’s dive in!

Profiling in Julia

Profiling is a crucial tool for locating performance bottlenecks. This knowledge is invaluable, as it guides development efforts to the parts of the code that will have the most significant impact on run time. Understanding this process will keep you informed and in control of your code’s performance.

Here is some example code we will use to illustrate profiling in Julia:

using StaticArrays: SVector

function kernel_original(x::SVector{N}) where N
    y = zeros(N + 1)
    y[1] = x[1]
    for i = 2:N
        y[i] = 0.5 * y[i-1] + x[i]
    end
    y[end] = sum(@view(y[1:end-1]))
    return SVector{N+1}(y)
end

function workflow_original()
    total = 0.0
    for i = 1:100
        x = SVector{10, Float64}(rand(10))
        y = kernel_original(x)
        total += y[end]
    end
    return total
end

The main idea with this example is that there is a workflow, workflow_original, that calls out to a core computation for many different input values.

To profile this code, we will use the Profile standard library. Since Profile implements a statistical profiler, we need to ensure the code we profile runs long enough to reduce the impact of noise in the measurements (see the documentation for more info). So, let’s see how long workflow_original takes (using BenchmarkTools.jl):

julia> using BenchmarkTools

julia> @btime workflow_original();
  6.410 μs (300 allocations: 25.00 KiB)

6 microseconds is very fast compared to the default profiling sample delay of 1 millisecond. Therefore, we must run the workflow many times to get enough samples. We have found that 1000–5000 samples are usually plenty for identifying performance bottlenecks, though slower code will likely need more samples and/or a larger sample delay.

So, to get approximately 1000 samples, we will need to run the workflow \( 1000 \cdot \frac{1\ \mathrm{ms}}{0.006\ \mathrm{ms}} \approx 166{,}667 \) times. We’ll round up to 200,000 for good measure.
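As a sanity check, the iteration count can be computed directly (the 6 μs figure is rounded from the benchmark above):

```julia
# Desired number of profiling samples, default sample delay, and the
# (rounded) benchmarked run time of one call to `workflow_original`.
target_samples = 1000
sample_delay = 1e-3   # 1 ms
run_time = 6e-6       # ~6 μs

iterations = ceil(Int, target_samples * sample_delay / run_time)
# iterations == 166667, which we round up to 200,000
```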

Let’s see how to profile the code:

julia> using Profile

julia> Profile.clear()

julia> workflow_original();

julia> Profile.@profile for _ in 1:200_000 workflow_original() end

A couple of notes:

  • The Profile.clear() step is not strictly necessary. However, a habit of always clearing the profile data ensures you don’t inadvertently spoil your profiles with old data from previous profiling runs.
  • We called workflow_original once before profiling to avoid profiling JIT compilation.
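As an aside, the sample delay (and the sample buffer size) can be adjusted with Profile.init; slower workflows might warrant a larger delay. A minimal sketch:

```julia
using Profile

# With no arguments, `Profile.init` returns the current (n, delay) settings.
n, delay = Profile.init()

# Enlarge the sample buffer and set an explicit 1 ms sample delay.
Profile.init(n = 10^7, delay = 0.001)
```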

Now we want to display the profile data. One way is via Profile.print, which prints a textual representation of the profile to the console, but this isn’t the most efficient way to inspect the data. Typically, profile data is visualized using a flamegraph.

The Profiling docs list several packages for visualizing profiles. We will use ProfileCanvas.jl, which creates an HTML file we can view in a web browser to interactively inspect the profiling results:

julia> using ProfileCanvas

julia> ProfileCanvas.view()

This code creates and displays a flamegraph:

Profile of original workflow

One thing that stands out in this flamegraph is the three yellow rectangles. These indicate occurrences of garbage collection (GC), which implies the code allocated memory. Since these yellow blocks represent a decent portion of the run time (as indicated by the width of the rectangles), let’s investigate.

Allocation Profiling

We will use Julia’s allocation profiling tools to further inspect the allocations we know the workflow has. The process is similar to performance profiling, with the following differences:

  • We will use Profile.Allocs.@profile and pass the option sample_rate = 1.0 to record all allocations. In a larger workflow with many more allocations, a smaller sample_rate is advised (the default value is 0.1).
  • We will run the workflow just one time.
  • We will visualize the results with ProfileCanvas.view_allocs.

Here’s how to profile allocations:

julia> using Profile, ProfileCanvas

julia> Profile.Allocs.clear()

julia> workflow_original();

julia> Profile.Allocs.@profile sample_rate = 1.0 workflow_original();

julia> ProfileCanvas.view_allocs()

Allocation profile of original workflow

In this flamegraph, the width of each rectangle corresponds to how many allocations were made. In this example, each yellow block is the same width, meaning they all contributed the same number of allocations.

Now we need to determine whether we can do anything about these allocations.

Moving up from the bottom of the flamegraph lets us trace the call stack to see where these allocations came from. For example, we can see that GenericMemory was called from Array, which itself was called from Array, and so on. Eventually, we get to workflow_original.

If we hover our mouse cursor over that block, it will display the file and line number:

Mouse hover displaying file and line info

(In this case, I defined workflow_original in the first REPL prompt of a fresh Julia session, so that’s why REPL[1] shows up for the file name.)

Looking at these profile results shows us that the offending lines are:

  • x = SVector{10, Float64}(rand(10)) (in workflow_original)
  • y = zeros(N + 1) (in kernel_original)

This makes sense; in each of these lines, we explicitly create an array (via rand and zeros), which allocates memory.

But it turns out we can eliminate these allocations!

Eliminating the first allocation requires knowledge of the StaticArrays.jl package. In particular, the @SVector macro can create an SVector directly from standard array expressions (like rand) without allocating memory. So, instead of using the SVector constructor, we can write:

x = @SVector rand(10)

The second allocation, however, is a bit trickier to remove.

Reducing Allocations with @generated Functions

What makes it difficult to remove the allocation in kernel_original is that the elements of the vector y depend on previous elements of the vector. That means we have to compute one element to compute the next, which means we have to store that element somehow. If we knew exactly how long y would be, we could store the computations in local (scalar) variables. However, the function needs to work with any input size; we don’t want to create a separate method for each possible input size.
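To make the idea concrete, here is what a hand-written specialization for one particular size (N = 3) could look like. This is a sketch for illustration only; the whole point is that we don’t want to write one of these per size:

```julia
using StaticArrays: SVector

# Hand-written specialization for N = 3: intermediate results live in
# local scalar variables, so no array is ever allocated.
function kernel_3(x::SVector{3})
    y1 = x[1]
    y2 = 0.5 * y1 + x[2]
    y3 = 0.5 * y2 + x[3]
    y4 = y1 + y2 + y3        # the final "sum" element
    return SVector{4, Float64}(y1, y2, y3, y4)
end
```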

At least, not by hand.

It turns out Julia can automate the creation of these specialized methods. The trick is to use the @generated macro.

A method annotated with @generated uses type information to produce specialized implementations of the method depending on the input types. And since type information is available at compile time, this specialization occurs at compile time, leading to run time improvements.
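A minimal (hypothetical) example illustrates the mechanics: inside the body of a @generated function, each argument name is bound to the argument’s type, and the body must return an expression, which the compiler then compiles for that combination of types:

```julia
# Hypothetical example: the body below runs at compile time, where `x`
# is bound to the *type* of the argument (e.g. Int64), not its value.
@generated function type_name(x)
    return :( $(string(x)) )  # interpolate the type's name as a constant
end

type_name(1)    # the name of the integer type, e.g. "Int64"
type_name(1.0)  # "Float64"
```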

Using a @generated function to replace kernel_original will allow us to, essentially, move the run time allocation to compile time.

Here’s what the new function looks like:

@generated function kernel_generated(x::SVector{N}) where N
    assignments = [:(y1 = x[1])]
    for i = 2:N
        yprev = Symbol(:y, i - 1)
        yi = Symbol(:y, i)
        push!(assignments, :($yi = 0.5 * $yprev + x[$i]))
    end
    yend = Symbol(:y, N + 1)
    sum_expr = reduce((a, b) -> :($a + $b), (Symbol(:y, i) for i = 1:N))
    push!(assignments, :($yend = $sum_expr))
    vars = ntuple(i -> Symbol(:y, i), N + 1)
    return quote
        $(assignments...)
        return SVector{$(N + 1), Float64}($(vars...))
    end
end

The first thing you might notice, especially if you are unfamiliar with metaprogramming, is that this function looks quite different from kernel_original. So, let’s unpack this a bit:

  • A @generated function needs to return an expression. The compiler will then compile the code resulting from the expression. Finally, at run time, the compiled code will be used, not the code used to generate the compiled expression. In other words, this function must return the specialized code itself, not the result of a run time computation.
  • The returned expression is created with a quote block. This quote block interpolates (using $) expressions built up earlier in the function. The computations to build up these expressions occur only at compile time.

If we want to inspect what the generated function looks like, we need to refactor the code just a bit. Essentially, we’ll create a regular Julia function that returns an expression, and the @generated function will just call that function:

function _gen_kernel_generated(::Type{<:SVector{N}}) where N
    # Same code as `kernel_generated` above.
end

@generated kernel_generated(x::SVector) = _gen_kernel_generated(x)

Then we can call _gen_kernel_generated directly to see what code actually runs:

julia> _gen_kernel_generated(typeof(@SVector rand(1)))
quote
    #= REPL[2]:18 =#
    y1 = x[1]
    y2 = y1
    #= REPL[2]:19 =#
    return SVector{2, Float64}(y1, y2)
end

julia> _gen_kernel_generated(typeof(@SVector rand(2)))
quote
    #= REPL[2]:18 =#
    y1 = x[1]
    y2 = 0.5y1 + x[2]
    y3 = y1 + y2
    #= REPL[2]:19 =#
    return SVector{3, Float64}(y1, y2, y3)
end

julia> _gen_kernel_generated(typeof(@SVector rand(3)))
quote
    #= REPL[2]:18 =#
    y1 = x[1]
    y2 = 0.5y1 + x[2]
    y3 = 0.5y2 + x[3]
    y4 = (y1 + y2) + y3
    #= REPL[2]:19 =#
    return SVector{4, Float64}(y1, y2, y3, y4)
end

Yep, the implementation looks right!

We can also see that the generated code avoids creating an array to store intermediate results, instead storing computations in local variables. But we didn’t have to write any of those methods ourselves! Using @generated allows us to maintain just one function to generate all these specialized implementations.

Let’s benchmark the new implementation:

julia> @btime workflow_generated();
  1.093 μs (0 allocations: 0 bytes)

Nice, six times faster and no allocations!

And here’s the profile:

Profile of optimized workflow

(Click here to see the code for profiling.)

using StaticArrays: @SVector, SVector

@generated function kernel_generated(x::SVector{N}) where N
    assignments = [:(y1 = x[1])]
    for i = 2:N
        yprev = Symbol(:y, i - 1)
        yi = Symbol(:y, i)
        push!(assignments, :($yi = 0.5 * $yprev + x[$i]))
    end
    yend = Symbol(:y, N + 1)
    sum_expr = reduce((a, b) -> :($a + $b), (Symbol(:y, i) for i = 1:N))
    push!(assignments, :($yend = $sum_expr))
    vars = ntuple(i -> Symbol(:y, i), N + 1)
    return quote
        $(assignments...)
        return SVector{$(N + 1), Float64}($(vars...))
    end
end

function workflow_generated()
    total = 0.0
    for i = 1:100
        x = @SVector rand(10)
        y = kernel_generated(x)
        total += y[end]
    end
    return total
end

using BenchmarkTools
@btime workflow_generated();

using Profile, ProfileCanvas
Profile.clear()
Profile.@profile for _ in 1:200_000 workflow_generated() end
ProfileCanvas.view()

Note, there’s no need for allocation profiling because there are no allocations.

There aren’t obvious places for improvement, so I’d say we did a pretty good job optimizing the code.

Summary

In this post, we saw how to use the Profile standard library to profile the run time and the allocations of a piece of code. We also illustrated how a @generated function can eliminate run time allocations and speed up the code. These were key ideas we used to help one of our clients speed up their simulations.

Do you need help pinpointing performance bottlenecks or tracking down allocations in your code? Contact us, and we can help you out!

Additional Links


Optimize Flight Simulations with Improved Type-Stability

By: Great Lakes Consulting

Re-posted from: https://blog.glcs.io/sim-performance-type-stability

This post was written by Steven Whitaker.

In this Julia for Devs post, we will discuss improving the performance of simulation code written in Julia by eliminating sources of type-instability.

We wrote another post detailing what type-stability is and how type-instabilities can degrade performance. We also showed how SnoopCompile.jl and Cthulhu.jl can be used to pinpoint causes of type-instability.

This post will cover some of the type-instabilities we helped one of our clients overcome.

Our client is a technology innovator. They are building a first-of-its-kind logistics system focused on autonomous electric delivery to reduce traffic and air pollution. Their aim is to provide efficient delivery services for life-saving products to people in both urban and rural areas. Julia is helping them power critical Guidance, Navigation, and Control (GNC) systems.

With this client, we:

  • Eliminated slowdown-related failures on the most important simulation scenario.
  • Decreased the compilation time of the scenario by 30%.
  • Improved the slowest time steps from 300 ms to 10 ms (a 30x speedup), enabling 2x real-time performance.

We will not share client-specific code, but we will provide similar examples to illustrate root-cause issues and suggested resolutions.

Here are the root-cause issues and resolutions we will focus on:

  • Help type-inference by unrolling recursion.
  • Standardize the output of different branches.
  • Avoid loops over Tuples.
    • Use SnoopCompile.jl to reveal dynamic dispatches.
    • Investigate functions with Cthulhu.jl.
  • Avoid dictionaries that map to functions.

Let’s dive in!

Help Type-Inference by Unrolling Recursion

One of the interesting problems we saw was a part of the client’s code that SnoopCompile.jl reported was resulting in calls to inference, even though the code looked perfectly type-stable when we inspected it with Cthulhu.jl.

This code consisted of a set of functions that recursively called each other, traversing the model tree to grab data from all the submodels.

As it turns out, recursion can pose difficulties for Julia’s type-inference. Basically, if type-inference detects recursion but cannot prove it terminates (based only on the types of the inputs; remember that type-inference occurs before runtime), inference gives up, resulting in code that runs as if it were type-unstable. (See this Discourse post and the comments and links therein for more information.)

The solution we implemented was to use a @generated function to unroll the recursion at compile time, resulting in a flat implementation that could be correctly inferred.

Here’s an example that illustrates the essence of the recursive code:

# Grab all the data in the entire model tree beginning at `model`.
get_data(model::NamedTuple) =
    (; data = model.data, submodels = get_submodel_data(model.submodels))

# This generated function is necessary for type-stability.
# It calls `get_data` on each of the fields of `submodels`
# and returns a `NamedTuple` of the results.
# (This is not the generated function implemented in the solution.)
@generated function get_submodel_data(submodels::NamedTuple)
    assignments = map(fieldnames(submodels)) do field
        Expr(:kw, field, :(get_data(submodels.$field)))
    end
    return Expr(:tuple, Expr(:parameters, assignments...))
end

get_submodel_data(::@NamedTuple{}) = (;)

Note that in this example get_data calls get_submodel_data, which in turn calls get_data on the submodels.

Here’s the code for unrolling the recursion:

function _gen_get_data(T, path)
    subT = _typeof_field(T, :submodels)
    subpath = :($path.submodels)
    return quote
        (;
            data = $path.data,
            submodels = $(_gen_get_submodel_data(subT, subpath)),
        )
    end
end

# This function determines the type of `model.field`
# given just the type of `model` (so we can't just call `typeof(model.field)`).
# This function is necessary because we need to unroll the recursion
# using a generated function, which means we have to work in the type domain
# (because the generated function is generated before runtime).
function _typeof_field(::Type{NamedTuple{names, T}}, field::Symbol) where {names, T}
    i = findfirst(n -> n === field, names)
    return T.parameters[i]
end

function _gen_get_submodel_data(::Type{@NamedTuple{}}, subpath)
    return quote
        (;)
    end
end

function _gen_get_submodel_data(subT, subpath)
    assignments = map(fieldnames(subT)) do field
        T = _typeof_field(subT, field)
        path = :($subpath.$field)
        Expr(:kw, field, _gen_get_data(T, path))
    end
    return Expr(:tuple, Expr(:parameters, assignments...))
end

@generated get_data_generated(model::NamedTuple) = _gen_get_data(model, :model)

Unfortunately, this example doesn’t reproduce the issue our client had, but it does show how to use a @generated function to unroll recursion. Note that there is still recursion: _gen_get_data and _gen_get_submodel_data call each other. The key, though, is that this recursion happens before inference, which means that when get_data_generated is inferred, the recursion has already taken place, resulting in unrolled code without any recursion that might cause inference issues.

When we implemented this solution for our client, we saw the total memory utilization of their simulation decrease by ~35%. This was enough to allow them to disable garbage collection during the simulation, speeding it up to faster than real-time, the first time this simulation had ever run that fast!

Standardize Output of Different Branches

The client had different parts of their model update at different frequencies. As a result, at any particular time step only a subset of all the submodels actually needed to update. Here’s an example of what this might look like:

function get_output(model, t)
    if should_update(model, t)
        out = update_model(model, t)
    else
        out = get_previous_output(model)
    end
    return out
end

Unfortunately, update_model and get_previous_output returned values of different types, resulting in type-instability: the output type of get_output depended on the runtime result of should_update.

Furthermore, this function was called at every time point on every submodel (and every sub-submodel, etc.), so the type-instability in this function affected the whole simulation.

The issue was that update_model typically returned the minimal subset of information actually needed for the specific model, whereas get_previous_output was generic and returned a wider set of information. For example, maybe update_model would return a NamedTuple like (x = [1, 2], xdot = [0, 0]), while get_previous_output would return something like (x = [1, 2], xdot = [0, 0], p = nothing, stop_sim = false).

To fix this issue, rather than manually updating the return values of all the methods of update_model for all the submodels in the system, we created a function standardize_output that took whatever NamedTuple update_model returned and added the missing fields that get_previous_output included. Then, the only change needed in get_output was to call standardize_output:

out = update_model(model, t) |> standardize_output
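A minimal sketch of what such a standardize_output might look like; the default field values here are hypothetical, mirroring the example output above (a real implementation would also need consistent field ordering across branches):

```julia
# Hypothetical defaults matching the wider output of `get_previous_output`.
const DEFAULT_FIELDS = (p = nothing, stop_sim = false)

# `merge` keeps all fields of `out` and adds any defaults it is missing.
standardize_output(out::NamedTuple) = merge(DEFAULT_FIELDS, out)

standardize_output((x = [1, 2], xdot = [0, 0]))
# result has fields p, stop_sim, x, and xdot
```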

The result of making this change was a 30% decrease in compilation time for their simulation!

Avoid Loops over Tuples

The client stored the submodels of a parent model as a Tuple or NamedTuple. This makes sense for type-stability: because each submodel was of a unique type, storing them in this way preserved the type information when accessing the submodels. In contrast, storing the submodels as a Vector{Any} would lose the type information of the submodels.

However, type-stability problems arise when looping over Tuples of different types of objects. The problem is that the compiler needs to compile code for the body of the loop, but the body of the loop needs to be able to handle all types included in the Tuple. As a result, the compiler must resort to dynamic dispatch in the loop body (but see the note on union-splitting further below).

President waiting for receptionist to look up his office

Here’s an example of the issue:

module TupleLoop

function tupleloop(t::Tuple)
    for val in t
        do_something(val)
    end
end

do_something(val::Number) = val + 1
do_something(val::String) = val * "!"
do_something(val::Vector{T}) where {T} = isempty(val) ? zero(T) : val[1]
do_something(val::Dict{String,Int}) = get(val, "hello", 0)

end

Using SnoopCompile.jl reveals dynamic dispatches to do_something:

julia> using SnoopCompileCore

julia> tinf = @snoop_inference TupleLoop.tupleloop((1, 2.0, "hi", [10.0], Dict{String,Int}()));

julia> using SnoopCompile

julia> tinf
InferenceTimingNode: 0.019444/0.020361 on Core.Compiler.Timings.ROOT() with 4 direct children

julia> itrigs = inference_triggers(tinf); mtrigs = accumulate_by_source(Method, itrigs)
2-element Vector{SnoopCompile.TaggedTriggers{Method}}:
 eval_user_input(ast, backend::REPL.REPLBackend, mod::Module) @ REPL ~/.julia/juliaup/julia-1.11.5+0.x64.linux.gnu/share/julia/stdlib/v1.11/REPL/src/REPL.jl:247 (1 callees from 1 callers)
 tupleloop(t::Tuple) @ Main.TupleLoop /path/to/TupleLoop.jl:3 (3 callees from 1 callers)

Looking at tupleloop with Cthulhu.jl:

julia> using Cthulhu

julia> ascend(mtrigs[2].itrigs[1])
Choose a call for analysis (q to quit):
     do_something(::String)
 >     tupleloop(::Tuple{Int64, Float64, String, Vector{Float64}, Dict{String, Int64}}) at /path/to/TupleLoop.jl

tupleloop(t::Tuple) @ Main.TupleLoop /path/to/TupleLoop.jl:3
3 function tupleloop(t::Tuple{Int64, Float64, String, Vector{Float64}, Dict{String, Int64}}::Tuple)::Core.Const(nothing)
5     for val::Any in t::Tuple{Int64, Float64, String, Vector{Float64}, Dict{String, Int64}}
6         do_something(val::Any)
7     end
9 end

And we see the problem! Even though the Tuple is inferred, the loop variable val is inferred as Any, which means that calling do_something(val) must be a dynamic dispatch.

Note that in some cases Julia can perform union-splitting automatically to remove the dynamic dispatch caused by this type-instability. In this example, union-splitting occurs when the Tuple contains 4 (by default) or fewer unique types. However, it’s not a general solution.

One way to remove the dynamic dispatch without relying on union-splitting is to eliminate the loop:

do_something(t[1])
do_something(t[2])

But we can quickly see that writing this code is not at all generic; we have to hard-code the number of calls to do_something, which means the code will only work with Tuples of a particular length. Fortunately, there’s a way around this issue. We can write a @generated function to have the compiler unroll the loop for us in a generic way:

@generated function tupleloop_generated(t::Tuple)
    body = [:(do_something(t[$i])) for i in fieldnames(t)]
    return quote
        $(body...)
        return nothing
    end
end

(Note that this code would also work if we specified t::NamedTuple in the method signature.)

Due to the way @generated functions work, SnoopCompile.jl still reports some inference triggers, but note that no dynamic dispatches are reported inside tupleloop_generated itself:

julia> using SnoopCompileCore

julia> tinf = @snoop_inference TupleLoop.tupleloop_generated((1, 2.0, "hi", [10.0], Dict{String,Int}()));

julia> using SnoopCompile

julia> tinf
InferenceTimingNode: 0.022208/0.050369 on Core.Compiler.Timings.ROOT() with 5 direct children

julia> itrigs = inference_triggers(tinf); mtrigs = accumulate_by_source(Method, itrigs)
3-element Vector{SnoopCompile.TaggedTriggers{Method}}:
 (g::Core.GeneratedFunctionStub)(world::UInt64, source::LineNumberNode, args...) @ Core boot.jl:705 (1 callees from 1 callers)
 eval_user_input(ast, backend::REPL.REPLBackend, mod::Module) @ REPL ~/.julia/juliaup/julia-1.11.5+0.x64.linux.gnu/share/julia/stdlib/v1.11/REPL/src/REPL.jl:247 (1 callees from 1 callers)
 var"#s1#1"(::Any, t) @ Main.TupleLoop none:0 (3 callees from 1 callers)

And we can verify with Cthulhu.jl that there are no more dynamic dispatches in tupleloop_generated:

julia> using Cthulhu

julia> ascend(mtrigs[2].itrigs[1])
Choose a call for analysis (q to quit):
 >   tupleloop_generated(::Tuple{Int64, Float64, String, Vector{Float64}, Dict{String, Int64}})
       eval at ./boot.jl:430 => eval_user_input(::Any, ::REPL.REPLBackend, ::Module) at /cache/build/tester-amdci5-12/julialang/julia-release-1-dot-1

tupleloop_generated(t::Tuple) @ Main.TupleLoop /path/to/TupleLoop.jl:11
Variables
  #self#::Core.Const(Main.TupleLoop.tupleloop_generated)
  t::Tuple{Int64, Float64, String, Vector{Float64}, Dict{String, Int64}}

Body::Core.Const(nothing)
    @ /path/to/TupleLoop.jl:11 within `tupleloop_generated`
    @ /path/to/TupleLoop.jl:15 within `macro expansion`
1  %1  = Main.TupleLoop.do_something::Core.Const(Main.TupleLoop.do_something)
   %2  = Base.getindex(t, 1)::Int64
         (%1)(%2)
   %4  = Main.TupleLoop.do_something::Core.Const(Main.TupleLoop.do_something)
   %5  = Base.getindex(t, 2)::Float64
         (%4)(%5)
   %7  = Main.TupleLoop.do_something::Core.Const(Main.TupleLoop.do_something)
   %8  = Base.getindex(t, 3)::String
         (%7)(%8)
   %10 = Main.TupleLoop.do_something::Core.Const(Main.TupleLoop.do_something)
   %11 = Base.getindex(t, 4)::Vector{Float64}
         (%10)(%11)
   %13 = Main.TupleLoop.do_something::Core.Const(Main.TupleLoop.do_something)
   %14 = Base.getindex(t, 5)::Dict{String, Int64}
         (%13)(%14)
   @ /path/to/TupleLoop.jl:16 within `macro expansion`
       return Main.TupleLoop.nothing

Here we have to examine the so-called “Typed” code (since the source code was generated via metaprogramming), but we see that there is no loop in this code. As a result, each call to do_something is a static dispatch with a concretely inferred input. Hooray!
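As an aside, another common way to avoid the loop (without @generated) is recursive “tuple peeling”, where each call sees a shorter, concretely typed tuple. This sketch uses a trivial stand-in for the do_something methods above:

```julia
# Stand-in for the `do_something` methods from the module above.
do_something(val) = val

# Recursive "tuple peeling": each call handles the first element and
# recurses on the remainder, so every dispatch is static.
process(t::Tuple{}) = nothing
function process(t::Tuple)
    do_something(first(t))
    return process(Base.tail(t))
end

process((1, 2.0, "hi"))  # returns nothing
```

For very long tuples, inference may still give up on this recursion, which is where the @generated approach shines.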

Avoid Dictionaries that Map to Functions

The client registered functions for updating their simulation visualization via a dictionary that mapped from a String key to the appropriate update function.

Sometimes it can be convenient to have a dictionary of functions, for example:

using LinearAlgebra: norm

d = Dict{String, Function}(
    "sum" => sum,
    "norm" => norm,
    # etc.
)

x = [1.0, 2.0, 3.0]
d["sum"](x)  # Compute the sum of the elements of `x`
d["norm"](x) # Compute the norm of `x`

This allows you to write generic code that calls the appropriate intermediate function based on a key supplied by the caller.

You could use multiple dispatch to achieve similar results, but it requires a bit more thought to organize the code in a way that ensures the caller has access to the types to dispatch on.
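For instance, one hypothetical way to use multiple dispatch here is to replace string keys with singleton types, so the compiler statically knows which function runs:

```julia
# Hypothetical sketch: singleton "operation" types replace string keys,
# making each call a static dispatch with a known return type.
struct SumOp end
struct NormOp end

apply(::SumOp, x) = sum(x)
apply(::NormOp, x) = sqrt(sum(abs2, x))

x = [3.0, 4.0]
apply(SumOp(), x)   # 7.0
apply(NormOp(), x)  # 5.0
```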

As another alternative, you could also have the caller just pass in the function to call. But again, it takes a bit more effort to organize the code to make it work.

Unfortunately, using a dictionary in this way is type-unstable: Julia can’t figure out what function will be called until runtime, when the precise dictionary key is known. And since the function is unknown, the type of the result of the function is also unknown.

One partial solution is to use type annotations:

d[func_key](x)::Float64

Then at least the output of the function can be used in a type-stable way. However, this only works if all the functions in the dictionary return values of the same type given the same input.

A slightly less stringent alternative is to explicitly convert the result to a common type, but this requires conversion to be possible.

Our client updated a dictionary using the output of the registered function, so the full solution we implemented for our client was to remove the dictionary and instead have explicit branches in the code. That is, instead of

updates[key] = d[key](updates[key])

we had

if key == "k1"
    updates[key] = f1(updates[key]::OUTPUT_TYPE_F1)
elseif key == "k2"
    updates[key] = f2(updates[key]::OUTPUT_TYPE_F2)
# Additional branches as needed
end

Note that we needed the type annotations OUTPUT_TYPE_F1 and OUTPUT_TYPE_F2 because updates had an abstractly typed value type. The key that makes this solution work is recognizing that in the first branch updates[key] is the output of f1 from the previous time step in the simulation (and similarly for the other branches). Therefore, in each branch we know what the type of updates[key] is, so we can give the compiler that type information.

Also note that the previously mentioned ideas of using multiple dispatch or just passing in the functions to use don’t work in this situation without removing the updates dictionary (and refactoring the affected code).

Making the above change completely removed the type-instabilities in that part of the client’s code.

Summary

In this post, we explored a few problems related to type-stability that we helped our client resolve. We were able to diagnose issues using SnoopCompile.jl and Cthulhu.jl and make code improvements that enabled our client’s most important simulation scenario to pass tests for the first time. This was possible because our solutions enabled the scenario to run faster than real-time and reduced compilation time by 30%.

Do you have type-instabilities that plague your Julia code? Contact us, and we can help you out!

Additional Links

The cover image background of a person at the start of a race was found at freepik.com.

The cartoon about looping over Tuples was generated with AI.


Type-Stability with SnoopCompile.jl and Cthulhu.jl for High-Performance Julia

By: Great Lakes Consulting

Re-posted from: https://blog.glcs.io/type-stability

This post was written by Steven Whitaker.

The Julia programming language is a high-level language that boasts the ability to achieve C-like speeds. Julia can run fast, despite being a dynamic language, because it is compiled and has smart type-inference.

Type-inference is the process by which the Julia compiler reasons about the types of objects, enabling compilation to create efficient machine code for the types at hand.

However, Julia code can be written in a way that prevents type-inference from succeeding, specifically by writing type-unstable functions. (I’ll explain type-instability later on.) When type-inference fails, Julia has to compile generic machine code that can handle any type of input, sacrificing the C-like performance and instead running more like an interpreted language such as Python.

Fortunately, there are tools that Julia developers can use to track down what code causes type-inference to fail. Among the most powerful of these tools are SnoopCompile.jl and Cthulhu.jl. Using these tools, developers can fix type-inference failures and restore the C-like performance they were hoping to achieve.

In this post, we will learn about type-stability and how it impacts performance. Then we will see how to use SnoopCompile.jl and Cthulhu.jl to locate and resolve type-instabilities.

Type-Stability

A function is type-stable if the type of the function’s output can be concretely determined given the types of the inputs to the function, without any runtime information.

To illustrate, consider the following function methods:

f(x::Int) = "stable"
f(x::Float64) = rand(Bool) ? 1 : 2.0

In this example, if we call f(x) where x is an Int, the compiler can figure out that the output will be a String without knowing the value of x, so f(x::Int) is type-stable. In other words, it doesn’t matter whether x is 1, -1, or 176859431; the return value will always be a String if x is an Int.

On the other hand, if we call f(x) where x is a Float64, the compiler doesn’t know whether the output will be an Int or a Float64 because that depends on the result of rand(Bool), which is computed at runtime. Therefore, f(x::Float64) is type-unstable.
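One quick way to see what inference concludes is Base.return_types (mainly useful interactively):

```julia
f(x::Int) = "stable"
f(x::Float64) = rand(Bool) ? 1 : 2.0

# `Base.return_types` reports the inferred return type for the given
# input types: a concrete type means the call is type-stable.
Base.return_types(f, (Int,))      # [String] -- concrete, type-stable
Base.return_types(f, (Float64,))  # [Union{Float64, Int64}] -- not concrete
```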

Here’s a more subtle example of type-instability:

function g(x)
    if x < 0
        return 0
    else
        return x
    end
end

In this example, g(x) is type-unstable because the output will either be an Int or whatever the type of x is, and it all depends on the value of x, which isn't known at compile time. (Note, however, that g(x) is type-stable if x is an Int because then both branches of the if statement return the same type of value.)
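
A common fix for this pattern, sketched below, is to make both branches return the same type; zero(x) produces a zero of the same type as x (the g_stable name is just for illustration):

```julia
g(x) = x < 0 ? 0 : x               # branches may return different types
g_stable(x) = x < 0 ? zero(x) : x  # both branches return typeof(x)

only(Base.return_types(g, Tuple{Float64}))         # Union{Float64, Int64}
only(Base.return_types(g_stable, Tuple{Float64}))  # Float64
```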

And sometimes a function that might look type-stable can be type-unstable depending on the input. For example:

```julia
h(x::Array) = x[1] + 1
```

In this case, h([1]) is type-stable, but h(Any[1]) is not. Why? Because with h([1]), x is a Vector{Int}, so the compiler knows that the type of x[1] will be Int. On the other hand, with h(Any[1]), x is a Vector{Any}, so the compiler thinks x[1] could be of any type.
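
The same kind of inference query shows the difference; note that stability here depends on the element type of the array, not on the method definition itself:

```julia
h(x::Array) = x[1] + 1

only(Base.return_types(h, Tuple{Vector{Int}}))  # Int64: x[1] is known to be an Int
only(Base.return_types(h, Tuple{Vector{Any}}))  # Any: x[1] could be anything
```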

To reiterate: a function is type-stable if the compiler can figure out the concrete type of the return value given only the types of the inputs, without any runtime information.
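
This definition can also be checked programmatically with Test.@inferred from the Test standard library, which runs a call and throws an error if the actual return type differs from the inferred one. A quick sketch using the f methods from earlier:

```julia
using Test

f(x::Int) = "stable"
f(x::Float64) = rand(Bool) ? 1 : 2.0

@inferred f(1)  # passes: inference yields the concrete type String

# For the unstable method, inference yields Union{Float64, Int64}, which
# doesn't match the concrete runtime type, so @inferred throws.
@test_throws ErrorException @inferred f(1.0)
```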

When Compilation Occurs

Another aspect of type-inference that is useful to understand is when compilation (including type-inference) occurs.

In a static language like C, an entire program is compiled before any code runs. This is possible because the types of all variables are known in advance, so machine code specific to those types can be generated in advance.

In an interpreted language like Python, no code is ever compiled because variables are dynamic, meaning their types aren't really ever known until variables are actually used (i.e., during runtime).

Julia programs can lie pretty much anywhere between the extremes of C and Python, and where on that spectrum a program lies depends on type-stability.

In a just-in-time (JIT) compiled language like Julia, compilation occurs once types are known.

  • If a Julia program is completely type-stable, type-inference can figure out the types of all variables in the program before running any code. As a result, the entire program can be compiled as if it were written in a static language. This is what allows Julia to achieve C-like speeds.
  • If a Julia program is entirely type-unstable, every function has to be compiled individually. In this case, compilation occurs at the moment the function is called because that's when the runtime information of all the input types is finally known. Furthermore, the machine code for a type-unstable function cannot be efficient because it must be able to handle a wide range of potential types. As a result, despite being compiled, the code runs essentially like an interpreted language.

Running a Julia program with type-instabilities is like driving down the street and hitting all the red lights. Julia will compile all the code for which type-inference succeeds and then start running. But when the program reaches a function call that could not be inferred, that's like a car stopping at a red light; Julia stops running the code to compile the function call now that it knows the runtime types of the inputs. After the function is compiled, the program can continue execution, like how the car can continue driving once the light turns green.

Type-Stability and Performance

As this analogy implies, and as I've stated before, type-stability has performance implications. Type-instabilities can cause various performance degradations, including:

  • Dynamic (aka runtime) dispatch. If the compiler knows the input types to a function, the generated machine code can include a call to the specific method determined by those types. But if the compiler doesn't know those types, the machine code has to include instructions to perform dynamic dispatch. As a result, rather than jumping directly to the correct method, Julia has to spend runtime CPU cycles to look up the correct method to call.
  • Increased memory allocations. If the compiler doesn't know what type a variable will have, it's impossible to put it in a register or even allocate stack space for it. As a result, it has to be heap-allocated and managed by the garbage collector.
  • Suboptimal compiled code. Imagine summing the contents of an array in a loop. If the compiler knows the array contains just Float64s, it can perform optimizations to compute the sum as efficiently as possible, e.g., by using specialized CPU instructions. Such optimizations cannot occur if the compiler doesn't know what type of data it's working with.

Here's an example (inspired by this Stack Overflow answer) that illustrates the impact type-stability can have on performance:

```julia
# Type-unstable because `x` is a non-constant global variable.
x = 0
f() = [i + x for i = 1:10^6]

# Type-stable because `y` is constant and therefore always an `Int`.
const y = 0
g() = [i + y for i = 1:10^6]

using BenchmarkTools
@btime f() #  16.868 ms (1998983 allocations: 38.13 MiB)
@btime g() # 190.755 μs (3 allocations: 7.63 MiB)
```

Note that the type-unstable version is two orders of magnitude slower! Also note, however, that this is an extreme example where essentially the entire computation is type-unstable. In practice, some type-instabilities will not impact performance very much. Type-stability mainly matters in "hot loops", i.e., in parts of the code that run very frequently and contribute to a significant portion of the program's overall run time.

Detecting Type-Instabilities with SnoopCompile.jl

Now the question is, how do we know if or where our code is type-unstable? One excellent tool for discovering where type-instabilities occur in code is SnoopCompile.jl. This package provides functionality for reporting how many times a Julia program needs to stop to compile code. (Remember that a perfectly type-stable program can compile everything in one go, so every time execution stops for compilation indicates a type-instability was encountered.)

Let's use an example to illustrate how to use SnoopCompile.jl. First, the code we want to analyze:

```julia
module Original

struct Alg1 end
struct Alg2 end

function process(alg::String)
    if alg == "alg1"
        a = Alg1()
    elseif alg == "alg2"
        a = Alg2()
    end
    data = get_data(a)
    result = _process(a, data)
    return result
end

get_data(::Alg1) = (1, 1.0, 0x00, 1.0f0, "hi", [0.0], (1, 2.0))

function _process(::Alg1, data)
    val = data[1]
    if val < 0
        val = -val
    end
    result = map(data) do d
        process_item(d, val)
    end
    return result
end

process_item(d::Int, val) = d + val
process_item(d::AbstractFloat, val) = d * val
process_item(d::Unsigned, val) = d - val
process_item(d::String, val) = d * string(val)
process_item(d::Array, val) = d .+ val
process_item(d::Tuple, val) = d .- val

get_data(::Alg2) = rand(5)
_process(::Alg2, data) = error("not implemented")

end
```

We'll use the @snoop_inference macro to analyze this code. Note that this macro should be used in a fresh Julia session (after loading the code to be analyzed, but before running anything) to get the most accurate analysis results.

```julia
julia> using SnoopCompileCore

julia> tinf = @snoop_inference Original.process("alg1");

julia> using SnoopCompile

julia> tinf
InferenceTimingNode: 0.144601/0.247183 on Core.Compiler.Timings.ROOT() with 8 direct children
```

You can consult the SnoopCompile.jl docs for more information about what we just did, but for now, notice that displaying tinf revealed 8 direct children. That means compilation occurred 8 times while running Original.process("alg1"). If this function were completely type-stable, @snoop_inference would have reported just 1 direct child, so we know there are type-instabilities somewhere.

Each of the 8 direct children is an inference trigger, i.e., calling the specific method indicated in the inference trigger caused compilation to occur. We can collect the inference triggers:

```julia
julia> itrigs = inference_triggers(tinf)
 Inference triggered to call process(::String) from eval (./boot.jl:430) inlined into REPL.eval_user_input(::Any, ::REPL.REPLBackend, ::Module) (/cache/build/tester-amdci5-12/julialang/julia-release-1-dot-11/usr/share/julia/stdlib/v1.11/REPL/src/REPL.jl:261)
 Inference triggered to call process_item(::Int64, ::Int64) from #1 (./REPL[1]:30) with specialization (::var"#1#2")(::Int64)
 Inference triggered to call process_item(::Float64, ::Int64) from #1 (./REPL[1]:30) with specialization (::var"#1#2")(::Float64)
 Inference triggered to call process_item(::UInt8, ::Int64) from #1 (./REPL[1]:30) with specialization (::var"#1#2")(::UInt8)
 Inference triggered to call process_item(::Float32, ::Int64) from #1 (./REPL[1]:30) with specialization (::var"#1#2")(::Float32)
 Inference triggered to call process_item(::String, ::Int64) from #1 (./REPL[1]:30) with specialization (::var"#1#2")(::String)
 Inference triggered to call process_item(::Vector{Float64}, ::Int64) from #1 (./REPL[1]:30) with specialization (::var"#1#2")(::Vector{Float64})
 Inference triggered to call process_item(::Tuple{Int64, Float64}, ::Int64) from #1 (./REPL[1]:30) with specialization (::var"#1#2")(::Tuple{Int64, Float64})
```

The first inference trigger corresponds to compiling the top-level process function we called (this is the inference trigger we always expect to see). But then it looks like Julia had to stop running to compile several different methods of process_item.

Inference triggers tell us that type-instabilities existed when calling the given functions, but what we really want to know is where these type-instabilities originated. You'll note that each displayed inference trigger above also indicates the calling function by specifying from <calling function>. (Note that the from #1 in the above example indicates process_item was called from an anonymous function.)

We can use accumulate_by_source to get an aggregated view of what functions made calls via dynamic dispatch:

```julia
julia> mtrigs = accumulate_by_source(Method, itrigs)
2-element Vector{SnoopCompile.TaggedTriggers{Method}}:
 eval_user_input(ast, backend::REPL.REPLBackend, mod::Module) @ REPL ~/.julia/juliaup/julia-1.11.5+0.x64.linux.gnu/share/julia/stdlib/v1.11/REPL/src/REPL.jl:247 (1 callees from 1 callers)
 (::var"#1#2")(d) @ Main REPL[1]:30 (7 callees from 7 callers)
```

From this, we can see that the example code really has only one problematic function: the anonymous function var"#1#2".

Diving in with Cthulhu.jl

Now that we have a rough idea of where the type-instabilities come from, we can drill down into the code and pinpoint the precise causes with Cthulhu.jl. We can use the ascend function on an inference trigger to start investigating:

```julia
julia> using Cthulhu

julia> ascend(itrigs[2]) # Skip `itrigs[1]` because that's the top-level compilation that should always occur.
```

ascend provides a menu that shows process_item and the anonymous function. Select the anonymous function and press Enter. Here's a screenshot of the Cthulhu output:

Cthulhu output

Reading the output of Cthulhu.jl takes some time to get used to (especially when it can't display source code, as in this example), but the main thing to remember is that red is bad. See the Cthulhu.jl README for more information.

In this example, the source of the type-instability was fairly easy to pinpoint. I annotated the screenshot to indicate from where the type-instability arose, which is this Core.Box thing. These are always bad; they are essentially containers that can hold values of any type, hence the type-instability that arises when accessing the contents. In this particular case, Core.getfield(#self#, :val) indicates val is a variable that was captured by the anonymous function.
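
A Core.Box typically appears when a closure captures a variable that is assigned more than once. Here is a minimal sketch of the pattern and one possible fix (the boxed/unboxed names are just for illustration):

```julia
# `val` is reassigned, so the closure captures it in a `Core.Box`,
# and accessing it inside the closure is type-unstable.
function boxed()
    val = 1
    if val < 0
        val = -val  # this reassignment forces `val` into a `Core.Box`
    end
    return map(d -> d + val, (1, 2, 3))
end

# Storing the result in a new variable that is assigned exactly once
# lets the closure capture a plain `Int` instead of a box.
function unboxed()
    val = 1
    absval = val < 0 ? -val : val
    return map(d -> d + absval, (1, 2, 3))
end
```

A let block, as used later in this post, accomplishes the same thing: the let-bound variable is assigned exactly once in the scope the closure captures.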

Once we determine what caused the type-instability, the solution varies on a case-by-case basis. Some potential solutions may include:

  • Ensure different branches of an if statement return data of the same type.
  • Add a type annotation to help out inference. For example,

    ```julia
    x = Any[1]
    y = do_something(x[1]::Int)
    ```

  • Make sure a container type has a concrete element type. For example, x = Int[], not x = [].
  • Avoid loops over heterogeneous Tuples.
  • Use let blocks to define closures. (See this section of the Julia manual for more details.)

We'll use this last solution in our example. The anonymous function in question is defined by the do block in _process. So, let's fix the issue of the captured variable val:

```julia
module Corrected

# All other code is the same as in module `Original`.

function _process(::Alg1, data)
    val = data[1]
    if val < 0
        val = -val
    end
    f = let val = val
        d -> process_item(d, val)
    end
    result = map(f, data)
    return result
end

# All other code is the same as in module `Original`.

end
```

Now let’s see what @snoop_inference says:

```julia
julia> using SnoopCompileCore

julia> tinf = @snoop_inference Corrected.process("alg1");

julia> using SnoopCompile

julia> tinf
InferenceTimingNode: 0.113669/0.183888 on Core.Compiler.Timings.ROOT() with 1 direct children
```

There's just one direct child. Hooray, type-stability!

Let’s see how performance compares:

```julia
julia> using BenchmarkTools

julia> @btime Original.process("alg1");
  220.506 ns (16 allocations: 496 bytes)

julia> @btime Corrected.process("alg1");
  51.104 ns (8 allocations: 288 bytes)
```

Awesome, the improved code is ~4 times faster!

Summary

In this post, we learned about type-stability and how type-instabilities affect compilation and runtime performance. We also walked through an example that demonstrated how to use SnoopCompile.jl and Cthulhu.jl to pinpoint the sources of type-instability in a program. Even though the example in this post was a relatively easy fix, the principles discussed apply to more complicated programs as well. And, of course, check out the documentation for SnoopCompile.jl and Cthulhu.jl for further examples to bolster your understanding.

Do you have type-instabilities that plague your Julia code? Contact us, and we can help you out!
