Tag Archives: C++

Using Julia’s C Interface to Utilize C Libraries

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/using-julias-c-interface-utilize-c-libraries/

I recently ran into the problem that Julia's GSL.jl library (a wrapper for the GNU Scientific Library) is too new, i.e. some portions didn't work as of early 2016. However, to solve my problem I needed access to adaptive Monte Carlo integration methods. This meant it was time to go in depth into Julia's C interface. I will go step by step through how I created and compiled the C code, and called it from Julia.

What I wished to use was the GSL Vegas functions. Luckily, there is a good example on this page for how to use the library. I started with this code. I then modified it a bit. To start, I changed the main function header to:

int monte_carlo_integrate (double (*integrand)(double *,size_t,void *),double* retRes,double* retErr,int mode,int dim,double* xl,double* xu,size_t calls){

Before, the function took no arguments, but I want to be able to control most things from Julia. First of all, I changed the function name from main to avoid namespace issues; this is just good practice when working with shared libraries. Then I added a bunch of arguments. The first argument is a function pointer, where the function is defined just as g is in the example. Then I pass in the pointers which will become the return values. I add mode (1, 2, and 3 select the different integration types), dim, xl, xu, and calls as passed-in values to easily extend the functionality. The rest of the C code stays mostly the same: I change the parts which reference g to integrand (I like that function name better) and comment out the print functions (they tend not to print correctly). So I end up with a C function as follows:

#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_monte.h>
#include <gsl/gsl_monte_plain.h>
#include <gsl/gsl_monte_miser.h>
#include <gsl/gsl_monte_vegas.h>
int monte_carlo_integrate (double (*integrand)(double *,size_t,void *),double* retRes,double* retErr,int mode,int dim,double* xl,double* xu,size_t calls){
  double res = 0.0, err = 0.0;
  const gsl_rng_type *T;
  gsl_rng *r;
  gsl_monte_function G = { *integrand, dim, 0 };
  gsl_rng_env_setup ();
  T = gsl_rng_default;
  r = gsl_rng_alloc (T);
  if (mode==1){
    gsl_monte_plain_state *s = gsl_monte_plain_alloc (dim);
    gsl_monte_plain_integrate (&G, xl, xu, dim, calls, r, s,
                               &res, &err);
    gsl_monte_plain_free (s);
    /* display_results ("plain", res, err); */
  }
  if (mode==2){
    gsl_monte_miser_state *s = gsl_monte_miser_alloc (dim);
    gsl_monte_miser_integrate (&G, xl, xu, dim, calls, r, s,
                               &res, &err);
    gsl_monte_miser_free (s);
    /* display_results ("miser", res, err); */
  }
  if (mode==3){
    gsl_monte_vegas_state *s = gsl_monte_vegas_alloc (dim);
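    /* Warm-up run with a fixed, small number of calls so vegas can set up its grid. */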
    gsl_monte_vegas_integrate (&G, xl, xu, dim, 10000, r, s,
                               &res, &err);
    /* display_results ("vegas warm-up", res, err); */
    /* printf ("converging...n"); */
    do
      {
        gsl_monte_vegas_integrate (&G, xl, xu, dim, calls/5, r, s,
                                   &res, &err);
        /* printf ("result = % .6f sigma = % .6f "
                "chisq/dof = %.1fn", res, err, gsl_monte_vegas_chisq (s)); */
      }
    while (fabs (gsl_monte_vegas_chisq (s) - 1.0) > 0.5);
    /* display_results ("vegas final", res, err); */
    gsl_monte_vegas_free (s);
  }
  gsl_rng_free (r);
  *retRes = res;
  *retErr = err;
  return 0;
}

Notice how nothing was specifically changed to be “Julia-style”; this is all normal C code. So how do I use it? First I need to compile it to a shared library. This means I use the -shared flag and save the output to a .so file. I will need the GSL libraries when linking, so I add the -lgsl, -lgslcblas, and -lm (math) flags; these should come after the source file so the linker can resolve the symbols. Then I add the -fPIC flag since Julia’s documentation says so. The complete command is:

gcc -Wall -fPIC -shared -o libMonte.so monteCarloExample.c -lgsl -lgslcblas -lm

This will give you a .so file. Now we need to use it from Julia. I will describe passing in the function last; here’s the setup for the other arguments. The first two are the pointers retRes and retErr where the results will be saved. In Julia, to get a pointer, I make two 1-element arrays as follows:

x = Array{Float64}(1)
y = Array{Float64}(1)

The only other peculiar thing I had to do was convert pi from Julia’s irrational type to a Float64 (Cdouble) and use that to build the bounds arrays. So for the other variables I used:

fpi = convert(Float64,pi)
xl = [0.; 0.; 0.]
xu = [fpi; fpi; fpi]
calls = 500000
mode = 3
dim = 3

Now I pass this all to C as follows:

ccall((:monte_carlo_integrate,"/path/to/library/libMonte.so"),Int32,(Ptr{Void},Ptr{Cdouble},Ptr{Cdouble},Int32,Int32,Ptr{Cdouble},Ptr{Cdouble},Csize_t),integrand_c,x,y,mode,dim,xl,xu,calls)

The first argument is a tuple where the first element is the symbol naming which function in the library to use, and the second element is the library name or the path to the library. The second argument is the return type (here it’s Int32 because our function returns int 0 when complete). The next portion is a tuple which defines the types of the arguments we are passing. The first is Ptr{Void}, which is used for things like function pointers. Next there are two Cdouble pointers for retRes and retErr. Then come two integers, two more pointers, and a Csize_t. Lastly we put in all of the variables we want to pass in.
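To make the anatomy of this call explicit, here is the same ccall spread over several lines with each part labeled (the library path is a placeholder, as before):

ccall((:monte_carlo_integrate, "/path/to/library/libMonte.so"), # (function symbol, library path)
      Int32,                       # return type: the C function returns int
      (Ptr{Void},                  # the function pointer (integrand_c)
       Ptr{Cdouble}, Ptr{Cdouble}, # retRes, retErr
       Int32, Int32,               # mode, dim
       Ptr{Cdouble}, Ptr{Cdouble}, # xl, xu
       Csize_t),                   # calls
      integrand_c, x, y, mode, dim, xl, xu, calls)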

Recall that the results were stored into the pointers passed as the second and third arguments. These are the pointers x and y. So to get the values of res and err, I de-reference them:

res=x[1]
err=y[1]

Now res and err hold the values for the solution and the error.

I am not done yet since I didn’t talk about the function! To do this, we define the function in Julia. We have to use parametric types or Julia will yell at us, and we have to ensure that the returned value is a Cdouble (in our case). So we re-write the g function in Julia as:

function integrand{T,T2,T3}(x::T,dim::T2,params::T3)
  A = 1.0 / (pi * pi * pi)
  return A / (1.0 - cos(unsafe_load(x,1))*cos(unsafe_load(x,2))*cos(unsafe_load(x,3)))::Cdouble
end

Notice that instead of x[1], we have to use the Julia function unsafe_load(x,1) to de-reference the pointer. However, since this part is in Julia, other things are much safer, like how we can use pi directly without having to convert it to a float. Also notice that we can add print statements in this function, and they will print directly in Julia. You can use this to re-implement the display_results function as a Julia function which prints. However, this function still cannot be passed into C as-is. To do that, you have to translate it into a C function pointer:
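As a sketch of that idea (the name mirrors the display_results helper from the GSL example, but this Julia version and its format are my own):

function display_results(title, res, err)
  # Print the integration result and error estimate from the Julia side.
  println("$title ==================")
  println("result = $res")
  println("sigma  = $err")
end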

integrand_c = cfunction(integrand,Cdouble,(Ptr{Cdouble},Ptr{Cdouble},Ptr{Cdouble}))

Here we used Julia’s cfunction, where the first argument is the function, the second argument is the function’s return type, and the third argument is a tuple of the C types the function will accept. If you look back at the ccall, this integrand_c is what we passed as the first argument to the actual C function.

Check to see that this all works. It worked for me. For completeness I will put the full Julia code below. Happy programming!

x = Array{Float64}(1)
y = Array{Float64}(1)
fpi = convert(Float64,pi)
xl = [0.; 0.; 0.]
xu = [fpi; fpi; fpi]
 
function integrand{T,T2,T3}(x::T,dim::T2,params::T3)
  A = 1.0 / (pi * pi * pi)
  return A / (1.0 - cos(unsafe_load(x,1))*cos(unsafe_load(x,2))*cos(unsafe_load(x,3)))::Cdouble
end
 
integrand_c = cfunction(integrand,Cdouble,(Ptr{Cdouble},Ptr{Cdouble},Ptr{Cdouble}))
 
calls = 500000
mode = 3
dim = 3
ccall((:monte_carlo_integrate,"/home/crackauc/Public/libMonte.so"),Int32,(Ptr{Void},Ptr{Cdouble},Ptr{Cdouble},Int32,Int32,Ptr{Cdouble},Ptr{Cdouble},Csize_t),integrand_c,x,y,mode,dim,xl,xu,calls)
res=x[1]
err=y[1]


#MonthOfJulia Day 25: Interfacing with Other Languages

By: Andrew Collier

Re-posted from: http://www.exegetic.biz/blog/2015/09/monthofjulia-day-25-other-languages/


Julia has native support for calling C and FORTRAN functions. There are also add-on packages which provide interfaces to C++, R and Python. We’ll have a brief look at the support for C and R here. Further details on these and the other supported languages can be found on github.

Why would you want to call other languages from within Julia? Here are a couple of reasons:

  • to access functionality which is not implemented in Julia;
  • to exploit some efficiency associated with another language.

The second reason should apply relatively seldom because, as we saw some time ago, Julia provides performance which rivals native C or FORTRAN code.

C

C functions are called via ccall(), where the name of the C function and the library it lives in are passed as a tuple in the first argument, followed by the return type of the function and the types of the function arguments, and finally the arguments themselves. It’s a bit clunky, but it works!

julia> ccall((:sqrt, "libm"), Float64, (Float64,), 64.0)
8.0

It makes sense to wrap a call like that in a native Julia function.

julia> csqrt(x) = ccall((:sqrt, "libm"), Float64, (Float64,), x);
julia> csqrt(64.0)
8.0

This function will not be vectorised by default (just try calling csqrt() on a vector!), but it’s a simple matter to produce a vectorised version using the @vectorize_1arg macro.

julia> @vectorize_1arg Real csqrt;
julia> methods(csqrt)
# 4 methods for generic function "csqrt":
csqrt{T<:Real}(::AbstractArray{T<:Real,1}) at operators.jl:359
csqrt{T<:Real}(::AbstractArray{T<:Real,2}) at operators.jl:360
csqrt{T<:Real}(::AbstractArray{T<:Real,N}) at operators.jl:362
csqrt(x) at none:6

Note that a few extra specialised methods have been introduced and now calling csqrt() on a vector works perfectly.

julia> csqrt([1, 4, 9, 16])
4-element Array{Float64,1}:
 1.0
 2.0
 3.0
 4.0

R

I’ll freely admit that I don’t dabble in C too often these days. R, on the other hand, is a daily workhorse. So being able to import R functionality into Julia is very appealing. The first thing that we need to do is load up a few packages, the most important of which is RCall. There’s great documentation for the package here.

julia> using RCall
julia> using DataArrays, DataFrames

We immediately have access to R’s built-in data sets and we can display them using rprint().

julia> rprint(:HairEyeColor)
, , Sex = Male

       Eye
Hair    Brown Blue Hazel Green
  Black    32   11    10     3
  Brown    53   50    25    15
  Red      10   10     7     7
  Blond     3   30     5     8

, , Sex = Female

       Eye
Hair    Brown Blue Hazel Green
  Black    36    9     5     2
  Brown    66   34    29    14
  Red      16    7     7     7
  Blond     4   64     5     8

We can also copy those data across from R to Julia.

julia> airquality = DataFrame(:airquality);
julia> head(airquality)
6x6 DataFrame
| Row | Ozone | Solar.R | Wind | Temp | Month | Day |
|-----|-------|---------|------|------|-------|-----|
| 1   | 41    | 190     | 7.4  | 67   | 5     | 1   |
| 2   | 36    | 118     | 8.0  | 72   | 5     | 2   |
| 3   | 12    | 149     | 12.6 | 74   | 5     | 3   |
| 4   | 18    | 313     | 11.5 | 62   | 5     | 4   |
| 5   | NA    | NA      | 14.3 | 56   | 5     | 5   |
| 6   | 28    | NA      | 14.9 | 66   | 5     | 6   |

rcopy() provides a high-level interface to function calls in R.

julia> rcopy("runif(3)")
3-element Array{Float64,1}:
 0.752226
 0.683104
 0.290194

However, for some complex objects there is no simple way to translate between R and Julia, and in these cases rcopy() fails. We can see in the case below that the object of class lm returned by lm() does not diffuse intact across the R-Julia membrane.

julia> "fit <- lm(bwt ~ ., data = MASS::birthwt)" |> rcopy
ERROR: `rcopy` has no method matching rcopy(::LangSxp)
 in rcopy at no file
 in map_to! at abstractarray.jl:1311
 in map_to! at abstractarray.jl:1320
 in map at abstractarray.jl:1331
 in rcopy at /home/colliera/.julia/v0.3/RCall/src/sexp.jl:131
 in rcopy at /home/colliera/.julia/v0.3/RCall/src/iface.jl:35
 in |> at operators.jl:178

But the call to lm() was successful and we can still look at the results.

julia> rprint(:fit)

Call:
lm(formula = bwt ~ ., data = MASS::birthwt)

Coefficients:
(Intercept)          low          age          lwt         race  
    3612.51     -1131.22        -6.25         1.05      -100.90  
      smoke          ptl           ht           ui          ftv  
    -174.12        81.34      -181.95      -336.78        -7.58 

You can use R to generate plots with either the base functionality or that provided by libraries like ggplot2 or lattice.

julia> reval("plot(1:10)");                # Will pop up a graphics window...
julia> reval("library(ggplot2)");
julia> rprint("ggplot(MASS::birthwt, aes(x = age, y = bwt)) + geom_point() + theme_classic()")
julia> reval("dev.off()")                  # ... and close the window.

There are various videos offering other perspectives on multi-language programming with Julia. Also check out the complete code for today (including examples with C++, FORTRAN and Python) on github.




Speed Expectations for Julia

By: Simon Danisch

Re-posted from: http://randomfantasies.com/2015/05/speed-expectations-for-julia/

In this blog post I want to analyse the speed of Julia a little.
Writing representative benchmarks for a programming language is a very tedious task.
The only way out is to rely on a multitude of sources and to look for analytical arguments.
Julia’s own benchmark suite will be used, in addition to two other benchmarks that I’ve found to be relevant.
In addition, the general compiler structure of Julia will be briefly analyzed to find indicators for Julia’s overall performance.
[Figure: Julia benchmark suite timings compared across languages (data from julialang)]
In this first benchmark we can see that Julia stays well within the range of C speed.
In fact, it comes second only to C, with no other language coming that close.

In addition, Julia offers a concise, high-level coding style. This unique combination of conciseness and speed is well illustrated in the following graphic:

[Figure: code size versus execution time (a different view of the data from julialang)]

This is a very promising first look at Julia, but it should be noted that these benchmarks were mainly written by the Julia core team.
So it is not guaranteed that there is no (unintentional) bias favoring Julia in these benchmarks.

There is another benchmark comparing C++, Julia and F#, created by Palladium Consulting, which should have no interest in favoring any one of the languages.
They compare the performance of C++, Julia and F# on an IBM/370 floating point to IEEE floating point conversion algorithm, as part of a blog series written by Palladium Consulting.
F# comes out last with 748.275 ms, then Julia with 483.769 ms, and finally C++ with 463.474 ms.
At the time of citation, the author had updated the C++ version to achieve 388.668 ms.
The post does not say whether additional time also went into making the other versions faster, so it cannot be concluded that they could not have been improved as well.

The last Julia benchmark is more real-world oriented.
It compares finite element solvers; the finite element method is widely used in materials research, so this represents a relevant use case for Julia.

N        Julia   FEniCS (Python + C++)   FreeFem++ (C++)
121       0.99    0.67                    0.01
2601      1.07    0.76                    0.05
10201     1.37    1.00                    0.23
40401     2.63    2.09                    1.05
123201    6.29    5.88                    4.03
251001   12.28   12.16                    9.09

(taken from codeproject.)
These are remarkable results, considering that the author states it did not take a big effort to achieve them. After all, the other libraries are established FEM solvers written in C++, which should not be easy to compete with.

This list could go on, but it is more constructive to examine Julia’s limits analytically.
Julia’s compilation model can be described as statically compiled at run-time. This means that, as long as all types can be inferred, Julia will in most cases have performance identical to C++. (See Julia’s performance tips for an idea of what needs to be done in order to achieve this.)
The biggest remaining difference in this case will be the garbage collection.
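To give a concrete flavor of what “all types can be inferred” means, here is a minimal sketch in the spirit of the performance tips (the function names are mine):

# Type-unstable: returns the Int 0 for negative input but the type of x otherwise,
# so the compiler cannot infer a single concrete return type.
pos(x) = x < 0 ? 0 : x

# Type-stable: zero(x) has the same type as x, so the return type is always
# that of x and the compiler can emit specialized, C-like code.
pos_stable(x) = x < 0 ? zero(x) : x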

Julia 0.3 has a mark-and-sweep garbage collector, while Julia 0.4 has an incremental garbage collector.
As seen in the benchmarks, it does not necessarily introduce big slowdowns.
But there are cases where garbage collection does introduce a significant slowdown.
Analyzing this further is beyond the scope of this blog post, though.
It can be said, however, that Julia’s garbage collector is very young, and only the future will show how big the actual differences will be.
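When garbage collection does become the bottleneck, a common mitigation is to avoid allocating temporaries in hot loops, for example by preallocating buffers. A generic sketch (not from the original post):

# Allocating version: xs .* 2.0 creates a fresh temporary array on every
# iteration, producing garbage for the collector to reclaim.
function scaled_sums(xs, n)
    s = 0.0
    for i in 1:n
        s += sum(xs .* 2.0)
    end
    return s
end

# Preallocated version: a single buffer is reused across iterations, so the
# loop itself allocates nothing and puts no pressure on the garbage collector.
function scaled_sums_prealloc(xs, n)
    buf = similar(xs)
    s = 0.0
    for i in 1:n
        for j in eachindex(xs)
            buf[j] = 2.0 * xs[j]
        end
        s += sum(buf)
    end
    return s
end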

Another big difference comes from the compiler technology itself.
One of LLVM’s biggest alternatives is gcc, which offers functionality similar to that of LLVM’s C/C++ front-end, Clang.

If C++ code compiled with gcc is much faster than the same code compiled with LLVM/Clang, the gcc version will also be faster than a comparable Julia program.
In order to investigate the impact of this, one last benchmark series will be analyzed.
It is a summary of a series of articles posted on Phoronix, which benchmarked gcc 4.9.2 against LLVM 3.5, and LLVM 3.6 against LLVM 3.5.

Statistic   Speedup gcc 4.9 vs LLVM 3.5   Speedup LLVM 3.6 vs 3.5
mean        0.99                           0.99
median      0.97                           1.00
maximum     1.48                           1.10
minimum     0.39                           0.88

Bigger is better for LLVM.

Sources: gcc vs LLVM; LLVM 3.5 vs LLVM 3.6

The results suggest that LLVM is well within the range of gcc, even though there can be big differences on individual tests.
These are promising results, especially if you consider that LLVM is much younger than gcc.
With big companies like Apple, Google, AMD, Nvidia and Microsoft invested in LLVM, it is to be expected that LLVM will stay competitive.

So Julia should also stay competitive as it directly profits from any developments in LLVM.

This makes Julia a relatively safe bet for the future!