Tag Archives: Python

Programmers’ preferences for package names

By: Dheepak Krishnamurthy

Re-posted from: https://blog.kdheepak.com/programmers-preferences-for-package-names/index.html

Are there trends in choosing package names in various programming ecosystems? Do package authors choose names for their packages that alliterate with the name of the programming language? Let’s venture to find out.

Can Python with Julia be faster than low-level code?

By: Abel Soares Siqueira

Re-posted from: https://blog.esciencecenter.nl/can-python-with-julia-be-faster-than-low-level-code-cd71a72fbcf4?source=rss----ab3660314556--julia

Part 3 of the series on achieving high performance with high-level code

By Abel Soares Siqueira and Faruk Diblen

Here comes a new challenger: It is Julia. Photo by Joran Quinten on Unsplash (https://unsplash.com/photos/MR9xsNWVKvo), modified by us.

Introduction

In our last post, we improved Python code using just a few lines of Julia, achieving a very interesting result without premature optimization or low-level code. However, what if we want more? In this blog post, we investigate exactly that.

It is quite common for a developer to prototype in a high-level language and, when the need for speed arises, eventually move to a low-level language. This is called the “two-language problem”, and Julia was created with the objective of solving it (read more in their blog post from 2012). Unfortunately, achieving the desired speedup is not always easy. It depends highly on the problem, and on how much previous work has gone into tackling it. Today we find out how much more we can speed up our Julia code, and how much effort it takes.

Previously

  • Patrick Bos presented the problem of reading irregular data, or non-tabular data, in this blog post.
  • He also presented his original solution to the problem using just Python with pandas, which we are calling Pure Python in our benchmarks.
  • Finally, he presented a faster strategy that consists of calling C++ from Python, which we denote C++.
  • In the previous blog post of this series, we created two strategies with Python calling Julia code. Our first strategy, Basic Julia, wasn’t that great, but our second strategy, Prealloc Julia, was considerably faster than Pure Python, though still not as fast as C++.

Remember that we have set up a GitHub repository with the complete code, as well as a Docker image for reproducibility.

For the C fans

Our first approach to speeding things up is to simulate what C++ is doing. We believe that the C++ version is faster because it can read the data directly as the desired data type, whereas in Julia we had to read the data as String and then convert it to Int. We don’t know how to read the data directly as Int in pure Julia, but we do know how to do it with C.

Using Julia’s built-in ccall function, we can directly call the C functions to open and close a file, namely fopen and fclose, and call fscanf to read and parse the file at the same time. Our updated Julia code which uses these C functions is below.
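In outline, it looks like the minimal sketch below; the function name and the exact fscanf format strings are illustrative assumptions, not necessarily the verbatim code from our repository.

    # Sketch of parsing with C's stdio through Julia's @ccall macro.
    function read_arrays_c_parsing(filename)
        indices, values = Int[], Int[]
        fp = @ccall fopen(filename::Cstring, "r"::Cstring)::Ptr{Cvoid}
        fp == C_NULL && error("could not open $filename")
        index, value, sep = Ref{Cint}(0), Ref{Cint}(0), Ref{Cchar}(0)
        # Read the row header "index#", then the comma-separated values.
        while (@ccall fscanf(fp::Ptr{Cvoid}, "%d#"::Cstring; index::Ref{Cint})::Cint) == 1
            # "%d%c" reads a value plus the character after it: a comma
            # continues the row, the newline ends it.
            while (@ccall fscanf(fp::Ptr{Cvoid}, "%d%c"::Cstring; value::Ref{Cint}, sep::Ref{Cchar})::Cint) >= 1
                push!(indices, index[])
                push!(values, value[])
                sep[] == Cchar(',') || break
            end
        end
        @ccall fclose(fp::Ptr{Cvoid})::Cint
        return indices, values
    end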

Let’s see if that helps increase the speed of our code. We include the previous strategies in our benchmark as well, and we call this new strategy Julia + C parsing.

Run time of Pure Python, C++, Basic Julia, Prealloc Julia, and Julia + C parsing strategies. (a) Time per element in the log-log scale. (b) Time per element, relative to the time of the C++ version in the log-log scale.

Our code is much more C-like now, so understanding it requires more knowledge about how C works. However, the code is way faster than our previous implementation. For files with more than 1 million elements, the Julia + C parsing strategy has a 10.38 speedup over the Pure Python strategy, on average. This is almost double the speedup we got with Prealloc Julia, which is an amazing result. For comparison, on average, C++ has a 16.37 speedup.

No C for me, thanks

Our C approach was very fast, and we would like to replicate it with pure Julia. Unfortunately, we could not find anything in Julia to perform the same type of reading as fscanf. However, after some investigation, we found an alternative.

Using the read function of Julia, we can parse the file as a stream of bytes. This way we can manually walk through the file and parse the integers. This is the code:
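(A minimal sketch is shown; it assumes non-negative integers, Unix line endings, at least one value per row, and a trailing newline at the end of the file. Names are illustrative.)

    # Sketch of byte-level parsing: build each integer digit by digit,
    # avoiding intermediate strings entirely.
    function read_arrays_optimized(filename)
        bytes = read(filename)      # the whole file as a Vector{UInt8}
        indices, values = Int[], Int[]
        index, number = 0, 0
        in_index = true             # before the '#' of the current row?
        for b in bytes
            if UInt8('0') <= b <= UInt8('9')
                number = 10 * number + (b - UInt8('0'))  # accumulate digit
            elseif b == UInt8('#')
                index = number      # the row header is complete
                number = 0
                in_index = false
            elseif b == UInt8(',') || b == UInt8('\n')
                push!(indices, index)
                push!(values, number)
                number = 0
                in_index = b == UInt8('\n')  # a newline starts a new row
            end
        end
        return indices, values
    end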

We denote this strategy Optimized Julia. This version of the code manually keeps track of the sequence of bytes related to integers, so it is much less readable. However, this version achieves an impressive speedup, surpassing the C++ version:

Run time of Pure Python, C++, Basic Julia, Prealloc Julia, Julia + C parsing, and Optimized Julia strategies. (a) Time per element in the log-log scale. (b) Time per element, relative to the time of the C++ version in the log-log scale.

It was not easy to get to this point, and the code itself is convoluted, but we managed to achieve a large speedup relative to Python using only Julia, another high-level language. The average speedup for files with over 1 million elements is 40.25, which is over 2 times faster than what we got with the C++ strategy. We remark again that the Pure Python and C++ strategies have not been optimized, and readers can let us know in the comments if they find a better strategy.

So yes, we can achieve a speedup equivalent to a low-level language using Julia.

Conclusions: We won, but at what cost?

One thing to keep in mind is that achieving these high speedups required extra effort on our part. This effort comes in different forms:

  • To write and use the C++ strategy, we had to know enough C++, as well as understand the libraries involved. If you don’t have enough C++ knowledge, the effort is high, since what needs to be done is quite different from what Python developers are used to. If you already know C++, the effort is reduced to searching for the right keywords and using the right libraries.
  • To write and use any of the Julia strategies, you need to put some effort into setting up the correct environment. Using Julia from Python is still an experimental feature, so your experience may vary.
  • To write the Basic Julia and Prealloc Julia strategies, not much previous knowledge is required. So, we can classify this as a small effort.
  • To write the Julia + C parsing and Optimized Julia strategies, we need more specialized knowledge. This is again a high-effort task if you do not already know the language.

Here’s our conclusion: achieving the highest speedups requires specialized knowledge, which in turn requires a big effort. However, if you are not familiar with either C++ or Julia, even a small investment in Julia already gets you a noticeable speedup. You can prototype quickly in Julia, get a reasonable result, and keep improving that version over time to reach C-like speedups.

Speedup gain relative to the effort of moving the code to a different language.

We hope you have enjoyed the series and that it helps you with your code in any way. Let us know what you think and what you missed. Follow us for more research software content.

Many thanks to our proofreaders and reviewers, Elena Ranguelova, Jason Maassen, Jurrian Spaaks, Patrick Bos, Rob van Nieuwpoort, and Stefan Verhoeven.



Speed up your Python code using Julia

By: Abel Soares Siqueira

Re-posted from: https://blog.esciencecenter.nl/speed-up-your-python-code-using-julia-f97a6c155630?source=rss----ab3660314556--julia

Part two of the series on achieving high performance with high-level code

By Abel Soares Siqueira and Faruk Diblen

Python holds the steering wheel, but we can make it faster with other languages. Photo by Spencer Davis on Unsplash (https://unsplash.com/photos/QUfxuCqdpH0), modified by us.

In part 1 of this series, we set up an environment so that we can run Julia code in Python. You can also check our Docker image with the complete environment if you want to follow along. We also have a GitHub repository with the complete code if you want to see the result.

Background

In the blog post 50 times faster data loading for pandas: no problem, our colleague and Senior Research Software Engineer Patrick Bos discussed improving the speed of reading non-tabular data into a DataFrame in Python. Since the data is not tabular, one must read, split, and stack it. All of that can be done with pandas in a few lines of code. However, since the data files are large, performance issues with Python and pandas become visible and prohibitive. So, instead of doing all those operations with pandas, Patrick shows a nice way of doing them with C++ and Python bindings. Well done, Patrick!

In this blog post, we will look into improving the Python code in a similar fashion. However, instead of moving to C++, a low-level language considerably harder to learn than Python, we will move the heavy lifting to Julia and compare the results.

A very short summary of Patrick’s blog post

Before anything, we recommend checking Patrick’s blog post to read more about the problem, the data, and the approach of using Python with C++. The short version is that we have a file where each row consists of an integer, followed by the character #, followed by an unknown number of comma-separated values, which we call elements. Each row can have a different number of elements, which is why we say the data is non-tabular, or irregular. An example file is below:
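(This is an illustrative file following the description above, not the exact example from the original post.)

    42#1,2,3
    43#9
    44#5,6,7,8,1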

From now on, we refer to the initial approach of solving the problem with Python and pandas as the Pure Python strategy, and to the strategy of solving the problem with Python and C++ as the C++ strategy.

We will compare the strategies using a dataset we generated. The dataset has 180 files, generated randomly, varying the number of rows, the maximum number of elements per row, and the distribution of the number of elements per row.

Adding some Julia spice to Python

The version below is our first approach to solving the problem using Julia. There are shorter alternatives, but this one is sufficiently descriptive. We start with a very basic version so it is easier to digest.
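A minimal sketch of this basic version follows; the function name is an illustrative assumption, and the concatenation pattern is the part to pay attention to.

    # Sketch of the basic version: read each row as a String, split it,
    # and grow the output arrays by concatenation.
    function read_arrays_basic(filename)
        indices, values = Int[], Int[]
        for row in eachline(filename)
            head, tail = split(row, '#')
            index = parse(Int, head)
            elements = parse.(Int, split(tail, ','))
            # Concatenating copies every previously stored element into a
            # brand-new array on each row -- the inefficiency discussed below.
            indices = vcat(indices, fill(index, length(elements)))
            values = vcat(values, elements)
        end
        return indices, values
    end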

You can test this function in Julia directly to see that it works independently of Python. After doing that, we want to call it from Python. As you should know by now, that is fairly easy to do, especially if you use the Docker image we created for part 1.

The next code snippet includes the file that we created above into Julia’s Main namespace and defines two functions in Python. The first, load_external, is used to read the arrays that were parsed by either C++ or Julia. The second Python function, read_arrays_julia_basic, is just a wrapper around the Julia function defined in the included file.
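A sketch of that glue code is below, assuming the PyJulia bridge set up in part 1; the file name and the on-disk format read by load_external are illustrative assumptions.

    import numpy as np
    from julia import Main

    # Include the Julia file from above into Julia's Main namespace.
    Main.include("read_arrays_basic.jl")

    def load_external(filename):
        # Read back the array produced by the external (C++ or Julia)
        # parser; the flat int64 binary layout is an assumption here.
        return np.fromfile(filename, dtype=np.int64)

    def read_arrays_julia_basic(filename):
        # Thin wrapper around the Julia function defined in the included file.
        return Main.read_arrays_basic(filename)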

Now we will benchmark this strategy, which we call the Basic Julia strategy, against the Pure Python and C++ strategies. We are using Python 3.10.1 and Julia 1.6.5. We run each strategy three times and take the average time. Our hardware is a Dell Precision 5530 notebook with 16 GB of RAM and an i7-8850H CPU, and we are using a Docker image based on Ubuntu Linux 21.10 to run the tests (from inside another Linux machine). You can reproduce the results by pulling the abelsiqueira/faster-python-with-julia-blogpost Docker image, downloading the dataset, and running the following command in your terminal:

$ docker run --rm --volume "$PWD/dataset:/app/dataset" --volume "$PWD/out:/app/out" abelsiqueira/faster-python-with-julia-post2

See the figure below for the results.

Run time of Pure Python, C++, and Basic Julia strategies. (a) Time per element in the log-log scale. (b) Time per element, relative to the time of the C++ strategy in the log-log scale.

A few interesting things happen in the image. First, both Pure Python and Basic Julia show a lot of variability with respect to the number of elements. We believe this happens because the code’s performance depends on the number of rows as well as on the distribution of elements per row. The code allocates a new array for each row, so even if the number of elements is small, a large number of rows makes execution slow. Remember that our dataset varies considerably in the number of rows, the maximum number of elements per row, and the distribution of elements per row. This means that some files are close in the number of elements but may perform vastly differently. Second, Basic Julia and Pure Python have different efficiency profiles. Our Julia code must move all stored elements into a new array for each new row that it reads, meaning it allocates a new array for every row.

The code for Basic Julia is simple and does what is expected, but it does not pre-allocate the memory that will be used, and that really hurts its performance. In low-level languages, that would be one of the first things to worry about. Indeed, if we look at the C++ code, we can see that it starts by figuring out the size of the output vectors and allocating them. We need to improve our Julia code at least a little bit.

Basic improvements for the Julia code

The first version of our Julia code is inefficient in a few ways, as explained above. With that in mind, our first change is to compute the number of elements a priori and allocate the output vectors up front. Here is our improved Julia code:
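(A minimal sketch; it assumes the row indices are unique, and the names are illustrative.)

    using Parsers

    function read_arrays_prealloc(filename)
        # A dictionary built from a generator mirrors the data: each row's
        # index maps to its raw comma-separated values.
        rows = Dict(
            begin
                head, tail = split(row, '#')
                Parsers.parse(Int, head) => split(tail, ',')
            end for row in eachline(filename)
        )
        # Count the total number of elements a priori ...
        n = sum(length(elements) for elements in values(rows))
        # ... and allocate both output vectors only once.
        indices = Vector{Int}(undef, n)
        vals = Vector{Int}(undef, n)
        k = 0
        for (index, elements) in rows
            for e in elements
                k += 1
                indices[k] = index
                vals[k] = Parsers.parse(Int, e)
            end
        end
        return indices, vals
    end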

Here, we use a dictionary built with a generator comprehension, which closely mirrors the structure of the data. This lets us count the number of elements while keeping the values to be stored later. We also use the package Parsers, which provides a slightly faster parser for integers. Here is the updated figure comparing the three previous strategies and the new Prealloc Julia strategy we just created:

Run time of the Pure Python, C++, Basic Julia, and Prealloc Julia strategies. (a) Time per element in the log-log scale. (b) Time per element, relative to the time of the C++ strategy in the log-log scale.

Now we have made a nice improvement. The results depend more consistently on the number of elements, as with the C++ strategy. We can also see that the trend Prealloc Julia follows has stabilized, and it appears to be the same as that of C++, which is expected, since the performance should depend linearly on the number of elements. For files with more than 1 million elements, the Prealloc Julia strategy has a 5.83 speedup over the Pure Python strategy, on average, while C++ has a 16.37 speedup, on average.

Next steps

We have achieved an amazing result today. Using only high-level languages, we were able to achieve some speedup in relation to the Pure Python strategy. We remark that we have not optimized the Python or the C++ strategies, simply using what was already available from Patrick’s blog post. Let us know in the comments if you have optimized versions of these codes to share with the community.

In the next post, we will optimize our Julia code even further. It is said that Julia’s speed sometimes rivals low-level code. Can we achieve that for our code? Let us know what you think and stay tuned for more!

Many thanks to our proofreaders and reviewers, Elena Ranguelova, Jason Maassen, Jurrian Spaaks, Patrick Bos, Rob van Nieuwpoort, and Stefan Verhoeven.

