
DifferentialEquations.jl 2.0: State of the Ecosystem

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/differentialequations-jl-2-0-state-ecosystem/

In this blog post I want to summarize what we have accomplished with DifferentialEquations’ 2.0 release and detail where we are going next. I want to put the design changes and development work into a larger context so that everyone can better understand what has been achieved, and how we are planning to tackle our next challenges.

If you find this project interesting and would like to support our work, please star our Github repository. Thanks!

Now let’s get started.

DifferentialEquations.jl 1.0: The Core

Before we start talking about 2.0, let’s first understand what 1.0 was all about. DifferentialEquations.jl 1.0 was about answering a single question: how can we put the wide array of differential equations into one simple and efficient interface? The result was the common interface explained in the first blog post. Essentially, we created one interface that could:

  1. Specify a differential equation problem
  2. Solve a differential equation problem
  3. Analyze a differential equation problem

The problem types, solve command, and solution interface were all introduced here as part of the unification of differential equations. Here, most of the work was on developing the core. DifferentialEquations.jl 1.0 was about having the core methods for solving ordinary differential equations, stochastic differential equations, and differential algebraic equations. There were some nice benchmarks to show that our core native solvers were on the right track, even besting well-known Fortran methods in terms of efficiency, but the key of 1.0 was the establishment of this high level unified interface and the core libraries for solving the problems.

DifferentialEquations.jl 2.0: Extended Capabilities

DifferentialEquations.jl 2.0 asked a unique question for differential equations libraries. Namely, “how flexible can a differential equations solver be?” This was motivated by an off-putting remark where someone noted that standard differential equations solvers were limited in their usefulness because many of the higher level analyses that people need to do cannot be done with a standard differential equations solver.

So okay, then we won’t be a standard differential equations solver. But what do we need to do to make all of this possible? I gathered a list of things which were considered impossible to do with “blackbox” differential equations solvers. People want to model continuous equations for protein concentrations inside of each cell, but allow the number of cells (and thus the number of differential equations) to change stochastically over time. People want to model multiscale phenomena, and have discontinuities. Some “differential equations” may only be discontinuous changes of discrete values (like in Gillespie models). People want to solve equations with colored noise, and re-use the same noise process in other calculations. People want to solve the same ODE efficiently hundreds of times, and estimate parameters. People want to quantify the uncertainty and the sensitivity of their model. People want their solutions to conserve properties like energy.

People want to make simulations of reality more so than solve equations.

And this became the goal for DifferentialEquations.jl 2.0. But the sights were actually set a little higher. The underlying question was:

How do you design a differential equations suite such that it can have this “simulation engine” functionality, but also such that adding new methods automatically makes the method compatible with all of these features?

That is DifferentialEquations.jl 2.0. The previous DifferentialEquations.jl ecosystem blog post details the strategies we planned to employ to achieve this goal, but let me take a little bit of time to explain the solution that eventually resulted.

The Integrator Interface

The core of the solution is the integrator interface. Instead of just having an interface on the high-level solve command, the integrator interface is the interface on the core type. Everything inside of the OrdinaryDiffEq.jl, StochasticDiffEq.jl, and DelayDiffEq.jl packages (which will be referred to as the *DiffEq solvers) is actually just a function on the integrator type. This means that anything the solver can do, you can do by simply having access to the integrator type. Then, everything can be unified by documenting this interface.

This is a powerful idea. It makes development easy, since the devdocs just explain what is done internally to the integrator. Adding a new differential equations algorithm is now simply a matter of adding a new perform_step dispatch. But this isn’t just useful for development, it is useful for users too. Using the integrator, you can step one step at a time if you want, and do anything you want between steps. Resizing the differential equation is now just a function on the integrator type, since this type holds all of the cache variables. Adding discontinuities is just changing integrator.u.
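
To make this concrete, here is a minimal sketch of stepping an ODE manually (a hedged example: it follows the documented integrator interface, with this era’s f(t,u) signature):

using OrdinaryDiffEq
f(t,u) = 1.01u                       # classic linear test equation
prob = ODEProblem(f, 0.5, (0.0, 1.0))
integrator = init(prob, Tsit5())     # set up the solver without running it
for i in 1:5
  step!(integrator)                  # take a single step
  # anything can happen here: inspect or mutate integrator.u, resize, etc.
  println((integrator.t, integrator.u))
end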

But the key that makes this all work is Julia. In my dark past, I wrote some algorithms which used R’s S3 objects, and I used objects in numerical Python codes. Needless to say, these got in the way of performance. However, the process of implementing the integrator type was a full refactor from straight loops to the type format. The result was no performance loss (actually, there was a very slight performance gain!). The abstraction that I wanted to use did not have a performance tradeoff because Julia’s type system optimized its usage. I find that fact incredible.

But back to the main story, the event handling framework was re-built in order to take full advantage of the integrator interface, allowing the user to directly affect the integrator. This means that doubling the size of your differential equation the moment some value hits 1 is now a possibility. It also means you can cause your integration to terminate when “all of the bunnies” die. But this became useful enough that you might not want to use it just for traditional event handling (i.e. cause some effect when some function hits zero, which we call the ContinuousCallback); you may want to apply some effect after steps. The DiscreteCallback allows one to check a boolean function for true/false, and if true apply some function to the integrator. For example, we can use this to always apply a projection to a manifold at the end of each step, effectively preserving the order of the integration while also conserving model properties like energy or angular momentum.
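
As a small hedged sketch (the condition/affect! argument order follows this era’s callback documentation), here is a ContinuousCallback which terminates the integration when the population hits zero:

using OrdinaryDiffEq
condition(t,u,integrator) = u[1]              # event when this crosses zero
affect!(integrator) = terminate!(integrator)  # “all of the bunnies” died
cb = ContinuousCallback(condition, affect!)
f(t,u,du) = (du[1] = -1.0)                    # linearly declining population
prob = ODEProblem(f, [1.0], (0.0, 10.0))
sol = solve(prob, Tsit5(), callback=cb)       # stops near t = 1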

The integrator interface, and thus its usage in callbacks, then became a way for users to add arbitrary functionality. It’s useful enough that a DiscreteProblem (an ODE problem with no ODE!) is now a thing. All a discrete problem does is walk through the ODE solver’s time loop without solving a differential equation, just hitting callbacks.

But entirely new sets of equations could be added through callbacks. For example, discrete stochastic equations (or Gillespie simulations) are models where rate equations determine the time to the next discontinuity or “jump”. The JumpProblem types simply add callbacks to a differential (or discrete) equation that perform these jumps at specific rates. This effectively turns the “blank ODE solver” into an equation which can solve these models of discrete proteins stochastically changing their levels over time. In addition, since it’s built directly onto the differential equations solvers, mixing these types of models is an instant side effect. These models which mix jumps and differential equations, such as jump diffusions, were an immediate consequence of this design.

The design of the integrator interface meant that dynamic behavior of the differential equation (changing the size, the solver options, or any other property in the middle of solving) was successfully implemented, and handling of equations with discontinuities directly followed. This turned a lot of “not differential equations” into “models and simulations which can be handled by the same DifferentialEquations.jl interface”.

Generic algorithms over abstract types

However, the next big problem was being able to represent a wider array of models. “Models and simulations which do weird non-differential equation things over time” are handled by the integrator interface, but “weird things which aren’t just a system of equations which do weird non-differential equation things over time” were still out of reach.

The solution here is abstract typing. The *DiffEq solvers accept two basic formats. Let’s stick to ODEs for the explanation. For ODEs, there is the out-of-place format

du = f(t,u)

where the derivative/change is returned by the function, and there is the in-place format

f(t,u,du)

where the function modifies the object du which stores the derivative/change. Both of these formats were generalized to the extreme. In the end, the requirements for a type to work in the out-of-place format can be described as the ability to do basic arithmetic (+,-,/,*), and you add the requirement of having a linear index (or simply having a broadcast! function defined) in order to satisfy the in-place format. If the method is using adaptivity, the user can pass an appropriate norm function to be used for calculating the norm of the error estimate.
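
Concretely, the same scalar ODE in both formats (using this era’s signatures):

f_oop(t,u) = 1.01u                   # out-of-place: returns the derivative
f_iip(t,u,du) = (du .= 1.01.*u)      # in-place: mutates du (u an array)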

This means that wild things work in the ODE solvers. I have already demonstrated arbitrary precision numbers, and unit-checked arithmetic.

But now it gets even crazier. Different parts of your equation can have different units using the ArrayPartition. You can store and update discrete values along with your differential equation using the DEDataArray type. Just the other day I showed this can be used to solve problems where the variable is actually a symbolic mathematical expression. We are in the late stages of getting a type which represents a spectral discretization of a function compatible with the *DiffEq solvers.
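
As a hedged sketch of the ArrayPartition idea (mirroring the units examples from the docs; fixed-dt Euler is used so no units-aware error norm is needed):

using OrdinaryDiffEq, RecursiveArrayTools, Unitful
u0 = ArrayPartition([0.0u"m"], [10.0u"m/s"])  # position and velocity parts
g = 9.8u"m/s^2"
f(t,u) = ArrayPartition([u.x[2][1]], [-g])    # dx/dt = v, dv/dt = -g
prob = ODEProblem(f, u0, (0.0u"s", 1.0u"s"))
sol = solve(prob, Euler(), dt=0.01u"s")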

But what about those “purely scientific non-differential equations” applications? A multiscale model of an embryo which has tissues, each with different populations of cells, and modeling the proteins in each cell? That’s just a standard application of the AbstractMultiScaleArray.

Thus using the abstract typing, even simulations which don’t look like systems of equations can now be directly handled by DifferentialEquations.jl. But not only that, since this is done simply via Julia’s generic programming, this compatibility is true for any of the new methods which are added (one caveat: if they use an external library like ForwardDiff.jl, their compatibility is limited by the compatibility of that external library).

Refinement of the problem types

The last two big ideas made it possible for a very large set of problems to be written down as a “differential equation on an array” in a much expanded sense of the term. However, there was another design problem to solve: not every algorithm could be implemented with “the information” we had! By “information” I mean the information we could get from the user. The ODEProblem type specified an ODE as

 \frac{du}{dt} = f(t,u)

but some algorithms do special things. For example, for the ODE

 \frac{du}{dt} = f(t,u) = Au + g(t,u)

the Lawson-Euler algorithm for solving the differential equation is

 u_{n+1} = \exp(A \Delta t)(u_n + g(t,u_n)\Delta t)

This method exploits the fact that it knows the first part of the equation is Au for some matrix A, and uses it directly to improve the stability of the algorithm. However, if all we know is f, we could never implement this algorithm. This would violate our goal of “full flexibility at full performance” if this algorithm were the most efficient for the problem!
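
For intuition, a single Lawson-Euler step is a one-liner once A is known (a hypothetical helper for illustration, using this era’s expm):

# One step of Lawson-Euler for du/dt = A*u + g(t,u)
lawson_euler_step(u, t, dt, A, g) = expm(A*dt)*(u + g(t,u)*dt)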

The solution is to have a more refined set of problem types. I discussed at the end of the previous blog post how we could define things like splitting problems. The solution is quite general, where

 M \frac{du}{dt} = f_1(t,u) + f_2(t,u) + ... + f_n(t,u)

can be defined using the SplitODEProblem (M being a mass matrix). Then specific methods can request specific forms, like here the linear-nonlinear ODE. Together, the ODE solver can implement this algorithm for the ODE, and that implementation, being part of a *DiffEq solver, will have interpolations, the integrator interface, event handling, abstract type compatibility, etc. all for free. Check out the other “refined problem types”: these are capable of covering wild things like staggered grid PDE methods and symplectic integrators.

In addition to specifying the same equations in new ways, we created avenues for common analyses of differential equations which are not related to simulating them over time. For example, one common problem is to find steady states: points where the differential equation satisfies f(u)=0. This can now easily be done by defining a SteadyStateProblem from an ODE and then using the steady state solver. This new library will also lead to the implementation of accelerated methods for finding steady states, and to the development of new ones. The steady state behavior can now also be analyzed using the bifurcation analysis tools provided by the wrapper to PyDSTool.
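
A hedged sketch of that workflow (SteadyStateProblem and the SSRootfind solver live in the SteadyStateDiffEq.jl package of this era):

using SteadyStateDiffEq
f(t,u,du) = (du[1] = 2 - 2u[1]; du[2] = u[1] - 4u[2])
prob = SteadyStateProblem(f, [0.0, 0.0])
sol = solve(prob, SSRootfind())   # ≈ [1.0, 0.25], where f(u) = 0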

Lastly, the problem types themselves have become much more expressive. In addition to solving the standard ODE, one can specify mass matrices in any appropriate DE type, to instead solve the equation

 M \frac{du}{dt} = f(t,u)

where M is some linear operator (similarly in DDEs and SDEs). While the vast majority of solvers are not able to use M right now, this infrastructure is there for algorithms to support it. In addition, one can now specify the noise process used in random and stochastic equations, allowing the user to solve problems with colored noise. Using the optional fields, a user can define non-diagonal noise problems, and specify sparse noise problems using sparse matrices.

As of now, only some very basic methods using all of this infrastructure have been made for the most extreme examples for testing purposes, but these show that the infrastructure works and this is ready for implementing new methods.

Common solve extensions

Okay, so once we can specify efficient methods for weird models which evolve over time in weird ways, we can simulate them and see what solutions look like. Great! We have a tool that can be used to get solutions! But… that’s only the beginning of most analyses!

Most of the time, we are simulating solutions to learn more about the model. If we are modeling chemical reactions, what is the reaction rate that makes the model match the data? How sensitive is our climate model to our choice of the albedo coefficient?

To back out information about the model, we rely on analysis algorithms like parameter estimation and sensitivity analysis. However, the common solve interface acts as the perfect level for implementing these algorithms because they can be done problem- and algorithm-agnostic. I discuss this in more detail in a previous blog post, but the general idea is that most of these algorithms can be written with a term y(t) which is the solution of a differential equation. Thus we can write the analysis algorithms at a very high level and allow the user to pass in the arguments for a solve command, using that to generate the y(t). The result is an implementation of the analysis algorithm which works with any of the problems and methods which use the common interface. Again, chaining all of the work together gets one more complete product. You can see this in full force by looking at the parameter estimation docs.
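
Here is a hedged sketch of that workflow, closely following the DiffEqParamEstim.jl docs of this era (the name LotkaEstExample and the noise level are just for illustration):

using OrdinaryDiffEq, ParameterizedFunctions, DiffEqParamEstim, Optim
using RecursiveArrayTools
f = @ode_def LotkaEstExample begin
  dx = a*x - x*y
  dy = -3y + x*y
end a=>1.5
prob = ODEProblem(f, [1.0;1.0], (0.0,10.0))
sol = solve(prob, Tsit5())
t = collect(linspace(0, 10, 200))
randomized = VectorOfArray([sol(t[i]) + 0.01randn(2) for i in 1:length(t)])
data = convert(Array, randomized)            # noisy “measurements”
cost = build_loss_objective(prob, Tsit5(), L2Loss(t, data))
result = Optim.optimize(cost, 0.0, 10.0)     # recovers a ≈ 1.5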

Modeling Tools

In many cases one is solving differential equations not for their own sake, but to answer scientific questions. To this end, we created a framework for modeling packages which makes this easier. The financial models make it easy to specify common financial equations, and the biological models make it easy to specify chemical reaction networks. This functionality all works on the common solver / integrator interface, meaning that models specified in these forms can be used with the full stack and analysis tools. Also, I would like to highlight BioEnergeticFoodWebs.jl as a great modeling package for bio-energetic food web models.

Over time, we hope to continue to grow these modeling tools. The financial tools I hope to link with Julia Computing’s JuliaFin tools (Miletus.jl) in order to make it easy to efficiently solve the SDE and PDE models which result from their financial DSL. In addition, DiffEqPhysics.jl is planned to make it easy to specify the equations of motion just by giving a Hamiltonian or Lagrangian, or by giving the particles + masses, and automatically developing a differential equation. I hope that we can also tackle domains like mechanical systems and pharmacokinetics/pharmacodynamics to continually expand what can easily be solved using this infrastructure.

DifferentialEquations 2.0 Conclusion

In the end, DifferentialEquations 2.0 was about finding the right infrastructure such that pretty much anything CAN be specified and solved efficiently. While there were some bumps along the road (that caused breaking API changes), I believe we came up with a very good solution. The result is a foundation which feeds back onto itself, allowing projects like parameter estimation of multiscale models which change size due to events to be standard uses of the ODE solver.

And one of the key things to note is that this follows by design. None of the algorithms were specifically written to make this work. The design of the *DiffEq packages gives interpolation, event handling, compatibility with analysis tools, etc. for free for any algorithm that is implemented in it. One contributor, @ranocha, came to chat in the chatroom and a few hours later had implemented 3 strong stability preserving Runge-Kutta methods (methods which are efficient for hyperbolic PDEs) in the *DiffEq solvers. All of this extra compatibility followed for free, making it a simple exercise. And that leads me to DifferentialEquations 3.0.

DifferentialEquations 3.0: Stiff solvers, parallel solvers, PDEs, and improved analysis tools

1.0 was about building the core. 2.0 was about making sure that the core packages were built in a way that could be compatible with a wide array of problems, algorithms, and analysis tools. However, in many cases, only the simplest of each type of algorithm was implemented, since this was more about building out the capabilities than about having completeness in each aspect. But now that we have expanded our capabilities, we need to fill in the details. These details are efficient algorithms in the common problem domains.

Stiff solvers

Let’s start by talking about stiff solvers. As of right now, we have all of the standard solvers (CVODE, LSODA, radau) wrapped in the packages Sundials.jl, LSODA.jl, and ODEInterface.jl respectively. These can all be used in the DifferentialEquations.jl common interface, meaning that it’s mostly abstracted away from the user that these aren’t actually Julia codes. However, these lower level implementations will never be able to reach the full flexibility of the native Julia solvers, simply because they are restricted in the types they use and don’t fully expose their internals. This is fine, since our benchmarks against the standard Runge-Kutta implementations (dopri5, dop853) showed that the native Julia solvers, being more modern implementations, can actually have performance gains over these older methods. But we need to get our own implementations of these high order stiff solvers.

Currently there exists the Rosenbrock23 method. This method is similar to the MATLAB ode23s method (it is the order 2/3 Shampine-Rosenbrock method). This method is A- and L-stable, meaning it’s great for stiff equations. It was thus used for testing the capabilities and restrictions of event handling, parameter estimation, etc. with the coming set of stiff solvers. However, where it lacks is order. As an order 2 method, it is only efficient at higher error tolerances, and thus for “standard tolerances” it tends not to be competitive with the other methods mentioned before. That is why one of our main goals in DiffEq 3.0 will be the creation of higher order methods for stiff equations.

The main obstacle here will be the creation of a library for making the differentiation easier. There are lots of details involved here. Since the macros of ParameterizedFunctions can symbolically differentiate the user’s function, in some cases a pre-computed function for the inverted or factorized Jacobian can be used to make a stiff method explicit. In other cases, we need autodifferentiation, and in some we need to use numerical differentiation. This is all governed by a system of traits set up behind the scenes, and thus proper logic for determining and using Jacobians can immensely speed up our calculations.

The Rosenbrock23 method did some of this ad hoc within its own method, but it was determined that the method would be greatly simplified if there was some library that could handle this. In fact, if there was a library to handle this, then the Rosenbrock23 code for defining steps would be as simple as defining steps for explicit RK methods. The same would be true for implicit RK methods like radau. Thus we will be doing that: building a library which handles all of the differentiation logic. The development of this library, DiffEqDiffTools.jl, is @miguelraz’s Google Summer of Code project. Thus with the completion of this project (hopefully summer?), efficient and fully compatible high order Rosenbrock methods and implicit RK methods will easily follow. Also included will be additive Runge-Kutta methods (IMEX RK methods) for SplitODEProblems. Since these methods double as native Julia DAE solvers, and since this code will ease the development of stiff solvers for SDEs, this will be a major win to the ecosystem on many fronts.

Stiffness Detection and Switching

In many cases, the user may not know if a problem is stiff. In many cases, especially in stochastic equations, the problem may be switching between being stiff and non-stiff. In these cases, we want to change the method of integration as we go along. The general setup for implementing switching methods has already been implemented by the CompositeAlgorithm. However, current usage of the CompositeAlgorithm requires that the user define the switching behavior. This makes it quite difficult to use.
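
For reference, a hedged sketch of the current user-defined switching (per the CompositeAlgorithm docs of this era; prob is any ODEProblem):

# Pick solver 2 whenever the steps get small, otherwise solver 1
choice_function(integrator) = (Int(integrator.dt < 0.001) + 1)
alg = CompositeAlgorithm((Tsit5(), Rosenbrock23()), choice_function)
sol = solve(prob, alg)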

Instead, we will be building methods which make use of this infrastructure. Stiffness detection estimates can be added to the existing methods (in a very efficient manner), and could be toggled on/off. Then standard switching strategies can be introduced such that the user can just give two algorithms, a stiff and a non-stiff solver, and basic switching can then occur. Whatever is deemed the most optimal strategy can then be implemented as a standard algorithm choice. Then at the very top, these methods can be added as defaults for solve(prob), making the fully automated solver efficiently handle difficult problems. This will be quite a unique feature and is borderline a new research project. I hope to see some really cool results.

Parallel-in-time ODE/BVP solvers

While traditional methods (Runge-Kutta, multistep) all step one time point at a time, in many cases we want to use parallelism to speed up our problem. It’s not hard to buy an expensive GPU, and a lot of us already have one for optimization, so why not use it?

Well, parallelism for solving differential equations is very difficult. Let’s take a look at some quick examples. In the Euler method, the discretization calculates the next time step u_{n+1} from the previous time step u_n using the equation

u_{n+1} = u_n + \Delta t f(t_n,u_n)

In code, this is the update step

u .= uprev .+ dt.*f(t,uprev)

I threw in the .’s to show this is broadcasted over some arrays, i.e. for systems of equations u is a vector. And that’s it, that’s what the inner loop is. The most you can parallelize are the loops internal to the broadcasts. This means that for very large problems, you can parallelize this method efficiently (this form is called parallelism within the method). Also, if your input vector was a GPUArray, this will broadcast using CUDA or OpenCL. However, if your problem is not a sufficiently large vector, this parallelism will not be very efficient.

Similarly for implicit equations, we need to repeatedly solve (I-\Delta tJ)u = b where J is the Jacobian matrix. This linear solve will only parallelize well if the Jacobian matrix is sufficiently large. But many small differential equation problems can still be very difficult: for example, think about solving a very stiff ODE with a few hundred variables. In that case, the issue is that we are stepping serially over time, and we need to use completely different algorithms which parallelize over time.

One of these approaches is a collocation method. Collocation methods build a very large nonlinear equation F(X)=0 which describes a numerical method over all time points at once, and simultaneously solve for all of the time points using a nonlinear solver. Internally, a nonlinear solver repeatedly performs linear solves Ax=b with a very large A. Thus, if the user passes in a custom linear solver method, say one using PETSc.jl or CUSOLVER, this can be parallelized efficiently over many nodes of an HPC or over a GPU. In fact, these methods are the standard methods for Boundary Value Problems (BVPs). The development of these methods is the topic of @YingboMa’s Google Summer of Code project. While written for BVPs, these same methods can then solve IVPs with a small modification (including stochastic differential equations).

By specifying an appropriate preconditioner with the linear solver, these can be some of the most efficient parallel methods. When no good preconditioner is found, these methods can be less efficient. One may wonder then if there exists a different approach, one which may sacrifice some “theoretical top performance” in order to be better in the “low user input” case (purely automatic). There is! Another approach to solving the parallelism-over-time issue is to use a neural network. This is the topic of @akaysh’s Google Summer of Code project. Essentially, you can define a cost function which is the difference between the numerical derivative and f(t_i,u_i) at each time point. This then gives an optimization problem: find the u_i at each time point such that the difference between the numerical and the desired derivatives is minimized. Then you solve that cost function minimization using a neural network. The neat thing here is that neural nets are very efficient to parallelize over GPUs, meaning that even for somewhat small problems we can get very good parallelism. These neural nets can use very advanced methods from packages like Knet.jl to optimize efficiently and in parallel with very little required user input (no preconditioner to set). There really isn’t another standard differential equations solver package which has a method like this, so it’s hard to guess how efficient it will be in advance. But given the properties of this setup, I suspect this should be a very good “automatic” method for medium-sized (100’s of variables) differential equations.
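
A minimal sketch of the cost function idea (plain Julia, independent of any particular optimizer or network library):

# X[:,i] is the guess for u at time ts[i]. Driving `residual` to zero with
# any optimizer recovers the solution at all time points simultaneously.
function residual(X, ts, f, u0)
  r = sum(abs2, X[:,1] - u0)                # match the initial condition
  for i in 1:size(X,2)-1
    dt = ts[i+1] - ts[i]
    numderiv = (X[:,i+1] - X[:,i]) / dt     # numerical derivative
    r += sum(abs2, numderiv - f(ts[i], X[:,i]))
  end
  return r
end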

The really great thing about these parallel-in-time methods is that they are inherently implicit, meaning that they can be used even on extremely stiff equations. There are also simple extensions to make these solve SDEs and DDEs. So add this to the bank of efficient methods for stiff diffeqs!

Improved methods for stochastic differential equations

As part of 3.0, the hope is to release brand new methods for stochastic differential equations. These methods will be high order and highly stable, some explicit and some implicit, and will have adaptive timestepping. That is all of the detail I am giving until these methods are published, but I do want to tease that many types of SDEs will become much more efficient to solve.

Improved methods for jump equations

For jump equations, in order to show that everything is complete and can work, we have only implemented the Gillespie method. However, we hope to add many different forms of tau-leaping and Poisson(/Binomial) Runge-Kutta methods for these discrete stochastic equations. Our roadmap is here and it seems there may be a great deal of participation to complete this task. Additionally, we plan on having a specialized DSL for defining chemical reaction networks and automatically turning them into jump problems or ODE/SDE systems.

Geometric and symplectic integrators

In DifferentialEquations.jl 2.0, the ability to specify partitioned ODEs for dynamical problems (or to directly specify a second order ODE problem) was added. However, only a symplectic Euler method has been added to solve these equations so far. This was used to make sure the *DiffEq solvers were compatible with this infrastructure, and showed that event handling, resizing, parameter estimation, etc. all work together on these new problem types. But, obviously we need more algorithms. Velocity Verlet and higher order Nyström methods are asking to be implemented. This isn’t difficult for the reasons described above, and will be a very nice boost to DifferentialEquations.jl 3.0.

(Stochastic/Delay) Partial differential equations

Oh boy, here’s a big one. Everyone since the dawn of time has wanted me to focus on building a method that makes solving the PDE that they’re interested in dead simple to do. We have a plan for how to get there. The design is key: instead of implementing numerical methods one-by-one for each PDE, we needed a way to pool the efforts together and make implementations on one PDE matter for other PDEs.

Let’s take a look at how we can do this for efficient methods for reaction-diffusion equations. In this case, we want to solve the equation

 u_t = \Delta u + f(t,u)

The first step is always to discretize this over space. Each of the different spatial discretization methods (finite difference, finite volume, finite element, spectral) ends up with some equation

 U_t = AU + f(t,U)

where now U is a vector of points in space (or a discretization in some basis). At this point, a specialized numerical method for stepping this equation efficiently in time can be written. For example, if diffusion is fast and f is stiff, one really efficient method is the implicit integrating factor method. This would solve the equation by updating time points like:

U_{n+1} = e^{A\Delta t}U_n + \Delta t f(t_{n+1},U_{n+1})

where we have to solve this implicit equation each time step. The nice thing is that the implicit equation decouples in space, and so we actually only need to solve a bunch of very small implicit equations.

How can we do this in a way that is not “specific to the heat equation”? There were two steps here, the first is discretizing in space, the second is writing an efficient method specific to the PDE. The second part we already have an answer for: this numerical method can be written as one of the methods for linear/nonlinear SplitODEProblems that we defined before. Thus if we just write a SplitODEProblem algorithm that does this form of updating, it can be applied to any ODE (and any PDE discretization) which splits out a linear part. Again, because it’s now using the ODE solver, all of the extra capabilities (event handling, integrator interface, parameter estimation tools, interpolations, etc.) all come for free as well. The development of ODE/SDE/DDE solvers for handling this split, like implicit integrating factor methods and exponential Runge-Kutta methods, is part of DifferentialEquations.jl 3.0’s push for efficient (S/D)PDE solvers.

So with that together, we just need to solve the discretization problem. First let’s talk about finite difference. For the Heat Equation with a fixed grid-size \Delta x, many people know what the second-order discretization matrix A is in advance. However, what if you have variable grid sizes, and want different order discretizations of different derivatives (say a third derivative)? In this case the Fornberg algorithm can be used to construct this A. And instead of making this an actual matrix, because it is sparse we can make this very efficient by creating a “matrix-free type” where AU acts like the appropriate matrix multiplication, but in reality no matrix is ever created. This can save a lot of memory and make the multiplication a lot more efficient by ignoring the zeros. In addition, because of the reduced memory requirement, we can easily distribute this operator to the GPU or across a cluster, and make the AU function utilize this parallelism.
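
A hedged sketch of the matrix-free idea (a hypothetical type for illustration; the GSoC project will provide the real, general version):

# A*U applies the second-order Laplacian stencil on the fly; no matrix stored
immutable LaplacianOperator
  dx::Float64
end
function Base.:*(A::LaplacianOperator, U::Vector{Float64})
  dU = zeros(U)
  for i in 2:length(U)-1
    dU[i] = (U[i-1] - 2U[i] + U[i+1]) / A.dx^2
  end
  return dU   # zero Dirichlet boundaries for simplicity
end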

The development of these efficient linear operators is the goal of @shivin9’s Google Summer of Code project. The goal is to get a function where the user can simply specify the order of the derivative and the order of the discretization, and it will spit out this highly efficient A to be used in the discretization, turning any PDE into a system of ODEs. In addition, other operators which show up in finite difference discretizations, such as the upwind scheme, can be encapsulated in such an A. Thus this project would make turning these types of PDEs into efficient ODE discretizations much easier!

The other very popular form of spatial discretization is the finite element method. For this form, you choose some basis function over space and discretize the basis function. The definition of this basis function’s discretization then derives what the A discretization of the derivative operators should be. However, there is a vast array of different choices for basis and discretization. If we wanted to create a toolbox which would implement all of what’s possible, we wouldn’t get anything else done. Thus we will instead, at least for now, piggyback off of the efforts of FEniCS. FEniCS is a toolbox for the finite element method. Using PyCall, we can build an interface to FEniCS that makes it easy to receive the appropriate A linear operator (usually a sparse matrix) that arises from the desired discretization. This, the development of a FEniCS.jl, is the topic of @ysimillides’s Google Summer of Code. The goal is to make this integration seamless, so that getting ODEs out of finite element discretizations is a painless process, once again reducing the problem to solving ODEs.

The last form of spatial discretization is the spectral discretization. These can be handled very well using the Julia library ApproxFun.jl. All that is required is to make it possible to step equations defined by ApproxFun types through time. This is the goal of DiffEqApproxFun.jl. We already have large portions of this working, and for fixed basis lengths the ApproxFunProblems can actually be solved using any ODE solver (not just native Julia solvers, but also methods from Sundials and ODEInterface work!). This will get touched up soon and will be another type of PDE discretization which will be efficient and readily available.

Improved Analysis Tools

What was described above is how we are planning to solve very common difficult problems with high efficiency and simplify the problems for the user, all without losing functionality. However, the tools at the very top of the stack, the analysis tools, can become much more efficient as well. This is the other focus of DifferentialEquations.jl 3.0.

Local sensitivity analysis is nice because it not only tells you how sensitive your model is to the choice of parameters, but it gives this information at every time point. However, in many cases this is overkill. Also, this makes the problem much more computationally difficult. If we wanted to classify parameter space, like to answer the question “where is the model least sensitive to parameters?”, we would have to solve this equation many times. When this is the question we wish to answer, global sensitivity analysis methods are much more efficient. We plan on adding methods like the Morris method in order for sensitivities to be globally classified.

In addition, we really need better parameter estimation functionality. What we have is very good: you can build an objective function for your parameter estimation problem and then use Optim.jl, BlackBoxOptim.jl, or any MathProgBase/JuMP solver (example: NLopt.jl) to optimize the parameters. This is great, but it’s very basic. In many cases, more advanced methods are necessary in order to get convergence. Using likelihood functions instead of directly solving the nonlinear regression can oftentimes be more efficient. Also, in many cases statistical methods (the two-stage method) can be used to approximately optimize parameters without solving the differential equations repeatedly, a huge win for performance. Additionally, Bayesian methods will not only give optimal parameters, but also distributions for the parameters which the user can use to quantify how certain they are about estimation results. The development of these methods is the focus of @Ayush-iitkgp’s Google Summer of Code project.

DifferentialEquations.jl 3.0 Conclusion

2.0 was about building infrastructure. 3.0 is about filling out that infrastructure and giving you the most efficient methods in each of the common problem domains.

DifferentialEquations.jl 4.0 and beyond

I think 2.0 puts us in a really great position. We have a lot, and the infrastructure allows us to keep expanding and adding more and more algorithms to handle different problem types more efficiently. But there are some things which are not slated in the 3.0 docket. One thing that keeps getting pushed back is the automatic promotion of problem types. For example, if you specified a SplitODEProblem and you want to use an algorithm which wants an ODEProblem (say, a standard Runge-Kutta algorithm like Tsit5), it’s conceivable that this conversion could be handled automatically. Also, it’s conceivable that since you can directly convert an ODEProblem into a SteadyStateProblem, the steady state solvers should work directly on an ODEProblem as well. However, with development time always being a constraint, I am planning on spending more time developing new efficient methods for these new domains rather than on the automatic conversions. That said, if someone else is interested in tackling this problem, it could probably be addressed much sooner!

Additionally, there is one large set of algorithms which have not been addressed in the 3.0 plan: multistep methods. I have been holding off on these for a few reasons. For one, we have wrappers to Sundials, DASKR, and LSODA which supply well-known and highly efficient multistep methods. However, these wrappers, having the internals not implemented in Julia, are limited in their functionality. They will never be able to support arbitrary Julia types and will be constrained to equations on contiguous arrays of Float64s. Additionally, the interaction with Julia’s GC makes it extremely difficult to implement the integrator interface and thus event handling (custom allocators would be needed). Also, given what we have seen with other wrappers, we know we can actually improve the efficiency of these methods.

But lastly, and something I think is important, these methods are actually only efficient in a small but important problem domain. When the ODE f is not “expensive enough”, implicit Runge-Kutta and Rosenbrock methods are more efficient. When it’s a discretization of a PDE and there exists a linear operator, exponential Runge-Kutta and implicit integrating factor methods are more efficient. Also, if there are lots of events or other dynamic behavior, multistep methods have to “restart”. This is an expensive process, and so in most cases using a one-step method (any of the previously mentioned methods) is more efficient. This means that multistep methods like the Adams and BDF (/NDF) methods are really only the most efficient when you’re talking about a large spatial discretization of a PDE which doesn’t have a linear operator you can take advantage of, and where events are non-existent/rare. Don’t get me wrong, this is still a very important case! But, given the large number of wrappers which handle this quite well already, I am not planning on tackling these until the other parts are completed. Expect the *DiffEq infrastructure to support multistep methods in the future (actually, there’s already quite a bit of support in there; just the adaptive order and adaptive time step need to be made possible), but don’t count on it in DifferentialEquations 3.0.

Also not part of 3.0, but still of importance, are stochastic delay differential equations. The development of a library for handling these equations can follow in the same manner as DelayDiffEq.jl, but it likely won’t make it into 3.0 as there are more pressing (more commonly used) topics to address first. In addition, methods for delay equations with non-constant lags (and neutral delay equations) will also likely have to wait for 4.0.

In the planning stages right now is a new domain-specific language for the specification of differential equations. The current DSL, the @ode_def macro in ParameterizedFunctions.jl, does great for the problems it can handle (ODEs and diagonal SDEs). However, there are many good reasons to want to expand the capabilities here. For example, equations defined by this DSL can be symbolically differentiated, leading to extremely efficient code even for stiff problems. In addition, one could theoretically “split the ODE function” to automatically turn the problem into a SplitODEProblem with a stiff and a nonstiff part suitable for IMEX solvers. If PDEs can also be written in the same syntax, then the PDEs can be automatically discretized and solved using the tools from 2.0. Additionally, one can think about automatically reducing the index of DAEs, and specifying DDEs.

This all sounds amazing, but it will need a new DSL infrastructure. After a discussion to find out what kind of syntax would be necessary, it seems that a major overhaul of the @ode_def macro would be needed in order to support all of this. The next step will be to provide a new library, DiffEqDSL.jl, for providing this enhanced DSL. As described in the previously linked roadmap discussion, this DSL will likely take a form closer in style to JuMP, and the specs seem compatible with reaching the above-mentioned goals. Importantly, this will be developed as a new DSL, and thus the current @ode_def macro will be unchanged. This is a large development which will most likely not be in 3.0, but please feel free to contribute to the roadmap discussion, which is now being continued at the new repository.

Conclusion

DifferentialEquations.jl 1.0 was about establishing a core that could unify differential equations. 2.0 was about developing the infrastructure to tackle a very vast domain of scientific simulations which are not easily or efficiently written as differential equations. 3.0 will be about producing efficient methods for the most common sets of problems which haven’t been adequately addressed yet. This will put the ecosystem in a very good state and hopefully make it a valuable tool for many people. After this, 4.0+ will keep adding algorithms, expand the problem domain some more, and provide a new DSL.

The post DifferentialEquations.jl 2.0: State of the Ecosystem appeared first on Stochastic Lifestyle.

6 Months of DifferentialEquations.jl: Where We Are and Where We Are Going

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/6-months-differentialequations-jl-going/

So around 6 months ago, DifferentialEquations.jl was first registered. It was at first made to be a library which can solve “some” types of differential equations, and that “some” didn’t even include ordinary differential equations. The focus was mostly fast algorithms for stochastic differential equations and partial differential equations.

Needless to say, Julia makes you too productive. Ambitions grew. By the first release announcement, much had already changed. Not only were there ordinary differential equation solvers, there were many. But the key difference was a change in focus. Instead of just looking to give a production-quality library of fast methods, a major goal of DifferentialEquations.jl became to unify the various existing packages of Julia to give one user-friendly interface.

Since that release announcement, we have made enormous progress. At this point, I believe we have both the most expansive and flexible differential equation library to date. I would like to take this time to explain the overarching design, where we are, and what you can expect to see in the near future.

(Note before we get started: if you like what you see, please star the DifferentialEquations.jl repository. I hope to use this to show interest in the package so that one day we can secure funding. Thank you for your support!)

JuliaDiffEq Structure

If you take a look at the source for DifferentialEquations.jl, you will notice that almost all of the code is gone. What has happened?

The core idea behind this change is explained in another blog post on modular interfaces for scientific computing in Julia. The key idea is that we built an interface which is “package-independent”, meaning that the same lines of code can call solvers from different packages. There are many advantages to this which will come up later in the section which talks about the “addon packages”, but one organizational advantage is that it lets us split up the repositories as needed. The core differential equation solvers from DifferentialEquations.jl reside in OrdinaryDiffEq.jl, StochasticDiffEq.jl, etc. (you can see more at our webpage). Packages like Sundials.jl, ODEInterface.jl, ODE.jl, etc. all have bindings to this same interface, making them all work similarly.

One interesting thing about this setup is that you are no longer forced to contribute to these packages in order to contribute to the ecosystem. If you are a developer or a researcher in the field, you can develop your own package with your own license which has a common interface binding, and you will get the advantages of the common interface without any problems. This may be necessary for some researchers, and so we encourage you to join in and contribute as you please.

The Common Interface

Let me then take some time to show you what this common interface looks like. To really get a sense, I would recommend checking out the tutorials in the documentation and the extra Jupyter notebook tutorials in DiffEqTutorials.jl. The idea is that solving differential equations always has 3 steps:

  1. Defining a problem.
  2. Solving the problem.
  3. Analyzing the solution.

Defining a problem

What we created was a generic type-structure for which dispatching handles the details. For defining an ODE problem, one specifies the function for the ODE, the initial condition, and the timespan that the problem is to be solved on. The ODE

 u' = f(t,u)

with an initial condition u_0 and timespan (t_0,t_f) is then written as:

prob = ODEProblem(f,u0,(t0,tf))

There are many different problem types for different types of differential equations. Currently we have types (and solvers) for ordinary differential equations, stochastic differential equations, differential algebraic equations, and some partial differential equations. Later in the post I will explain how this is growing.

Solving the problem

To solve the problem, the common solve command is:

sol = solve(prob,alg;kwargs...)

where alg is a type-instance for the algorithm. It is by dispatch on alg that the package is chosen. For example, we can call the 14th-Order Feagin Method from OrdinaryDiffEq.jl via

sol = solve(prob,Feagin14();kwargs...)

We can call the BDF method from Sundials.jl via

sol = solve(prob,CVODE_BDF();kwargs...)

Due to this structure (and the productivity of Julia), we have a ridiculous number of methods available, as seen in the documentation. Later I will show that we not only have many options, but these options tend to be very fast, often benchmarking as faster than classic FORTRAN codes. Thus one can choose the right method for the problem and efficiently solve it.

Notice I put in the trailing “kwargs…”. There are many keyword arguments that one is able to pass to this solve command. The “Common Solver Options” are documented at this page. Currently, all of these options are supported by the OrdinaryDiffEq.jl methods, while there is general support for large parts of this for the other methods. This support will increase over time, and I hope to include a table which shows what is supported where.
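
For example (option names from the Common Solver Options page), the same keywords work regardless of which package’s algorithm is chosen:

sol = solve(prob, Tsit5(); abstol=1e-8, reltol=1e-8, saveat=0.1)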

Analyzing the solution

Once you have this solution type, what does it do? The details are explained in this page of the manual, but I would like to highlight some important features.

First of all, the solution acts as an array. For the solution at the ith timestep, you just treat it as an array:

sol[i]

You can also get the ith timepoint out:

sol.t[i]

Additionally, the solution lets you grab individual components. For example, the jth component at the ith timepoint is found by:

sol[i,j]

These overloads are necessary since the underlying data structure can actually be a more complicated vector (some examples explained later), but this lets you treat it in a simple manner.

Also, by default many solvers have the option “dense=true”. What this means is that the solution has a dense (continuous) output, which is overloaded onto the solution type. This looks like:

sol(t)

which gives the solution at time t. This continuous version of the solution can be turned off using “dense=false” (to get better performance), but in many cases it’s very nice to have!

Not only that, but there are some standard analysis functions available on the solution type as well. I encourage you to walk through the tutorial and see for yourself. Included are things like plot recipes for easy plotting with Plots.jl:

plot(sol)

Now let me describe what is available with this interface.

Ecosystem-Wide Development Tools and Benchmarks

Since all of the solutions act the same, it’s easy to create tools which build off of them. One fundamental toolset is the one included in DiffEqDevTools.jl. DiffEqDevTools.jl includes a bunch of functions for things like convergence testing and benchmarking. This not only means that all of the methods have convergence tests associated with them to ensure accuracy and correctness, but also that we have ecosystem-wide benchmarks to know the performance of different methods! These benchmarks can be found at DiffEqBenchmarks.jl and will be referenced throughout this post.

Very Efficient Nonstiff ODE Solvers

The benchmarks show that the OrdinaryDiffEq.jl methods achieve phenomenal performance. While in many cases other libraries resort to the classic dopri5 and dop853 methods due to Hairer, our ecosystem has these methods available via the ODEInterface.jl glue package ODEInterfaceDiffEq.jl, and so these can be directly called from the common interface. From the benchmarks on non-stiff problems you can see that the OrdinaryDiffEq.jl methods are much more efficient than these classic codes when one is looking for the highest performance. This is even the case for DP5() and DP8(), which have the same exact timestepping behavior as dopri5() and dop853() respectively, showing that these implementations are top notch, if not the best available.

These are the benchmarks on the implementations of the Dormand-Prince 4/5 methods. Also included is a newer method, the Tsitouras 4/5 method, which is now the default non-stiff method in DifferentialEquations.jl since our research has shown that it is more efficient than the classical methods (on most standard problems).

A Wide Array of Stiff ODE Solvers

There is also a wide array of stiff ODE solvers which are available. BDF methods can be found from Sundials.jl, Radau methods can be found from ODEInterface.jl, and a well-optimized 2nd-Order Rosenbrock method can be found in OrdinaryDiffEq.jl. One goal in the near future will be to implement higher order Rosenbrock methods in this fashion, since it will be necessary to get better performance, as shown in the benchmarks. However, the Sundials and ODEInterface methods, being that they use FORTRAN interop, are restricted to equations on Float64, while OrdinaryDiffEq.jl’s methods support many more types. This allows one to choose the best method for the job.

Wrappers for many classic libraries

Many of the classic libraries people are familiar with are available from the common interface, including:

  1. CVODE
  2. LSODA
  3. The Hairer methods

and differential algebraic equation methods including

  1. IDA (Sundials)
  2. DASKR

Native Julia Differential Algebraic Equation Methods

DASSL.jl is available on the common interface and provides a method to solve differential algebraic equations using a variable-timestep BDF method. This allows one to support some Julia-based types like arbitrary-precision numbers which are not possible with the wrapped libraries.

Extensive Support for Julia-based Types in OrdinaryDiffEq.jl

Speaking of support for types, what is supported? From testing we know that the following work with OrdinaryDiffEq.jl:

  1. Arbitrary precision arithmetic via BigFloats, ArbFloats, DecFP
  2. Numbers with units from Unitful.jl
  3. N-dimensional arrays
  4. Complex numbers (the nonstiff solvers)
  5. “Very arbitrary arrays”

Your numbers can be ArbFloats of 200-bit precision in 3-dimensional tensors with units (i.e. “these numbers are in Newtons”), and the solver will ensure that the dimensional constraints are satisfied, and at every timestep give you a 3-dimensional tensor with 200-bit ArbFloats. The types are declared to match the initial conditions: if you start with u0 having BigFloats, you will be guaranteed to have BigFloat solutions. Also, the types for time are determined by the types of the times in the solution interval (t0,tf). Therefore you can have the types for time be different than the types for the solution (say, turn off adaptive timestepping and do fixed timestepping with rational numbers or integers).
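
A hedged sketch of that type propagation (BigFloat state with Rational time, using fixed-dt Euler):

using OrdinaryDiffEq
f(t,u) = u
prob = ODEProblem(f, big"1.0", (0//1, 1//1))
sol = solve(prob, Euler(), dt=1//100)
typeof(sol[end])    # BigFloat
typeof(sol.t[end])  # Rational{Int64}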

Also, by “very arbitrary arrays” I mean, any type which has a linear index can be used. One example which recently came up in this thread involves solving a hybrid-dynamical system which has some continuous variables and some discrete variables. You can make a type which has a linear index over the continuous variables and simply throw this into the ODE solver and it will know what to do (and use callbacks for discrete updates). All of the details like adaptive timestepping will simply “just work”.

Thus, I encourage you to see how these methods can work for you. I myself have been developing MultiScaleModels.jl to build multi-scale hybrid differential equations and solve them using the methods available in DifferentialEquations.jl. This shows that the old heuristic of classic problems which you “cannot use a package for” no longer applies: Julia’s dispatch system allows DifferentialEquations.jl to handle these problems, meaning that there is no need for you to ever reinvent the wheel!

Event Handling and Callbacks in OrdinaryDiffEq.jl

OrdinaryDiffEq.jl already has extensive support for callback functions and event handling. The documentation page describes a lot of what you can do with it. There are many things you can do with this, not just bouncing a ball, but you can also use events to dynamically change the size of the ODE (as demonstrated by the cell population example).

Specification of Extra Functions for Better Performance

If this was any other library, the header would have been “Pass Jacobians for Better Performance”, but DifferentialEquations.jl’s universe goes far beyond that. We named this set of functionality Performance Overloads. An explicit function for a Jacobian is one type of performance overload, but you can pass the solvers many other things. For example, take a look at:

f(Val{:invW},t,u,γ,iW) # Call the explicit inverse Rosenbrock-W function (M - γJ)^(-1)

This seems like an odd definition: it is the analytical function for the equation (M - \gamma J)^{-1} for some mass matrix M built into the function. The reason why this is so necessary is because Rosenbrock methods have to solve this every step. What this allows the developers to do is write a method which goes like:

if has_invW(f)
  # Use the function provided by the user
else
  # Get the Jacobian
  # Build the W
  # Solve the linear system
end

Therefore, whereas other libraries would have to use a linear solver to solve the implicit equation at each step, DifferentialEquations.jl allows developers to write this to use the pre-computed inverses and thus get an explicit method for stiff equations! Since the linear solves are the most expensive operation, this can lead to huge speedups in systems where the analytical solution can be computed. But is there a way to get these automatically?
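
To make this concrete, here is a hand-written sketch of providing that overload via call overloading for u' = -2u, where J = -2 and M = I so that (M - γJ)^(-1) = 1/(1 + 2γ):

type Decay <: Function end
(f::Decay)(t,u,du) = (du[1] = -2.0*u[1])                         # the ODE itself
(f::Decay)(::Type{Val{:invW}},t,u,γ,iW) = (iW[1,1] = 1/(1 + 2γ)) # analytical (M - γJ)^(-1)
f = Decay()

The idea is that has_invW(f) would then detect this method, and a Rosenbrock solver could skip the linear solve entirely.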

Parameterized Functions and Function Definition Macros

ParameterizedFunctions.jl is a library which solves many problems at once. One question many people have is: how do you provide the model parameters to an ODE solver? While the standard method of “use a closure” works, there are many higher-order analyses which require the ability to explicitly handle parameters. Thus we wanted a way to define functions with explicit parameters.

The way that this is done is via call overloading. The syntax looks like this. We can define the Lotka-Volterra equations with explicit parameters a and b via:

type LotkaVolterra <: Function
  a::Float64   # explicit parameter
  b::Float64   # explicit parameter
end
f = LotkaVolterra(0.0,0.0)
(p::LotkaVolterra)(t,u,du) = begin
  du[1] = p.a * u[1] - p.b * u[1]*u[2]
  du[2] = -3 * u[2] + u[1]*u[2]
end

Now f is a function where f.a and f.b are the parameters in the function. This type of function can then be seamlessly used in the DifferentialEquations.jl solvers, including those which use interop like Sundials.jl.
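
For instance, solving with updated parameters might look like this (the initial condition and timespan here are just illustrative):

f.a = 1.5; f.b = 1.0                       # mutate the parameters in place
prob = ODEProblem(f,[1.0,1.0],(0.0,10.0))
sol = solve(prob)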

This is a very general syntax which can handle any function. However, the next question was: is there a way to do this better for the problems that people commonly encounter? Enter ParameterizedFunctions.jl. As described in the manual, this library (which is part of DifferentialEquations.jl) includes a macro @ode_def which allows you to define the Lotka-Volterra equation

\begin{align} \frac{dx}{dt} &= ax - bxy \\ \frac{dy}{dt} &= -cy + dxy \\ \end{align}

as follows:

f = @ode_def LotkaVolterraExample begin
  dx = a*x - b*x*y
  dy = -c*y + d*x*y
end a=>1.5 b=>1.0 c=>3.0 d=1.0

Notice that at the bottom you pass in the parameters. => keeps the parameters explicit, while = passes them in as a constant. Flip back and forth to see that it matches the LaTeX so that way it’s super easy to debug and maintain.

But this macro isn’t just about ease of use: it’s also about performance! What happens silently within this macro is that symbolic calculations occur via SymEngine.jl. The Performance Overloads functions are thus silently symbolically computed, allowing the solvers to then get maximal performance. This gives you an easy way to define a stiff equation of 100 variables in a very intuitive way, yet get a faster solution than you would find with other libraries.
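
For example (a sketch, with an illustrative initial condition and timespan), a stiff solver like Rosenbrock23 can then pick up the symbolically-computed functions without you writing anything extra:

prob = ODEProblem(f,[1.0,1.0],(0.0,10.0))
sol = solve(prob,Rosenbrock23())   # uses the macro-generated Jacobian and overloads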

Sensitivity Analysis

Once we have explicit parameters, we can generically implement algorithms which use them. For example, DiffEqSensitivity.jl transforms a ParameterizedFunction into the sensitivity equations, which are then solved using any ODE solver, outputting both the ODE’s solution and the parameter sensitivities at each timestep. This is described in the manual in more detail. The result is the ability to use whichever differential equation method in the common interface matches your problem to solve the extended ODE and output how sensitive the solution is to the parameters. This is the glory of the common interface: tools can be added to every ODE solver all at once!
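
A sketch of the usage, assuming the ODELocalSensitivityProblem name from the sensitivity manual and the @ode_def f from above:

using DiffEqSensitivity, OrdinaryDiffEq
prob = ODELocalSensitivityProblem(f,[1.0,1.0],(0.0,10.0))
sol = solve(prob,DP5())   # the solution plus the sensitivities at every timestep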

In the future we hope to increase the functionality of this library to include functions for computing global and adjoint sensitivities via methods like the Morris method. However, the current setup shows how easy this is to do, and we just need someone to find the time to actually do it!

Parameter Estimation

Not only can we identify parameter sensitivities, we can also estimate model parameters from data. The design of this is explained in more detail in the manual. It is contained in the package DiffEqParamEstim.jl. Essentially, you can define a function using the @ode_def macro, and then pair it with any ODE solver and (almost) any optimization method, and together use that to find the optimal parameters.
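
A sketch of how this can look, assuming the build_loss_objective and L2Loss names from the manual; here t and data stand in for your measurement times and values:

using DiffEqParamEstim, OrdinaryDiffEq, Optim
prob = ODEProblem(f,[1.0,1.0],(0.0,10.0))
cost = build_loss_objective(prob,Tsit5(),L2Loss(t,data))
result = Optim.optimize(cost,[1.3],Optim.BFGS())   # recover the parameter a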

In the near future, I would like to increase the support of DiffEqParamEstim.jl to include machine learning methods from JuliaML using its Learn.jl. Stay tuned!

Adaptive Timestepping for Stochastic Differential Equations

Adaptive timestepping is something you will not find in other stochastic differential equation libraries. The reason is because it’s quite hard to do correctly and, when finally implemented, can have so much overhead that it does not actually speed up the runtime for most problems.

To counteract this, I developed a new form of adaptive timestepping for stochastic differential equations which focused on both correctness and an algorithm design which allows for fast computations. The result was a timestepping algorithm which is really fast! The paper has been accepted to Discrete and Continuous Dynamical Systems Series B, where we show the correctness of the algorithm and its efficiency. We actually had to simplify the test problem so that way we could time the speedup over fixed timestep algorithms, since otherwise they weren’t able to complete in a reasonable time without numerical instability! When simplified, the speedup over all of the tested fixed timestep methods was >12x, and this speedup only increases as the problem gets harder (again, we chose the simplified version only because testing the fixed timestep methods on harder versions wasn’t even computationally viable!).

These methods, Rejection Sampling with Memory (RSwM), are available in DifferentialEquations.jl as part of StochasticDiffEq.jl. It should help speed up your SDE calculations immensely. For more information, see the publication “Adaptive Methods for Stochastic Differential Equations via Natural Embeddings and Rejection Sampling with Memory”.
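
Usage is just the common interface again. A minimal sketch with illustrative coefficients:

using StochasticDiffEq
f(t,u) = 1.01u
g(t,u) = 0.87u
prob = SDEProblem(f,g,0.5,(0.0,1.0))
sol = solve(prob,SRIW1())   # RSwM adaptivity is on by default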

Easy Multinode Parallelism For Monte Carlo Problems

Also included as part of the stochastic differential equation suite are methods for parallel solving of Monte Carlo problems. The function is called as follows:

monte_carlo_simulation(prob,alg;numMonte=N,kwargs...)

where the extra keyword arguments are passed to the solver, and N is the number of solutions to obtain. If you’ve set up Julia on a multinode job on a cluster, this will parallelize to use every core.
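
For example, a sketch on a single multicore machine (on a cluster you would add the workers through the appropriate ClusterManager instead), reusing the SDEProblem from above:

addprocs(4)   # add 4 local worker processes; the problem's functions must exist on each
sol = monte_carlo_simulation(prob,SRIW1(),numMonte=1000)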

In the near future, I hope to expand this to include a Monte Carlo simulation function for random initial conditions, and allow using this on more problems like ODEs and DAEs.

Smart Defaults

DifferentialEquations.jl also includes a cool new feature for smartly choosing defaults. To use this, you don’t really have to do anything. If you’ve defined a problem, say an ODEProblem, you can just call:

sol = solve(prob;kwargs...)

without passing the algorithm, and DifferentialEquations.jl will choose an algorithm for you. Also included is an `alg_hints` parameter which allows you to help the solver choose the right algorithm. So let’s say you want to solve a stiff stochastic differential equation, but you do not know much about the algorithms. You can do something like:

sol = solve(prob,alg_hints=[:stiff])

and this will choose a good algorithm for your problem and solve it. This reduces user-burden to only having to know properties of the problem, while allowing us to proliferate the solution methods. More information is found in the Common Solver Options manual page.

Progress Monitoring

Another interesting feature is progress monitoring. OrdinaryDiffEq.jl includes the ability to use Juno’s progressbar. This is done via keyword arguments like:

sol = solve(prob,progress=true,
                 progress_steps=100)

You can also set a progress message, for which the default is:

ODE_DEFAULT_PROG_MESSAGE(dt,t,u) = "dt="*string(dt)*"\nt="*string(t)*"\nmax u="*string(maximum(abs.(u)))

When you scroll over the progressbar, it will show you how close it is to the final timepoint and use linear extrapolation to estimate the amount of time left to solve the problem.

When you scroll over the top progressbar, it will display the message. Here, it tells us the current dt, t, and the maximum value of u (the dependent variable) to give a sanity check that it’s all working.

The keyword argument progress_steps lets you control how often the progress bar updates, so here we choose to do it every 100 steps. This means you can do some very intense sanity checks inside of the progress message, but reduce the number of times that it’s called so that way it doesn’t affect the runtime.

All in all, having this built into the interface should make handling long and difficult problems much easier, a problem that I had when using previous packages.

(Stochastic) Partial Differential Equations

There is rudimentary support for solving some stochastic partial differential equations, including semilinear Poisson and Heat equations. This can be done on a large set of domains using a finite element method as provided by FiniteElementDiffEq.jl. I will say that this library is in need of an update for better efficiency, but it shows how we are expanding into the domain of easy-to-define PDE problems, which then create the correct ODEProblem/DAEProblem discretization and get solved using the ODE methods.

Modular Usage

While all of this creates the DifferentialEquations.jl package, the JuliaDiffEq ecosystem is completely modular. If you want to build a library which uses only OrdinaryDiffEq.jl’s methods, you can directly use those solvers without requiring the rest of DifferentialEquations.jl.

An Update Blog

Since things are still changing fast, the website for JuliaDiffEq contains a news section which will describe updates to packages in the ecosystem as they occur. To be notified of updates, please subscribe to the RSS feed.

Coming Soon

Let me take a few moments to describe some works in progress. Many of these are past planning stages and have partial implementations. I find some of these very exciting.

Solver Method Customization

The most common reason not to use a differential equation solver library is because you need more customization. However, as described in this blog post, we have developed a design which solves this problem. The advantages are huge. Soon you will be able to choose the linear and nonlinear solvers which are employed in the differential equation solving methods. For linear solvers, you will be able to use any method which solves a linear map. This includes direct solvers from Base, iterative solvers from IterativeSolvers.jl, parallel solvers from PETSc.jl, GPU methods from CUSOLVER.jl; it will even be possible to use your own linear solver if you wish. The same will be true for nonlinear solvers. Thus you can choose the internal methods which match the problem to get the most efficiency out.

Specialized Problem Types and Promotions

One interesting setup that we have designed is a hierarchy of types. This is best explained by example. One type of ODE which shows up is the “Implicit-Explicit ODE”, written as:

 u' = f(t,u) + g(t,u)

where f is a “stiff” function and g is a “nonstiff” function. These types of ODEs with a natural splitting commonly show up in discretizations of partial differential equations. Soon we will allow one to define an IMEXProblem(f,g,u0,tspan) for this type of ODE. Specialized methods such as the ARKODE methods from Sundials will then be able to utilize this form to gain speed advantages.

However, let’s say you just want to use a standard Runge-Kutta method to solve this problem. What we will automatically do via promotion is make

h(t,u) = f(t,u) + g(t,u)

and then behind the scenes the Runge-Kutta method will solve the ODE

 u' = h(t,u)

Not only that, but we can go further and define

 k(t,u,u') = h(t,u) - u'

to get the equation

 k(t,u,u') = 0

which is a differential algebraic equation. This auto-promotion means that any method will be able to solve any problem type which is lower than it in the hierarchy.
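
In plain closures, the promotion chain described above amounts to (a sketch of the idea, not the actual promotion machinery):

h(t,u) = f(t,u) + g(t,u)    # IMEX problem promoted to a standard ODE
k(t,u,du) = h(t,u) - du     # ODE promoted to an implicit DAE residual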

The advantages are two-fold. For one, it allows developers to write code for the highest problem type available, and automatically have it work on other problem types. For example, the classic Backwards Differentiation Formula (BDF) methods which are seen in things like MATLAB’s ode15s are normally written to solve ODEs, but can actually solve DAEs. In fact, DASSL.jl is an implementation of this algorithm. When this promotion structure is completed, DASSL’s BDF method will be a native BDF method not just for solving DAEs, but also ODEs, with no specific development required on the part of DASSL.jl. And because Julia’s closures compile to fast functions, all of this will happen with little to no overhead.

In addition to improving developer productivity, it allows developers to specialize methods to problems. The splitting methods for implicit-explicit problems can be tremendously more performant since they reduce the complexity of the implicit part of the equation. However, with our setup we go even further. One common case that shows up in partial differential equations is that one of these equations is linear. For example, in a discretization of the semilinear Heat Equation, we arrive at an ODE

 u' = Au + g(u)

where A is a matrix which is the discretization of the Laplacian. What our ecosystem will allow is for the user to specify that the first function f(t,u) = Au is a linear function by wrapping it in a LinearMap type from LinearMaps.jl. Then the solvers can use this information like:

if is_linear(f)
  # Perform linear solve
else
  # Perform nonlinear solve
end

This way, the solvers will be able to achieve even more performance by specializing directly to the problem at hand. In fact, it will allow methods which require this type of linearity, like exponential Runge-Kutta methods, to be developed for the ecosystem and be very efficient when applicable.
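
Since this is still in the works, here is only a sketch of how it might look from the user side; IMEXProblem is the planned problem type from above, and laplacian_matrix stands in for your discretized Laplacian:

using LinearMaps
A = LinearMap(laplacian_matrix)    # wrapping marks the stiff part as linear
g(t,u) = u.*(1-u)                  # some nonlinear reaction term
prob = IMEXProblem(A,g,u0,tspan)   # solvers could then branch on is_linear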

In the end, users can just define their ODE by whatever problem type makes sense; promotion magic will make tons of methods available, and type-checking within the solvers will allow them to specialize directly to every detail of the ODE for as much speed as possible. With DifferentialEquations.jl also choosing smart default methods to solve the problem, the user-burden is decreased and very specialized methods can be used to get maximal efficiency. This is a win for everybody!

Ability to Solve Fun Types from ApproxFun.jl

ApproxFun.jl provides an easy way to do spectral approximations of functions. In many cases, these spectral approximations are very fast and are used to decompose PDEs in space. When paired with timestepping methods, this gives an easy way to solve a vast number of PDEs with really good efficiency.

The link between these two packages is currently found in SpectralTimeStepping.jl. Currently you can fix the basis size for the discretization and use that to solve the PDE with any ODE method in the common interface. However, we are looking to push further. Since OrdinaryDiffEq.jl can handle many different Julia-defined types, we are looking to make it support solving the ApproxFun.jl Fun type directly, which would allow the ODE solver to adapt the size of the spectral basis during the computation. This would tremendously speed up the methods and make them as fast as if you had specifically designed a spectral method for the PDE. We are really close to getting this!

New Methods for Stochastic Differential Equations

I can’t tell you too much about this because these methods will be kept secret until publication, but there are some very computationally-efficient methods for nonstiff and semi-stiff equations which have already been implemented and are being thoroughly tested. Go tell the peer review process to speed up if you want these sooner!

Improved Plot Recipes

There is already an issue open for improving the plot recipes. Essentially what will come out of this will be the ability to automatically draw phase plots and other diagrams from the plot command. This should make using DifferentialEquations.jl even easier than before.

Uncertainty Quantification

One major development in scientific computing has been the development of methods for uncertainty quantification. This allows you to quantify the amount of error that comes from a numerical method. There is already a design for how to use the ODE methods to implement a popular uncertainty quantification algorithm, which would allow you to see a probability distribution for the numerical solution to show the uncertainty in the numerical values. Like the sensitivity analysis and parameter estimation, this can be written in a solver-independent manner so that way it works with any solver on the common interface (which supports callbacks). Coming soon!

Optimal Control

We have optimal control problem types in the works which will automatically dispatch to PDE solvers and optimization methods. This is a bit off in the distance, but is currently being planned.

Geometric and Symplectic Integrators

A newcomer to the Julia-sphere is GeometricIntegrators.jl. We are currently working on attaching this package to the common interface so that way it will be easily accessible. Then, Partitioned ODE and DAE problems will be introduced (with a promotion structure) which will allow users to take advantage of geometric integrators for their physical problems.

Bifurcation Analysis

Soon you will be able to take your ParameterizedFunction and directly generate bifurcation plots from it. This is done by a wrapper to the PyDSTool library via PyDSTool.jl, and a linker from this wrapper to the JuliaDiffEq ecosystem via DiffEqBifurcate.jl. The toolchain already works, but… PyCall has some nasty segfaults. When these segfaults are fixed in PyCall.jl, this functionality will be documented and released.

Models Packages

This is the last coming soon, but definitely not the least. There are already a few “models packages” in existence. What these packages do is provide functionality which makes it easy to define very specialized differential equations which can be solved with the ODE/SDE/DAE solvers. For example, FinancialModels.jl makes it easy to define common equations like Heston stochastic volatility models, which will then convert into the appropriate stochastic differential equation or PDE for use in solver methods. MultiScaleModels.jl allows one to specify a model on multiple scales: a model of proteins, cells, and tissues, all interacting dynamically with discrete and continuous changes, mixing in stochasticity. Also planned is PhysicalModels.jl which will allow you to define ODEs and DAEs just by declaring the Hamiltonian or Lagrangian functions. Together, these packages should help show how the functionality of DifferentialEquations.jl reaches far beyond what previous differential equation packages have allowed, and make it easy for users to write very complex simulations (all of course without the loss of performance!).

Conclusion

I hope this has made you excited to use DifferentialEquations.jl, and excited to see what comes in the future. To support this development, please star the DifferentialEquations.jl repository. I hope to use these measures of adoption to one day secure funding. In the meantime, if you want to help out, join in on the issues in JuliaDiffEq, or come chat in the JuliaDiffEq Gitter chatroom. We’re always looking for more hands! And to those who have already contributed: thank you, as this would not have been possible without each and every one of you.

Introducing DifferentialEquations.jl

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/introducing-differentialequations-jl/

Differential equations are ubiquitous throughout mathematics and the sciences. In fact, I myself have studied various forms of differential equations stemming from fields including biology, chemistry, economics, and climatology. What was interesting is that, although many different people are using differential equations for many different things, pretty much everyone wants the same thing: to quickly solve differential equations in their various forms, and make some pretty plots to describe what happened.

The goal of DifferentialEquations.jl is to do exactly that: to make it easy to solve differential equations with the latest and greatest algorithms, and put out a pretty plot. The core idea behind DifferentialEquations.jl is that, while it is easy to describe a differential equation, they have such diverse behavior that experts have spent over a century compiling different ways to think about and handle differential equations. Most users will want to brush past all of the talk about algorithms and simply ask: “I have this kind of differential equation. What does the solution look like?”

DifferentialEquations.jl’s User Interface

To answer that question, the user should just have to say what their problem is, tell the computer to solve it, and then tell the computer to plot it. In DifferentialEquations.jl, we use exactly those terms. Let’s look at an Ordinary Differential Equation (ODE): the linear ODE. It is described as the function

 y^\prime = \alpha y

To use DifferentialEquations.jl, you first have to tell the computer what kind of problem you have, and what your data is for the problem. Recall the general ordinary differential equation is of the form

 y^\prime = f(y)

and initial condition u_0, so in this case, we have an ODE with data f(y)=\alpha y and u_0. DifferentialEquations.jl is designed as software for a high-level language, Julia. There are many reasons for this choice, but the one large reason is its type system and multiple dispatch. For our example, we tell the machine what type of problem we have by building a DEProblem type. The code looks like this:

using DifferentialEquations
alpha = 0.5 #Setting alpha to 1/2
f(y,t) = alpha*y
u0 = 1.5
prob = ODEProblem(f,u0)

where prob contains everything about our problem. You can then tell the computer to solve it and give you a plot by, well, solve and plot:

timespan = [0,1] # Solve from time = 0 to time = 1
sol = solve(prob,timespan) # Solves the ODE
plot(sol) # Plots the solution using Plots.jl

And that’s the key idea: the user should simply have to tell the program what the problem is, and the program should handle the details. That doesn’t mean that the user won’t have access to all of the details. For example, we can control the solver and plotters in more detail, using something like

sol = solve(prob,alg=:RK4) # Unrolled and optimized RK4
plot(sol,lw=3) # All of the Plots.jl attributes are available

However, in many cases a user may approach a problem without necessarily knowing anything about the algorithms involved in approximating it, and so obfuscating the API with these names is simply confusing. One place where this occurs is solving stochastic differential equations (SDEs). These have been recently growing in popularity in many of the sciences (especially systems biology) due to their added realism and their necessity when modeling rare and random phenomena. In DifferentialEquations.jl, you can get started by simply knowing that an SDE problem is defined by the functions f and g in the form

 dX_t = f(X_t,t)dt + g(X_t,t)dW_t,

with initial condition u_0, and so the steps for defining and solving the linear SDE is

g(u,t) = 0.3u
prob = SDEProblem(f,g,u0)
sol = solve(prob,timespan)
plot(sol)

If you wish to dig into the manual, you will see that the default solver that is used is a Rossler-SRI type of method and will (soon) have adaptivity which is complex enough to be a numerical analysis and scientific computing research project. And you can dig into the manual to find out how to switch to different solvers, but the key idea is that you don’t have to. Everything is simply about defining a problem, and asking for solutions and plots.

And that’s it. For more about the API, take a look at the documentation or the tutorial IJulia notebooks. What I want to discuss is why I believe this is the right choice, where we are, and where we can go with it.

What exactly does that give us?

Julia was created to solve the many-language problem in scientific computing. Before, people would have to write the inner loops in C/Fortran and bind them to a scripting language that was never designed with performance in mind. Julia has done extremely well at solving this problem via multiple dispatch. Multiple dispatch is not just about ease of use, but it is also the core of what allows Julia to be fast. From a quote I am stealing from IRC: “Julia: come for the fast loops, stay for the type-system”.

In my view, the many-language problem always had an uglier cousin: the many-API problem. Every package has its own way of interacting with the user, and it becomes a burden to remember how all of them work. However, in Julia there seems to be a beautiful emergence of packages which solve the many-API problem via Julia’s multiple-dispatch and metaprogramming functionalities. Take for example Plots.jl. There are many different plotting packages in Julia. However, through Plots.jl, you can plot onto any “backend” (other plotting package) using just one API. You can mix and match plotting in PyPlot (matplotlib), GR, Plotly, and unicode. It’s all the same commands. Another example of this is JuMP. Its core idea is solver independence: you take your optimization problem, define the model in JuMP’s lingo, and then plug into many different solvers all by flipping a switch.

DifferentialEquations.jl is extending this idea to the realm of differential equations. By using the keyword `alg=:ode45`, the solver can call functions from ODE.jl. By changing it to `alg=:dopri5`, DifferentialEquations.jl will solve your ODE problem using the coveted dopri5 Fortran software. The complexity of learning and understanding many different APIs is no longer a requirement for trying different algorithms.
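
Flipping between backends is therefore a one-keyword change:

sol = solve(prob,timespan,alg=:ode45)    # pure-Julia method from ODE.jl
sol = solve(prob,timespan,alg=:dopri5)   # Hairer's dopri5 Fortran code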

But why “Differential Equations”? Isn’t that broad?

Sure, there are packages for solving various types of differential equations, all specializing in one little part. But when I was beginning my PhD, I quickly found that these packages were missing something. The different types of differential equations that we encounter are often not so different, and many times embody the same problem: a PDE when discretized is a system of ODEs, the probability distribution of evolving SDEs is a PDE (a form of the Heat Equation), and all of the tools that they use to get good performance are the same. Indeed, many contemporary research questions can be boiled down to addressing the question: what happens if we change the type of differential equation? What happens if we add noise to our ODEs which describe population dispersal? What happens if we add to our model that RNA production reads a delayed signal? Could we make this high-dimensional PDE computationally feasible via a Monte Carlo experiment combined with the Feynman-Kac theorem?

Yet, our differential equations libraries are separate. Our PDEs are kept separate from our SDEs, while our delay equations hang out in their own world. Mixing and matching solvers requires learning complex APIs, usually large C/Fortran libraries with opaque function names. That is what DifferentialEquations.jl is looking to solve. I am building DifferentialEquations.jl as a hub for differential equations in the general sense of the term.

If you have defined an SDE problem, then via the Forward Kolmogorov equation there is a PDE associated to the SDE. In many cases, like the Black-Scholes model, both the SDE and the PDE are canonical ways of looking at the same problem. The solver should translate between them, and the solver should handle both types of differential equations. With one API and the functionality for these contained within the same package, no longer are they separate entities to handle computationally.

Where are we currently?

DifferentialEquations.jl is still very young. Indeed, the project only started a few months ago, and during that time period I was taking 6 courses. However, the package already has a strong core.

In fact, there are already a lot of features which are unique to DifferentialEquations.jl.

You may have been thinking, “but I am a numerical analyst. How could this program help me?”. DifferentialEquations.jl has a bunch of functionalities for quickly designing and testing algorithms. All of the DEProblems allow for one to give them the analytical solution, and the solvers will then automatically calculate the errors. Thus by using some simple macros, one can define new algorithms in just a few lines of code, test the convergence, benchmark times, and have the algorithm available as an `alg` option in no time (note: all of the ODE solvers were written in one morning!). It is easy to define the loop, and the rest of the functionality will come by default. It’s both a great way to test algorithms and to share algorithms. Contributing will both help you and DifferentialEquations.jl!

Where are we going?

I have big plans for DifferentialEquations.jl. For example:

  • I will be rolling out an efficiency testing suite so that you can just specify the algorithms you’d like to use to solve a problem, and along what convergence axis (i.e. choose a few \Delta ts, or change the tolerances), and it will output comparisons of the computational efficiencies and make some plots. It will be similar in functionality to the ConvergenceSimulation suite.
  • Finite difference methods for Heat and Poisson equation. These are long overdue for the research I do.
  • Changing the tableaus in ODEs and SDEs to StaticArrays so they are stack allocated. This has already been tested and works well on v0.5.
  • Higher-order methods for parabolic SPDEs (a research project with promising results!).
  • Blazing fast adaptivity for SDEs. (Once the paper I have submitted for it is published, it will be available. It’s already implemented!)
  • High-stability high order methods for SDEs (another research project).
  • Parallel methods. I have already implemented parallel (Xeon Phi) solvers and described them in previous blog posts. They simply need to be integrated into DifferentialEquations.jl. I would like to have native GPU solvers as well.
  • Delay and algebraic differential equations.
  • Wrapping more popular solvers. I’d like to add Sundials, LSODE, and PETSc to the list.
  • A web interface via Escher.jl to define DEProblems and get the solution plots. I am looking to have this hosted as an XSEDE Gateway.

If you’d like to influence where this project is going, please file an issue on the Github repository. I am always open for suggestions.

I hope this gives you a good idea on what my plans are for DifferentialEquations.jl. Check out the documentation and give the package a whirl!
