Growing a Compiler – Getting to Machine Learning from a General Purpose Compiler

The Compilers for Machine Learning workshop was recently held at CGO 2019. Since compiler techniques affect a large part of the machine learning stack, the workshop aimed to highlight research that applies compiler techniques and algorithms to optimizing machine learning workloads. The workshop included talks from various projects – Julia (Julia Computing), TVM (UW), Glow (Facebook), XLA (Google), nGraph (Intel), TensorRT (Nvidia), and the soon-to-be-released MLIR (Google).

Our talk introduced the abstractions in the Julia language and the kinds of compiler transforms involved in implementing them. We then took a deep dive into the interplay of dynamic semantics and static analysis – our JAOT (Just-Ahead-Of-Time) analysis.
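As a rough illustration of what JAOT means in practice (our own sketch, not code from the talk): a Julia function is written generically, and the compiler specializes and statically analyzes it for each concrete argument type just ahead of its first call:

```julia
using InteractiveUtils  # for @code_typed (loaded automatically in the REPL)

# One generic definition; Julia specializes it per concrete
# argument type just ahead of the first call.
f(x) = 2x + 1

f(3)    # triggers compilation of a specialized Int method
f(3.0)  # triggers a separate Float64 specialization

# Inspect the type-inferred IR the compiler produced:
@code_typed f(3)
```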
Building on these capabilities, the Zygote system implements automatic differentiation, effectively treating it as a compiler problem and giving us differentiable programming for free.
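To give a flavor of what that enables, here is a minimal sketch of ours using Zygote's `gradient` function: ordinary Julia code is differentiable as-is, with no tracing and no special tensor types:

```julia
using Zygote

# An ordinary Julia function – no tapes, no special array types.
loss(w, x) = sum(abs2, w .* x)

# Zygote derives the gradient by transforming the function's IR,
# returning one gradient per argument.
gw, gx = gradient(loss, [1.0, 2.0], [3.0, 4.0])
```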
Finally, compiler backends for GPUs and TPUs give us high-performance execution.
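On the GPU side, the key point is that the same generic Julia code compiles down to device kernels once the data lives on the GPU. A minimal sketch, assuming the CuArrays.jl package:

```julia
using CuArrays

x = cu(rand(Float32, 10_000))  # move the data to the GPU
y = 2 .* x .+ 1                # the broadcast fuses into a single GPU kernel
sum(y)                         # reductions run on the device as well
```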
All this comes together beautifully in Neural ODEs, which we had to show off as our first slide!
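To see why the pieces compose, consider this minimal neural-ODE sketch (our illustration, assuming Flux.jl and OrdinaryDiffEq.jl; DiffEqFlux.jl packages this pattern together with differentiation through the solver):

```julia
using Flux, OrdinaryDiffEq

# A neural ODE: a neural network defines the dynamics du/dt.
nn = Chain(Dense(2, 16, tanh), Dense(16, 2))
dudt(u, p, t) = nn(u)

prob = ODEProblem(dudt, Float32[1.0, 0.0], (0.0f0, 1.0f0))

# Any stock ODE solver handles the neural dynamics; differentiating
# through the solve is what makes the model trainable.
sol = solve(prob, Tsit5())
```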

Our presentation is available online. A PDF is also available in case Google Docs is blocked.