Lots of distributions have a density that's easy to evaluate but are hard to sample from. So when we need samples from such a distribution, we have to resort to some tricks. We'll explore the connection between two of these, importance sampling and variational inference, and see a way to use them together for fast inference.
Importance sampling

Importance sampling aims to make it easy to compute expected values. Say we have a distribution p, and we'd like to compute the average of some function f under that distribution (equivalently, the expected value of the push-forward of p along f).
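The trick: if q is a distribution we *can* sample, and p/q is easy to evaluate, then E_p[f] = E_q[f · p/q], so a weighted Monte Carlo average over draws from q estimates the expectation under p. A minimal sketch (the particular target, proposal, and f below are illustrative choices, not from the post):

```julia
using Distributions, Statistics, Random

Random.seed!(42)

# Pretend p is hard to sample; we use a known target here so we can
# check the answer against a closed form.
p = Normal(1.0, 0.5)   # target: easy to evaluate
q = Normal(0.0, 2.0)   # proposal: easy to sample, covers p's support
f(x) = x^2

# E_p[f] = E_q[f(x) * p(x)/q(x)], estimated by a weighted sample average
N = 100_000
xs = rand(q, N)
w = pdf.(p, xs) ./ pdf.(q, xs)   # importance weights
estimate = mean(f.(xs) .* w)

# For Normal(1, 0.5): E_p[x^2] = μ^2 + σ^2 = 1.25, so `estimate` should be close.
```

The key requirement is that q puts mass everywhere p does; a proposal with lighter tails than the target makes the weights blow up and the estimator's variance explode.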
Today I'm looking at the Distributions package. Let's get things rolling by loading it up. There's some overlap between the functionality in Distributions and what we saw yesterday in the StatsFuns package. So, rather than revisiting functions that evaluate PDFs and CDFs, we'll focus on sampling…
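A minimal sampling sketch to set the stage (the particular distribution is just an example):

```julia
using Distributions, Random

Random.seed!(1)

d = Gamma(2.0, 3.0)    # shape 2, scale 3 — an arbitrary choice
x = rand(d)            # a single draw
xs = rand(d, 1_000)    # a vector of 1_000 draws

# The same distribution object also answers the usual queries:
mean(d), var(d)        # (6.0, 18.0) for Gamma(2, 3)
```

The nice part of the design is that `rand` composes with any distribution object the same way, so everything below works uniformly across the univariate, multivariate, and matrix-variate families.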
I know how to use Stan, and I know how to use Turing. But how do those packages perform the posterior sampling for the underlying models? Can I write a posterior distribution down and get AdvancedHMC.jl to sample it? That's exactly what I want to do here.
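Here's roughly what that might look like with AdvancedHMC's low-level API, following the shape of its README. A caveat: names like `DiagEuclideanMetric`, `find_good_stepsize`, and `StanHMCAdaptor` come from older releases, and newer versions route the log density through LogDensityProblems.jl, so treat this as a sketch rather than copy-paste-ready code. The log posterior itself is a stand-in: a standard 2-d Gaussian, known only up to its normalizing constant, which is all HMC needs.

```julia
using AdvancedHMC, ForwardDiff, Statistics, Random

Random.seed!(0)

# Unnormalized log posterior — a standard 2-d Gaussian as a stand-in
# for whatever model you'd write down by hand.
ℓπ(θ) = -0.5 * sum(abs2, θ)

D = 2
θ0 = randn(D)
n_samples, n_adapts = 2_000, 1_000

metric      = DiagEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)   # gradients via ForwardDiff
ϵ0          = find_good_stepsize(hamiltonian, θ0)
integrator  = Leapfrog(ϵ0)
kernel      = NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)
adaptor     = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))

samples, stats = sample(hamiltonian, kernel, θ0, n_samples, adaptor, n_adapts; progress=false)
```

This is the same machinery Turing drives under the hood; writing it out by hand makes each moving part — metric, integrator, trajectory kernel, adaptation — something you can swap independently.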