Parallel Neural Styles on Video Powered by Azure

Neural Styles is a neural-network-based algorithm used to learn
artistic styles. It is commonly used in apps such as Prisma to beautify images.
The algorithm extracts stylistic features from a "style image" and then applies
them to a "content image".

In a previous blog post, we described
the model and outlined the training process. We have now scaled our
algorithm from individual images to video, using Julia's built-in parallel primitives,
which make it trivially easy to scale algorithms to multiple nodes.

We ran the transformation on an Azure Data Science VM. The Azure DSVM is a pre-built image with many data science libraries included. It contains an installation of JuliaPro, which includes over 100 major Julia packages. It is by far the easiest way to get started with running Julia code on the Azure ecosystem.

Our VM came with an Intel(R) Xeon(R) CPU E5-2660 with
8 physical cores and 56 GB of RAM. We first extracted the frames from the video with VideoIO.jl, and then processed them in parallel with
8 Julia worker processes. Julia has several high-level primitives for distributed computing; we used
the parallel map function pmap to split the frames among the workers, each of which
then texturizes its share. For larger videos, exactly the same code would run on a cluster of multiple physical or virtual servers.
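For context, the setup step looks roughly like the following. This is a minimal sketch rather than our exact script: the file name "input.mp4" is a placeholder, the worker count simply matches our 8 cores, and the frame-reading loop uses VideoIO.jl's openvideo/read interface.

# A minimal sketch of the setup, assuming the input video is
# saved locally as "input.mp4" (a placeholder name)
using Distributed        # addprocs lived in Base on older Julia versions
addprocs(8)              # one worker process per physical core

using VideoIO
reader = VideoIO.openvideo("input.mp4")
frames_from_video = []   # an untyped vector keeps the sketch simple
while !eof(reader)
    push!(frames_from_video, read(reader))   # read the next frame as an image array
end
close(reader)

With the workers started and the frames collected, the texturizing step itself is a single parallel map: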

@everywhere function f(frame)
    # Apply the pre-trained "fire" style model to one frame
    texturize(frame, "fire", "style")
end

# Distribute the frames across the worker processes
pmap(f, frames_from_video)
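Note that pmap schedules work dynamically: a worker is handed the next frame as soon as it finishes its current one, so the load stays balanced even when some frames take longer to texturize than others.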

After the parallel processing, we stitched the styled frames back together with ffmpeg to form the final video.
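Assuming the styled frames were written out as numbered PNG files (frame_0001.png, frame_0002.png, and so on; the file pattern and frame rate below are illustrative, not our exact settings), the stitching step can be driven straight from Julia:

# Re-encode the numbered frames into an H.264 video
run(`ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4`)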

Results

This is the original video:

These are our results from running the video through two models: "fire" and "frost".

  • Fire:

  • Frost: