Tag Archives: TensorFlow

Engineering Trade-Offs in Automatic Differentiation: from TensorFlow and PyTorch to Jax and Julia

By: Christopher Rackauckas

Re-posted from: http://www.stochasticlifestyle.com/engineering-trade-offs-in-automatic-differentiation-from-tensorflow-and-pytorch-to-jax-and-julia/

To understand the differences between automatic differentiation libraries, let’s talk about the engineering trade-offs that were made. I would personally say that none of these libraries are “better” than another, they simply all make engineering trade-offs based on the domains and use cases they were aiming to satisfy. The easiest way to describe these trade-offs is to follow the evolution and see how each new library tweaked the trade-offs made of the previous.

Early TensorFlow used a graph building system, i.e. it required users to essentially define variables in a specific graph language separate from the host language. You had to define “TensorFlow variables” and “TensorFlow ops”, and the AD would then be performed on this static graph. Control flow constructs were limited to the constructs that could be represented statically. For example, an `ifelse` function statement is very different from a conditional `if` then `else` of code because `ifelse` would semantically be the same as always calling both branches and then choosing the result, thus only having a single code path (though I say semantically because further compiler optimizations may and usually do reduce that). This static sublanguage is then represented in an intermediate representation (IR) known as XLA which then performed a lot of simplification of linear algebra, and AD was done using the simple graph representation algorithms given that there was no true control flow at this representation. While this gives a lot of efficiency (XLA is great for simplification because it can easily see the whole world), it of course had some major downsides in terms of flexibility and convenience.

Thus you can almost think of this as a source code transformation because all of the autodiff is done on essentially an IR for a language which is not the same as the host language, but for the most part it was requiring the user does the translation to the new language for the AD system which is… rather inconvenient.

PyTorch came along to solve the flexibility and convenience issues by instead using a tape-based method. It generates the code to autodiff every time you run the forward pass by simply storing the operation that it sees in a given forward pass, and then differentiates that set of operations in reverse. This “building of the tape” is done by operator overloading as part of the Tensor type PyTorch says you need to use. How it works is easy to see. For example, f(2.0) would take the first branch of the if statement and then run the while loop 5 times. So then the AD pass would take that set of operations and start running backpropagation through 5 passes of the while loop and back through the first branch. Notice that by using this form, the AD does not “see” any dynamic control flow: that was all in Python, but not in the tape. Thus the AD does not have to handle dynamic control flow, and this makes it very easy to handle a lot of odd cases of the language. The downside to this approach though is that the AD is “per value”, i.e. you cannot do a lot of optimizations on the backwards passes because you will not necessarily ever see the same backwards pass again, and this allows for a lot less optimization.

Does this harm PyTorch’s efficiency beyond repair? Well, no and yes. No it does not harm efficiency in the sense of, most machine learning algorithms are so heavily reliant on expensive kernels, such as matrix multiplication (`A*x`), `conv`, etc., so the amount of work per operation is extremely high in most ML applications that it hides the overhead of this approach. This allows the PyTorch team to spend most of its time optimizing the 2,000+ operators that it provides, and so most people in ML see PyTorch as fast because it comes with fast kernels (fast conv calls, fast GPU linear algebra) despite the AD overhead. That said, you can very easily run into cases where AD and Python interpreter overhead are not washed out. Cases of that are where your arrays are small or where a lot of scalar operations are happening, for example the Julia vs PyTorch Neural ODE benchmarks on cases matching scientific model discovery workflows you see a 100x performance improvement in Julia (even major differences without AD in the ODE and SDE solvers), and can mostly be attributed to language and AD overhead due to the small kernels used in these cases. For this reason the PyTorch team has been working on things like `torch.@jit` as a separate sublanguage that can compile and optimize differently from the rest of the code, specific to handling these cases, though there’s a lot of discussion of the long-term viability of that approach. But anyways, PyTorch has done really well because it made good choices for its domain of use.

So then TensorFlow Eager (2.0) comes around as adds dynamic control flow support in a manner similar to PyTorch as a sad attempt to get everyone back, but of course then it doesn’t play nicely will all of the XLA tooling (because it cannot see the whole graph of all possible operations for all input values to optimize it well) so it didn’t hit the TensorFlow speeds everyone was expecting, so it was kind of the worst of all worlds.

Subsequent tools then all sought ways to either expand the domains of these ideas or try to mix some of the advantages of the two sides. Jax is one of those. Jax uses non-standard interpretation to build a copy of the full code in its own IR to then perform AD on, finally lowering it to TensorFlow’s XLA for optimizations. Jax’s non-standard interpretation is kind of like operator overloading in that it has special objects walk through code in order to build out the exprs (this is called the “tracing” step). But wait, how is it able to trace the full code if there’s dynamic control flow, won’t it have the same issues as PyTorch that it only sees parts of the full code’s potential paths? Indeed that is true, and that’s why it doesn’t want you to use full dynamic control flow and instead use Jax primitives like lax.while, which are function calls that can be caught during tracing to avoid the code having true dynamic behavior at trace time. Also, for this to be true you need that what your function does can be completely determined by its inputs, i.e. the functions must be “pure”. For this reason Jax requires programming in a functional style with pure functions rather than the object-oriented standard of Python, thus a notable trade-off of the abstract interpretation approach. But what you essentially get is a more natural graph builder for TensorFlow, because at the end of the day it ends up in TensorFlow’s XLA IR, and so you get the same efficiency there but in a form that can look and feel a lot more natural. The downside of course is that you still don’t have true dynamism which is why those linked primitives exist, and why they are not well optimized as described in Jax – The Sharp Bits. However, “most” ML algorithms don’t use very much dynamism (example: recurrent neural networks know how many layers they have, they don’t have a while loop iterate to tolerance), and so “most” algorithms tend to do well in this sublanguage. In that sense, it can optimize a lot of codes rather naturally.

What about keeping dynamism in the AD?

This of course then begs the question, is it possible to keep the full dynamism of the host language in the AD system? It is possible, but it is hard. This is what a lot of the Julia AD tools have focused on with source code transformations (along with Swift for TensorFlow). However, since source code is “for humans”, it can be a rather difficult level to algorithmically work on. Thus instead these tools work on lowered IR, where these lowered representations remove a lot of the “cruft” of syntax to give a much smaller support surface. This was the core of Zygote.jl’s approach where it saw that by acting on the SSA IR it could directly support control flow like while loops without unrolling them into sets of operations (like PyTorch or TensorFlow Eager) or only supporting a sublanguage of control flow (like Jax). This is essentially done by converting while loops and other dynamic constructs into static (source code) representations that have new lines of code in there for things like stacks that keep information about the forward pass (like which branch is taken), and then these stacks are accessed and used in the generated backwards pass. Thus what code is generated is not dynamic (a while loop forward gives a for loop in reverse), but the generated backwords pass is dynamic (because it uses the stack to tell it how many times to walk the for loop). This allows AD to have a single code for all branches (unlike the tape-building forms) and thus it can optimize more like TensorFlow but in a world where the dynamic control flow is not eliminated.

Well that sounds like the best of both worlds, so why isn’t everyone using it? There’s two factors involved in that. First, accepting that your AD will have to deal with the full dynamic nature of an entire programming language means accepting a much more difficult job. The whole purpose of the AD approaches in TensorFlow/PyTorch/Jax is for these constructs to be eliminated before the AD, so they have a much smaller surface of language support required. Because of this added complexity, this pretty much guarantees you cannot use Python because it’s such a crazy language in terms of what it allows with dynamism (fun fact, the Jax folks at Google Brain did have a Python source code transform AD at one point but it was scrapped essentially because of these difficulties), and so people working on these solutions flocked to languages with clear syntax that is easy for compilers to optimize, i.e. Julia and Swift. Python has most of the ML crowd, so that creates a barrier to entry.

But even then, the problem is still very hard. In Julia it was found that Zygote acts on too high of an IR, i.e. before compiler optimizations, which then requires you do AD on unoptimized code only delete most of the work later, and so it would be better for it to go even lower. This is the reason why the Diffractor.jl project started. But there’s even a reason to act lower, since some optimization only occurs at the LLVM level, which is why Julia developers started directly building an AD system that acts on LLVM’s IR itself known as Enzyme (note that while this project included members of the Julia Lab like Valentin, because it acts at the LLVM level it is applicable to any LLVM compiled language, such as C/C++ (Clang) or Rust). There is then a trade-off that occurs with source code transform methods as you go lower and lower in the IRs which I describe in a separate post. tl;dr there: Enzyme can act after compiler optimizations so much of the higher level information might be deleted (at least, without completion of dialects like MLIR which aren’t quite ready). Enzyme only sees the barest of low level code so it may not have the high level linear algebra definitions to do all of the linear algebra simplifications, like how XLA will fuse many matrix-vector multiplications into a matrix-matrix multiplication, since some of the function calls may have been inlined and deleted. Optimizing this remining loopy code to reach BLAS speeds is thus as hard as generating looping code that reaches BLAS speeds, and history shows this is hard but not impossible. Additionally, function calls to a nonlinear solver may have already been deleted, so optimized adjoints which outperform the direct differentiation of code, like in the case of Deep Equilibrium Models (DEQs), may end up less optimized. But that lowest level allows for very efficient scalar code differentiation and mutation support. On the other hand, Diffractor uses Julia’s typed IR so it can apply higher level rules easily and consistently, and in theory it can do transformations similar to XLA (i.e. keeping BLAS calls intact and fusing them). But writing such analyses on a fully dynamic compute IR is difficult enough that it has not been done. Tooling around escape analysis and shape propagation are being built to try and enable such optimizations, but the fact remains that it’s a lot more work to do it on a language IR instead of a sublanguage graph like XLA. In theory you could have compiler passes prove that a function is semi-static in the sense of XLA and get the same optimizations as Jax or TensorFlow, but that doesn’t happen today and it’s not easy to do. The future of Julia AD systems will likely mix the Enzyme and Diffractor approaches to tackle this issue, but the clear trade-off being made here is generality at the cost of implementation complexity.

The second factor, and probably the more damning one, is that most ML codes don’t actually use that much dynamism. Recurrent neural networks, transformers, convolutions, etc. all have simpler forms of dynamism which in some sense is quite static. That’s an important trade-off most people don’t always consider: why solve problems your users don’t have? The number of layers you have do not depend on the values coming out of the layers. Support for dynamism for ML workflows is thus mostly about convenience, not necessity. When algorithms do have dynamism, in most cases you can get away with wrapping it as an operation in the language, i.e. defining a function and defining the adjoint derivative for that function. This for example is how Jax supports ODEs even though adaptive ODE solvers require knowing the calculated values in order to determine the number of steps. You cannot differentiate an ODE code with Jax, but if you use an ODE solver with a defined adjoint you are okay. While this does mean that some algorithms are not possible with Jax (at least without forgoing a lot of optimizations), and algorithms where differntiating solvers is fundamentally different from adjoint definitions can limit which performance/stability trade-offs can be made (see the supplemental section 8 for details in the case of ODEs about stability of “discrete adjoints”]), these factors seem to be rather rare in standard ML use cases which is why most people haven’t bothered to learn a new programming language to get around these issues.

That leaves us where we are today. Are more ML algorithms of the future going to require handling more dynamic structures? Is optimizing scalar and mutating code going to be important for people using AD systems? The reason why I know this story so well is because the answer for my domain, scientific machine learning (SciML), is yes. Climate models use mutation because reallocating huge buffers would greatly effect performance. Adaptive solvers on stiff equations are a fact of life, so simple adjoints used in PyTorch and Jax are unstable and simply give Inf as the gradients in these cases. Time will tell whether this physics-informed, expert-guided, science-guided, scientific machine learning domain becomes standard, but hopefully this describes how all of the choices made here were not “better” or “worse”, but instead it’s all about domain-specific engineering trade-offs.

The post Engineering Trade-Offs in Automatic Differentiation: from TensorFlow and PyTorch to Jax and Julia appeared first on Stochastic Lifestyle.

Moving to Julia 1.1

By: Sören Dobberschütz

Re-posted from: https://tensorflowjulia.blogspot.com/2019/03/moving-to-julia-11.html

Moving from Julia 0.6 to 1.1


Finally, all files in the GitHub repository have been updated to be able to run on Julia 1.1. In order to be able to run them (at the time of writing), the developmental versions of the Tensorflow.jl and PyCall.jl packages need to be installed. Some notable changes are listed below:

General

  • The shuffle command has been moved to the Random package as Random.shuffle().
  • linspace has been replaced with range(start, stop=…, length=…).
  • To determine the length of a matrix, use size(…,2) instead of length().
  • For accessing the first and last part of datasets, head() as been replaced with first(), and tail() with last().
  • For arithmetic calculations involving constants and arrays, the better syntax const .- array needs to be used instead of const – array. In connection with this, a space might be required before the operator; i.e. to avoid confusion with number types use 1 .- array, and not 1.-array.
  • Conversion of a dataframe df to a matrix can be done via convert(Matrix, df) instead of convert(Array, df).

Exercise 8

  • The creation of an initially undefined vector etc. now requires the undef keyword as in activation_functions = Vector{Function}(undef, size(hidden_units,1)).
  • An assignment of a function to multiple entries of a vector requires the dot-operator: activation_functions[1:end-1] .= z->nn.dropout(nn.relu(z), keep_probability)

Exercise 10

  • For this exercise to work, the MNIST.jl package needs an update. A quick fix can be found in the repository together with the exercise notebooks.
  • Instead of flipdim(A, d), use reverse(A, dims=d).
  • indmax has been replaced by argmax.
  • Parsing expressions has been moved to the Meta package and can be done by Meta.parse(…).

Exercise 11

  • The behaviour of taking the transpose of a vector has been changed – it now creates an abstract linear algebra object. We use collect() on the transpose for functions that cannot handle the new type.
  • To test if a string contains a certain word, contains() has been replaced by occursin().




Intro to Sparse Data and Embeddings

By: Sören Dobberschütz

Re-posted from: https://tensorflowjulia.blogspot.com/2018/09/intro-to-sparse-data-and-embeddings.html

This is the final exercise of Google’s Machine Learning Crash Course. We use the ACL 2011 IMDB dataset to train a Neural Network in predicting wether a movie review is favourable or not, based on the words used in the review text.

There are two notable differences from the original exercise:

  • We do not build a proper input pipeline for the data. This creates a lot of computational overhead – in principle, we need to preprocess the whole dataset before we start training the network. In practise, this if often not feasible. It would be interesting to see how such a pipeline can be implemented for TensorFlow.jl. The Julia package MLLabelUtils.jl might come handy for this task.
  • When visualizing the embedding layer, our Neural Network builds effectively a 1D-representation of keywords to describe if a movie has a favorable review or not. In the Python version, a real 2D embedding is obtained (see the pictures). The reasons for this difference are unknown.


Julia embedding – effectively a 1D line

Python embedding




The Jupyter notebook can be downloaded here






This notebook is based on the file Embeddings programming exercise, which is part of Google’s Machine Learning Crash Course.
In [0]:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Intro to Sparse Data and Embeddings

Learning Objectives:
  • Convert movie-review string data to a feature vector
  • Implement a sentiment-analysis linear model using a feature vector
  • Implement a sentiment-analysis DNN model using an embedding that projects data into two dimensions
  • Visualize the embedding to see what the model has learned about the relationships between words
In this exercise, we’ll explore sparse data and work with embeddings using text data from movie reviews (from the ACL 2011 IMDB dataset). Open and run TFrecord Extraction.ipynb Colaboratory notebook to extract the data from the original .tfrecord file as Julia variables.

Setup

Let’s import our dependencies and open the training and test data. We have exported the test and training data as hdf5 files in the previous step, so we use the HDF5-package to load the data.
In [1]:
using Plots
using Distributions
gr()
using DataFrames
using TensorFlow
import CSV
import StatsBase
using PyCall
@pyimport sklearn.metrics as sklm
using Images
using Colors
using HDF5

sess=Session(Graph())
Out[1]:
Session(Ptr{Void} @0x0000000128c2ac30)
Open the test and training raw data sets.
In [2]:
c = h5open("train_data.h5", "r") do file
global train_labels=read(file, "output_labels")
global train_features=read(file, "output_features")
end
c = h5open("test_data.h5", "r") do file
global test_labels=read(file, "output_labels")
global test_features=read(file, "output_features")
end
train_labels=train_labels'
test_labels=test_labels';
2018-09-16 16:21:02.045635: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
Have a look at one data set:
In [4]:
test_labels[301,:]
Out[4]:
1-element Array{Float32,1}:
1.0
In [5]:
test_features[301]
Out[5]:
"['\"' 'don' \"'\" 't' 'change' 'your' 'husband' '\"' 'is' 'another' 'soap'\n 'opera' 'comedy' 'from' 'producer' '/' 'director' 'cecil' 'b' '.' 'de'\n 'mille' '.' 'it' 'is' 'notable' 'as' 'the' 'first' 'of' 'several' 'films'\n 'he' 'made' 'starring' 'gloria' 'swanson' '.' 'i' 'guess' 'you' 'could'\n 'also' 'call' 'it' 'a' 'sequel' 'of' 'sorts' 'to' 'his' '\"' 'old' 'wives'\n 'for' 'new' '\"' '(' '####' ')' '.' 'james' '(' 'elliot' 'dexter' ')'\n 'and' 'leila' '(' 'swanson' ')' 'porter' 'are' 'a' 'fortyish' 'couple'\n 'where' 'james' 'has' 'gone' 'to' 'seed' 'and' 'become' 'slovenly' 'and'\n 'lazy' '.' 'he' 'has' 'a' 'penchant' 'for' 'smelly' 'cigars' 'and'\n 'eating' 'raw' 'onions' '.' 'he' 'takes' 'his' 'wife' 'for' 'granted' '.'\n 'leila' 'tries' 'to' 'get' 'him' 'to' 'straighten' 'out' 'to' 'no'\n 'avail' '.' 'one' 'night' 'at' 'a' 'dinner' 'party' 'at' 'the' 'porters'\n ',' 'leila' 'meets' 'the' 'dashing' 'schyler' 'van' 'sutphen' '(' 'now'\n \"there's\" 'a' 'moniker' ')' ',' 'the' 'playboy' 'nephew' 'of' 'socialite'\n 'mrs' '.' 'huckney' '(' 'sylvia' 'ashton' ')' '.' 'she' 'invites' 'leila'\n 'to' 'her' 'home' 'for' 'the' 'weekend' 'to' 'make' 'james' '\"' 'miss'\n 'her' '\"' '.' 'once' 'there' 'schyler' 'begins' 'to' 'put' 'the' 'moves'\n 'on' 'her' ',' 'promising' 'her' 'pleasure' ',' 'wealth' 'and' 'love' ','\n 'if' 'she' 'will' 'leave' 'her' 'husband' 'and' 'go' 'with' 'him' '.'\n 'the' 'sequences' 'involving' \"leila's\" 'imagining' 'this' 'promised'\n 'new' 'life' 'are' 'lavishly' 'staged' 'and' 'forecast' 'de' \"mille's\"\n 'epic' 'costume' 'drams' 'later' 'in' 'his' 'career' '.' 'leila' ','\n 'bored' 'with' 'her' 'marriage' 'and' 'her' 'disinterested' 'husband' ','\n 'divorces' 'james' 'and' 'marries' 'the' 'playboy' '.' 'james'\n 'ultimately' 'realizes' 'that' 'he' 'has' 'lost' 'the' 'only' 'thing'\n 'that' 'mattered' 'to' 'him' 'and' 'begins' 'to' 'mend' 'his' 'ways' '.'\n 'he' 'shaves' 'off' 'his' 'mustache' ',' 'works' 'out' ',' 'shuns'\n 'onions' 'and' 'reacquires' 'some' 'manners' '.' 'meanwhile' ',' 'all'\n 'is' 'not' 'rosy' 'with' \"leila's\" 'new' 'marriage' '.' 'schyler' 'it'\n 'seems' 'likes' 'to' 'gamble' 'and' 'has' 'taken' 'up' 'with' 'the'\n 'gold' 'digging' 'nanette' '(' 'aka' 'tootsie' ',' 'or' 'some' 'such'\n 'name' ')' '(' 'julia' 'faye' ')' '.' 'schyler' 'loses' 'all' 'of' 'his'\n 'money' 'and' 'steals' \"leila's\" 'diamond' 'ring' 'to' 'cover' 'his'\n 'losses' '.' 'one' 'fateful' 'day' ',' 'leila' 'meets' 'the' '\"' 'new'\n '\"' 'james' 'and' 'is' 'taken' 'by' 'the' 'changes' 'in' 'him' '.'\n 'james' 'drives' 'her' 'home' 'and' 'becomes' 'aware' 'of' 'her'\n 'situation' 'and' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.'\n '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.'\n '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.' '.'\n 'this' 'film' 'marked' 'the' 'beginning' 'of' 'gloria' \"swanson's\" 'rise'\n 'to' 'super' 'stardom' 'in' 'a' 'career' 'that' 'would' 'rival' 'that'\n 'of' 'mary' 'pickford' '.' 'barely' '##' 'years' 'of' 'age' ',' 'she'\n 'had' 'begun' 'her' 'career' 'in' 'mack' 'sennett' 'two' 'reel'\n 'comedies' 'as' 'a' 'teen' 'ager' '.' 'elliot' 'dexter' 'was' 'almost'\n '##' 'at' 'this' 'time' 'but' 'he' 'and' 'swanson' 'make' 'a' 'good'\n 'team' ',' 'although' \"it's\" 'hard' 'to' 'imagine' 'anyone' 'tiring' 'of'\n 'the' 'lovely' 'miss' 'swanson' 'as' 'is' 'the' 'case' 'in' 'this' 'film'\n '.' 'dexter' 'and' 'sylvia' 'ashton' 'had' 'appeared' 'in' 'the'\n 'similar' '\"' 'old' 'wives' 'for' 'new' '\"' 'where' 'the' 'wife' 'had'\n 'gone' 'to' 'seed' 'and' 'the' 'husband' 'was' 'wronged' '.' 'also' 'in'\n 'the' 'cast' 'are' 'de' 'mille' 'regulars' 'theodore' 'roberts' 'as' 'a'\n 'bishop' 'and' 'raymond' 'hatton' 'as' 'a' 'gambler' '.']"

Building a Sentiment Analysis Model

Let’s train a sentiment-analysis model on this data that predicts if a review is generally favorable (label of 1) or unfavorable (label of 0).
To do so, we’ll turn our string-value terms into feature vectors by using a vocabulary, a list of each term we expect to see in our data. For the purposes of this exercise, we’ve created a small vocabulary that focuses on a limited set of terms. Most of these terms were found to be strongly indicative of favorable or unfavorable, but some were just added because they’re interesting.
Each term in the vocabulary is mapped to a coordinate in our feature vector. To convert the string-value terms for an example into this vector format, we encode such that each coordinate gets a value of 0 if the vocabulary term does not appear in the example string, and a value of 1 if it does. Terms in an example that don’t appear in the vocabulary are thrown away.
NOTE: We could of course use a larger vocabulary, and there are special tools for creating these. In addition, instead of just dropping terms that are not in the vocabulary, we can introduce a small number of OOV (out-of-vocabulary) buckets to which you can hash the terms not in the vocabulary. We can also use a feature hashing approach that hashes each term, instead of creating an explicit vocabulary. This works well in practice, but loses interpretability, which is useful for this exercise.

Building the Input Pipeline

First, let’s configure the input pipeline to import our data into a TensorFlow model. We can use the following function to parse the training and test data and return an array of the features and the corresponding labels.
In [3]:
function create_batches(features, targets, steps, batch_size=5, num_epochs=0)
"""Create batches.

Args:
features: Input features.
targets: Target column.
steps: Number of steps.
batch_size: Batch size.
num_epochs: Number of epochs, 0 will let TF automatically calculate the correct number
Returns:
An extended set of feature and target columns from which batches can be extracted.
"""
if(num_epochs==0)
num_epochs=ceil(batch_size*steps/size(features,1))
end

features_batches=copy(features)
target_batches=copy(targets)

for i=1:num_epochs
select=shuffle(1:size(features,1))
if i==1
features_batches=(features[select,:])
target_batches=(targets[select,:])
else
features_batches=vcat(features_batches, features[select,:])
target_batches=vcat(target_batches, targets[select,:])
end
end
return features_batches, target_batches
end
Out[3]:
create_batches (generic function with 3 methods)
In [4]:
function construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.

Args:
input_features: The numerical input features to use.
Returns:
A set of feature columns
"""
out=convert(Array, input_features[:,:])
return convert.(Float64,out)
end
Out[4]:
construct_feature_columns (generic function with 1 method)
In [5]:
function next_batch(features_batches, targets_batches, batch_size, iter)
"""Next batch.

Args:
features_batches: Features batches from create_batches.
targets_batches: Target batches from create_batches.
batch_size: Batch size.
iter: Number of the current iteration
Returns:
A batch of features and targets.
"""
select=mod((iter-1)*batch_size+1, size(features_batches,1)):mod(iter*batch_size, size(features_batches,1));

ds=features_batches[select,:];
target=targets_batches[select,:];

return ds, target
end
Out[5]:
next_batch (generic function with 1 method)
In [6]:
function my_input_fn(features_batches, targets_batches, iter, batch_size=5, shuffle_flag=1):
"""Prepares a batch of features and labels for model training.

Args:
features_batches: Features batches from create_batches.
targets_batches: Target batches from create_batches.
iter: Number of the current iteration
batch_size: Batch size.
shuffle_flag: Determines wether data is shuffled before being returned
Returns:
Tuple of (features, labels) for next data batch
"""

# Construct a dataset, and configure batching/repeating.
ds, target = next_batch(features_batches, targets_batches, batch_size, iter)

# Shuffle the data, if specified.
if shuffle_flag==1
select=shuffle(1:size(ds, 1));
ds = ds[select,:]
target = target[select, :]
end

# Return the next batch of data.
return ds, target
end
Out[6]:
my_input_fn (generic function with 3 methods)

Task 1: Use a Linear Model with Sparse Inputs and an Explicit Vocabulary

For our first model, we’ll build a Linear Classifier model using 50 informative terms; always start simple!
The following code constructs the feature column for our terms.
In [7]:
# 50 informative terms that compose our model vocabulary 
informative_terms = ["bad", "great", "best", "worst", "fun", "beautiful",
"excellent", "poor", "boring", "awful", "terrible",
"definitely", "perfect", "liked", "worse", "waste",
"entertaining", "loved", "unfortunately", "amazing",
"enjoyed", "favorite", "horrible", "brilliant", "highly",
"simple", "annoying", "today", "hilarious", "enjoyable",
"dull", "fantastic", "poorly", "fails", "disappointing",
"disappointment", "not", "him", "her", "good", "time",
"?", ".", "!", "movie", "film", "action", "comedy",
"drama", "family"]
Out[7]:
50-element Array{String,1}:
"bad"
"great"
"best"
"worst"
"fun"
"beautiful"
"excellent"
"poor"
"boring"
"awful"
"terrible"
"definitely"
"perfect"

"her"
"good"
"time"
"?"
"."
"!"
"movie"
"film"
"action"
"comedy"
"drama"
"family"
The following function takes the input data and vocabulary and converts the data to a one-hot encoded matrix.
In [8]:
# function for creating categorial colum from vocabulary list in one hot encoding
function create_data_columns(data, informative_terms)
onehotmat=zeros(length(data), length(informative_terms))

for i=1:length(data)
string=data[i]
for j=1:length(informative_terms)
if contains(string, informative_terms[j])
onehotmat[i,j]=1
end
end
end
return onehotmat
end
Out[8]:
create_data_columns (generic function with 1 method)
In [9]:
train_feature_mat=create_data_columns(train_features, informative_terms)
test_features_mat=create_data_columns(test_features, informative_terms);
Next, we’ll construct the Linear Classifier model, train it on the training set, and evaluate it on the evaluation set. After you read through the code, run it and see how you do.
In [10]:
function train_linear_classifier_model(learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets)
"""Trains a linear classifier model.

Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples, etc: The input data.
Returns:
weight: The weights of the linear model.
bias: The bias of the linear model.
validation_probabilities: Probabilities for the validation examples.
p1: Plot of loss function for the different periods
"""

periods = 10
steps_per_period = steps / periods

# Create feature columns.
feature_columns = placeholder(Float32)
target_columns = placeholder(Float32)
eps=1E-8

# these two variables need to be initialized as 0, otherwise method gives problems
m=Variable(zeros(size(training_examples,2),1).+0.0)
b=Variable(0.0)

ytemp=nn.sigmoid(feature_columns*m + b)
y= clip_by_value(ytemp, 0.0, 1.0)
loss = -reduce_mean(log(y+eps).*target_columns + log(1-y-eps).*(1-target_columns))

features_batches, targets_batches = create_batches(training_examples, training_targets, steps, batch_size)

# Advanced Adam optimizer decent with gradient clipping
my_optimizer=(train.AdamOptimizer(learning_rate))
gvs = train.compute_gradients(my_optimizer, loss)
capped_gvs = [(clip_by_norm(grad, 5.0), var) for (grad, var) in gvs]
my_optimizer = train.apply_gradients(my_optimizer,capped_gvs)

run(sess, global_variables_initializer()) #this needs to be run after constructing the optimizer!

# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
println("Training model...")
println("LogLoss (on training data):")
training_log_losses = []
validation_log_losses=[]
for period in 1:periods
# Train the model, starting from the prior state.
for i=1:steps_per_period
features, labels = my_input_fn(features_batches, targets_batches, convert(Int,(period-1)*steps_per_period+i), batch_size)
run(sess, my_optimizer, Dict(feature_columns=>construct_feature_columns(features), target_columns=>construct_feature_columns(labels)))
end
# Take a break and compute predictions.
training_probabilities = run(sess, y, Dict(feature_columns=> construct_feature_columns(training_examples)));
validation_probabilities = run(sess, y, Dict(feature_columns=> construct_feature_columns(validation_examples)));

# Compute loss.
training_log_loss=run(sess,loss,Dict(feature_columns=> construct_feature_columns(training_examples), target_columns=>construct_feature_columns(training_targets)))
validation_log_loss =run(sess,loss,Dict(feature_columns=> construct_feature_columns(validation_examples), target_columns=>construct_feature_columns(validation_targets)))

# Occasionally print the current loss.
println(" period ", period, ": ", training_log_loss)
weight = run(sess,m)
bias = run(sess,b)

loss_val=run(sess,loss,Dict(feature_columns=> construct_feature_columns(training_examples), target_columns=>construct_feature_columns(training_targets)))

# Add the loss metrics from this period to our list.
push!(training_log_losses, training_log_loss)
push!(validation_log_losses, validation_log_loss)
end

weight = run(sess,m)
bias = run(sess,b)

println("Model training finished.")

# Output a graph of loss metrics over periods.
p1=plot(training_log_losses, label="training", title="LogLoss vs. Periods", ylabel="LogLoss", xlabel="Periods")
p1=plot!(validation_log_losses, label="validation")

println("Final LogLoss (on training data): ", training_log_losses[end])

# calculate additional ouputs
validation_probabilities = run(sess, y, Dict(feature_columns=> construct_feature_columns(validation_examples)));

return weight, bias, validation_probabilities, p1
end
Out[10]:
train_linear_classifier_model (generic function with 1 method)
In [14]:
weight, bias, validation_probabilities,  p1 = train_linear_classifier_model(
0.0005, #learning rate
1000, #steps
50, #batch_size
train_feature_mat,
train_labels,
test_features_mat,
test_labels)
Training model...
LogLoss (on training data):
period 1: 0.6716383366395725
period 2: 0.6520718400586106
period 3: 0.6347970177101597
period 4: 0.6191596702328207
period 5: 0.6047782721033566
period 6: 0.5922131685262155
period 7: 0.580839687465644
period 8: 0.5702423812570189
period 9: 0.5607007472318791
period 10: 0.5519328267238274
Model training finished.
Out[14]:
([-0.36884; 0.331766; … ; 0.127656; 0.181674], 0.038435446715259586, [0.590407; 0.390335; … ; 0.662796; 0.529415], Plot{Plots.GRBackend() n=2})
Final LogLoss (on training data): 0.5519328267238274
In [15]:
plot(p1)
Out[15]:
2468100.560.580.600.620.640.66LogLoss vs. PeriodsPeriodsLogLosstrainingvalidation
The following function converts the validation probabilites back to 0-1-predictions.
In [13]:
# Function for converting probabilities to 0/1 decision
function castto01(probabilities)
out=copy(probabilities)
for i=1:length(probabilities)
if(probabilities[i]<0.5)
out[i]=0
else
out[i]=1
end
end
return out
end
Out[13]:
castto01 (generic function with 1 method)
Let’s have a look at the accuracy of the model:
In [17]:
evaluation_metrics=DataFrame()
false_positive_rate, true_positive_rate, thresholds = sklm.roc_curve(
vec(construct_feature_columns(test_labels)), vec(validation_probabilities))
evaluation_metrics[:auc]=sklm.roc_auc_score(construct_feature_columns(test_labels), vec(validation_probabilities))
validation_predictions=castto01(validation_probabilities);
evaluation_metrics[:accuracy]=accuracy = sklm.accuracy_score(test_labels, validation_predictions)

p2=plot(false_positive_rate, true_positive_rate, label="our model")
p2=plot!([0, 1], [0, 1], label="random classifier");
In [18]:
println("AUC on the validation set: ",  evaluation_metrics[:auc])
println("Accuracy on the validation set: ", evaluation_metrics[:accuracy])
AUC on the validation set: [0.865503]
Accuracy on the validation set: [0.781591]
In [19]:
plot(p2)
Out[19]:
0.000.250.500.751.000.000.250.500.751.00our modelrandom classifier

Task 2: Use a Deep Neural Network (DNN) Model

The above model is a linear model. It works quite well. But can we do better with a DNN model?
Let’s constructa NN classification model. Run the following cells, and see how you do.
In [11]:
function train_nn_classification_model(learning_rate,
steps,
batch_size,
hidden_units,
is_embedding,
keep_probability,
training_examples,
training_targets,
validation_examples,
validation_targets)
"""Trains a neural network classification model.

Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A vector describing the layout of the neural network.
is_embedding: 'true' or 'false' depending on if the first layer of the NN is an embedding layer.
keep_probability: A `float`, the probability of keeping a node active during one training step.
Returns:
p1: Plot of the loss function for the different periods.
y: The final layer of the TensorFlow network.
final_probabilities: Final predicted probabilities on the validation examples.
weight_export: The weights of the first layer of the NN
feature_columns: TensorFlow feature columns.
target_columns: TensorFlow target columns.
"""

periods = 10
steps_per_period = steps / periods

# Create feature columns.
feature_columns = placeholder(Float32, shape=[-1, size(training_examples,2)])
target_columns = placeholder(Float32, shape=[-1, size(training_targets,2)])

# Network parameters
push!(hidden_units,size(training_targets,2)) #create an output node that fits to the size of the targets
activation_functions = Vector{Function}(size(hidden_units,1))
activation_functions[1:end-1]=z->nn.dropout(nn.relu(z), keep_probability)
activation_functions[end] = nn.sigmoid #Last function should be idenity as we need the logits

# create network
flag=0
weight_export=Variable([1])
Zs = [feature_columns]

for (ii,(hlsize, actfun)) in enumerate(zip(hidden_units, activation_functions))
Wii = get_variable("W_$ii"*randstring(4), [get_shape(Zs[end], 2), hlsize], Float32)
bii = get_variable("b_$ii"*randstring(4), [hlsize], Float32)

if((is_embedding==true) & (flag==0))
Zii=Zs[end]*Wii
else
Zii = actfun(Zs[end]*Wii + bii)
end
push!(Zs, Zii)

if(flag==0)
weight_export=Wii
flag=1
end
end

y=Zs[end]
eps=1e-8
cross_entropy = -reduce_mean(log(y+eps).*target_columns + log(1-y+eps).*(1-target_columns))

features_batches, targets_batches = create_batches(training_examples, training_targets, steps, batch_size)

# Standard Adam Optimizer
my_optimizer=train.minimize(train.AdamOptimizer(learning_rate), cross_entropy)

run(sess, global_variables_initializer())


# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
println("Training model...")
println("LogLoss error (on validation data):")
training_log_losses = []
validation_log_losses = []
for period in 1:periods
# Train the model, starting from the prior state.
for i=1:steps_per_period
features, labels = my_input_fn(features_batches, targets_batches, convert(Int,(period-1)*steps_per_period+i), batch_size)
run(sess, my_optimizer, Dict(feature_columns=>construct_feature_columns(features), target_columns=>construct_feature_columns(labels)))
end
# Take a break and compute log loss.
training_log_loss = run(sess, cross_entropy, Dict(feature_columns=> construct_feature_columns(training_examples), target_columns=>construct_feature_columns(training_targets)));
validation_log_loss = run(sess, cross_entropy, Dict(feature_columns=> construct_feature_columns(validation_examples), target_columns=>construct_feature_columns(validation_targets)));

# Occasionally print the current loss.
println(" period ", period, ": ", training_log_loss)

# Add the loss metrics from this period to our list.
push!(training_log_losses, training_log_loss)
push!(validation_log_losses, validation_log_loss)
end


println("Model training finished.")

# Calculate final predictions (not probabilities, as above).
final_probabilities = run(sess, y, Dict(feature_columns=> validation_examples, target_columns=>validation_targets))
final_predictions=0.0.*copy(final_probabilities)
final_predictions=castto01(final_probabilities)

accuracy = sklm.accuracy_score(validation_targets, final_predictions)
println("Final accuracy (on validation data): ", accuracy)

# Output a graph of loss metrics over periods.
p1=plot(training_log_losses, label="training", title="LogLoss vs. Periods", ylabel="LogLoss", xlabel="Periods")
p1=plot!(validation_log_losses, label="validation")

return p1, y, final_probabilities, weight_export, feature_columns, target_columns
end
Out[11]:
train_nn_classification_model (generic function with 1 method)
In [21]:
sess=Session(Graph())
p1, y, final_probabilities, weight_export, feature_columns, target_columns = train_nn_classification_model(
0.003, #learning rate
1000, #steps
50, #batch_size
[20, 20], #hidden_units
false, #is_embedding
1.0, # keep probability
train_feature_mat,
train_labels,
test_features_mat,
test_labels)
Training model...
LogLoss error (on validation data):
period 1: 0.471395788925613
period 2: 0.45770820644301585
period 3: 0.44868498081679836
period 4: 0.4486270877372173
period 5: 0.44886359529693076
period 6: 0.45785272663956894
period 7: 0.4486596600689161
period 8: 0.44640892834409557
period 9: 0.44672501642012086
period 10: 0.44597568632613904
Model training finished.
Out[21]:
(Plot{Plots.GRBackend() n=2}, <Tensor Sigmoid:1 shape=(?, 1) dtype=Float32>, Float32[0.759592; 0.194817; … ; 0.910422; 0.592667], TensorFlow.Variables.Variable{Float32}(<Tensor W_1BDge:1 shape=(50, 20) dtype=Float32>, <Tensor W_1BDge/Assign:1 shape=unknown dtype=Float32>), <Tensor placeholder:1 shape=(?, 50) dtype=Float32>, <Tensor placeholder_2:1 shape=(?, 1) dtype=Float32>)
Final accuracy (on validation data): 0.7854314172566903
In [22]:
plot(p1)
Out[22]:
2468100.4500.4550.4600.4650.470LogLoss vs. PeriodsPeriodsLogLosstrainingvalidation
In [23]:
evaluation_metrics=DataFrame()
false_positive_rate, true_positive_rate, thresholds = sklm.roc_curve(
vec(construct_feature_columns(test_labels)), vec(final_probabilities))
evaluation_metrics[:auc]=sklm.roc_auc_score(construct_feature_columns(test_labels), vec(final_probabilities))
validation_predictions=castto01(final_probabilities);
evaluation_metrics[:accuracy]=accuracy = sklm.accuracy_score(test_labels, validation_predictions)

p2=plot(false_positive_rate, true_positive_rate, label="our model")
p2=plot!([0, 1], [0, 1], label="random classifier");
println("AUC on the validation set: ", evaluation_metrics[:auc])
println("Accuracy on the validation set: ", evaluation_metrics[:accuracy])
AUC on the validation set: [0.871963]
Accuracy on the validation set: [0.785431]
In [24]:
plot(p2)
Out[24]:
0.000.250.500.751.000.000.250.500.751.00our modelrandom classifier

Task 3: Use an Embedding with a DNN Model

In this task, we’ll implement our DNN model using an embedding column. An embedding column takes sparse data as input and returns a lower-dimensional dense vector as output. We’ll add the embedding layer as the first layer in the hidden_units-vector, and set is_embedding to true.
NOTE: In practice, we might project to dimensions higher than 2, like 50 or 100. But for now, 2 dimensions is easy to visualize.
In [14]:
sess=Session(Graph())
p1, y, final_probabilities, weight_export, feature_columns, target_columns = train_nn_classification_model(
0.003, #learning rate
1000, #steps
50, #batch_size
[2, 20, 20], #hidden_units
true,
1.0, # keep probability
train_feature_mat,
train_labels,
test_features_mat,
test_labels)
Training model...
LogLoss error (on validation data):
period 1: 0.5663698194950431
period 2: 0.45220806683037423
period 3: 0.45205136369869175
period 4: 0.44547315482684624
period 5: 0.4489593760718842
period 6: 0.4484996714455285
period 7: 0.44584040864739305
period 8: 0.4457339047966633
period 9: 0.4464958611862487
period 10: 0.4485086547636147
Model training finished.
Out[14]:
(Plot{Plots.GRBackend() n=2}, <Tensor Sigmoid:1 shape=(?, 1) dtype=Float32>, Float32[0.736896; 0.157856; … ; 0.895887; 0.54718], TensorFlow.Variables.Variable{Float32}(<Tensor W_1fwgD:1 shape=(50, 2) dtype=Float32>, <Tensor W_1fwgD/Assign:1 shape=unknown dtype=Float32>), <Tensor placeholder:1 shape=(?, 50) dtype=Float32>, <Tensor placeholder_2:1 shape=(?, 1) dtype=Float32>)
In [15]:
plot(p1)
Out[15]:
2468100.4500.4750.5000.5250.550LogLoss vs. PeriodsPeriodsLogLosstrainingvalidation
Final accuracy (on validation data): 0.7846313852554102
In [16]:
evaluation_metrics=DataFrame()
false_positive_rate, true_positive_rate, thresholds = sklm.roc_curve(
vec(construct_feature_columns(test_labels)), vec(final_probabilities))
evaluation_metrics[:auc]=sklm.roc_auc_score(construct_feature_columns(test_labels), vec(final_probabilities))
validation_predictions=castto01(final_probabilities);
evaluation_metrics[:accuracy]=accuracy = sklm.accuracy_score(test_labels, validation_predictions)

p2=plot(false_positive_rate, true_positive_rate, label="our model")
p2=plot!([0, 1], [0, 1], label="random classifier");
println("AUC on the validation set: ", evaluation_metrics[:auc])
println("Accuracy on the validation set: ", evaluation_metrics[:accuracy])
AUC on the validation set: [0.873001]
Accuracy on the validation set: [0.784631]
In [17]:
plot(p2)
Out[17]:
0.000.250.500.751.000.000.250.500.751.00our modelrandom classifier

Task 4: Examine the Embedding

Let’s now take a look at the actual embedding space, and see where the terms end up in it. Do the following:
  1. Run the following code to see the embedding we trained in Task 3. Do things end up where you’d expect?
  2. Re-train the model by rerunning the code in Task 3, and then run the embedding visualization below again. What stays the same? What changes?
  3. Finally, re-train the model again using only 10 steps (which will yield a terrible model). Run the embedding visualization below again. What do you see now, and why?
In [18]:
xy_coord=run(sess, weight_export, Dict(feature_columns=> test_features_mat, target_columns=>test_labels))
p3=plot(title="Embedding Space", xlims=(minimum(xy_coord[:,1])-0.3, maximum(xy_coord[:,1])+0.3), ylims=(minimum(xy_coord[:,2])-0.1, maximum(xy_coord[:,2]) +0.3) )
for term_index=1:length(informative_terms)
p3=annotate!(xy_coord[term_index,1], xy_coord[term_index,1], informative_terms[term_index] )
end
plot(p3)
Out[18]:
-0.50-0.250.000.250.500.75-0.50-0.250.000.250.50Embedding Spacebadgreatbestworstfunbeautifulexcellentpoorboringawfulterribledefinitelyperfectlikedworsewasteentertaininglovedunfortunatelyamazingenjoyedfavoritehorriblebrillianthighlysimpleannoyingtodayhilariousenjoyabledullfantasticpoorlyfailsdisappointingdisappointmentnothimhergoodtime?.!moviefilmactioncomedydramafamily

Task 5: Try to improve the model’s performance

See if you can refine the model to improve performance. A couple things you may want to try:
  • Changing hyperparameters, or using a different optimizer than Adam (you may only gain one or two accuracy percentage points following these strategies).
  • Adding additional terms to informative_terms. There’s a full vocabulary file with all 30,716 terms for this data set that you can use at: https://storage.googleapis.com/mledu-datasets/sparse-data-embedding/terms.txt You can pick out additional terms from this vocabulary file, or use the whole thing.
In the following code, we will import the whole vocabulary file and run the model with it.
In [30]:
vocabulary=Array{String}(0)
open("terms.txt") do file
for ln in eachline(file)
push!(vocabulary, ln)
end
end
In [31]:
vocabulary
Out[31]:
30716-element Array{String,1}:
"the"
"."
","
"and"
"a"
"of"
"to"
"is"
"in"
"i"
"it"
"this"
"'"

"soapbox"
"softening"
"user's"
"od"
"potter's"
"renard"
"impacting"
"pong"
"nobly"
"nicol"
"ff"
"MISSING"
We will now load the test and training features matrices from disk. Open and run the Conversion of Movie-review data to one-hot encoding-notebook to prepare the IMDB_fullmatrix_datacolumns.jld-file. The notebook can be found here.
In [48]:
using JLD
train_features_full=load("IMDB_fullmatrix_datacolumns.jld", "train_features_full")
test_features_full=load("IMDB_fullmatrix_datacolumns.jld", "test_features_full")
Out[48]:
24999×30716 Array{Float64,2}:
1.0 1.0 1.0 1.0 1.0 0.0 1.0 1.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 … 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 … 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
⋮ ⋮ ⋱ ⋮ ⋮
0.0 1.0 1.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 … 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Now run the session with the full vocabulary file. Again, this will take a long time to finish. It assigns about 50GB of memory.
In [49]:
sess=Session(Graph())
p1, y, final_probabilities, weight_export, feature_columns, target_columns = train_nn_classification_model(
# TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE
0.003, #learning rate
1000, #steps
50, #batch_size
[2, 20, 20], #hidden_units
true,
1.0, # keep probability
train_features_full,
train_labels,
test_features_full,
test_labels)
Training model...
LogLoss error (on validation data):
period 1: 0.37313920231826037
period 2: 0.26477444394028044
period 3: 0.25086412221834486
period 4: 0.2087771610933073
period 5: 0.19335710666766756
period 6: 0.16094674715105722
period 7: 0.15969771663371266
period 8: 0.14275753878385306
period 9: 0.13488925137953908
period 10: 0.11471432399972212
Model training finished.
Out[49]:
(Plot{Plots.GRBackend() n=2}, <Tensor Sigmoid:1 shape=(?, 1) dtype=Float32>, Float32[0.0975639; 0.0719205; … ; 0.993713; 0.678265], TensorFlow.Variables.Variable{Float32}(<Tensor W_1tR5j:1 shape=(30716, 2) dtype=Float32>, <Tensor W_1tR5j/Assign:1 shape=unknown dtype=Float32>), <Tensor placeholder:1 shape=(?, 30716) dtype=Float32>, <Tensor placeholder_2:1 shape=(?, 1) dtype=Float32>)
Final accuracy (on validation data): 0.8656746269850794
In [4]:
#plot(p1)
In [51]:
evaluation_metrics=DataFrame()
false_positive_rate, true_positive_rate, thresholds = sklm.roc_curve(
vec(construct_feature_columns(test_labels)), vec(final_probabilities))
evaluation_metrics[:auc]=sklm.roc_auc_score(construct_feature_columns(test_labels), vec(final_probabilities))
validation_predictions=castto01(final_probabilities);
evaluation_metrics[:accuracy]=accuracy = sklm.accuracy_score(test_labels, validation_predictions)

p2=plot(false_positive_rate, true_positive_rate, label="our model")
p2=plot!([0, 1], [0, 1], label="random classifier");
println("AUC on the validation set: ", evaluation_metrics[:auc])
println("Accuracy on the validation set: ", evaluation_metrics[:accuracy])
AUC on the validation set: [0.939391]
Accuracy on the validation set: [0.865675]
In [3]:
#plot(p2)

Task 6: Try out sparse matrices

We will now convert the feature matrices from the previous step to sparse matrices and re-run our code. The sparse matrices take about 350MB of memory. The code for the NN will still convert the sparse matrix containing the data for the current batch to a full matrix, which leads to a memory requirement of about 35GB.
In [54]:
train_features_sparse=sparse(train_features_full)
test_features_sparse=sparse(test_features_full)
Out[54]:
24999×30716 SparseMatrixCSC{Float64,Int64} with 10780292 stored entries:
[1 , 1] = 1.0
[2 , 1] = 1.0
[3 , 1] = 1.0
[4 , 1] = 1.0
[5 , 1] = 1.0
[6 , 1] = 1.0
[7 , 1] = 1.0
[8 , 1] = 1.0
[9 , 1] = 1.0
[10 , 1] = 1.0

[24977, 30715] = 1.0
[24978, 30715] = 1.0
[24979, 30715] = 1.0
[24981, 30715] = 1.0
[24982, 30715] = 1.0
[24983, 30715] = 1.0
[24984, 30715] = 1.0
[24989, 30715] = 1.0
[24990, 30715] = 1.0
[24993, 30715] = 1.0
[24996, 30715] = 1.0
In [56]:
# For saving the data
#save("IMDB_sparsematrix_datacolumns.jld", "train_features_sparse", train_features_sparse, "test_features_sparse", test_features_sparse)
In [55]:
sess=Session(Graph())
p1, y, final_probabilities, weight_export, feature_columns, target_columns = train_nn_classification_model(
0.003, #learning rate
1000, #steps
50, #batch_size
[2, 20, 20], #hidden_units
true,
1.0, # keep probability
train_features_sparse,
train_labels,
test_features_sparse,
test_labels)
Training model...
LogLoss error (on validation data):
period 1: 0.4264791202329761
period 2: 0.27906764597962996
period 3: 0.24588130824603802
period 4: 0.24137786867062228
period 5: 0.1982403807820223
period 6: 0.195120145753206
period 7: 0.15590201135616108
period 8: 0.145190807054994
period 9: 0.128827185543495
period 10: 0.12583784552275887
Model training finished.
Out[55]:
(Plot{Plots.GRBackend() n=2}, <Tensor Sigmoid:1 shape=(?, 1) dtype=Float32>, Float32[0.302512; 0.50181; … ; 0.977541; 0.652478], TensorFlow.Variables.Variable{Float32}(<Tensor W_1gvhS:1 shape=(30716, 2) dtype=Float32>, <Tensor W_1gvhS/Assign:1 shape=unknown dtype=Float32>), <Tensor placeholder:1 shape=(?, 30716) dtype=Float32>, <Tensor placeholder_2:1 shape=(?, 1) dtype=Float32>)
Final accuracy (on validation data): 0.8696347853914157
In [2]:
#plot(p1)
In [57]:
evaluation_metrics=DataFrame()
false_positive_rate, true_positive_rate, thresholds = sklm.roc_curve(
vec(construct_feature_columns(test_labels)), vec(final_probabilities))
evaluation_metrics[:auc]=sklm.roc_auc_score(construct_feature_columns(test_labels), vec(final_probabilities))
validation_predictions=castto01(final_probabilities);
evaluation_metrics[:accuracy]=accuracy = sklm.accuracy_score(test_labels, validation_predictions)

p2=plot(false_positive_rate, true_positive_rate, label="our model")
p2=plot!([0, 1], [0, 1], label="random classifier");
println("AUC on the validation set: ", evaluation_metrics[:auc])
println("Accuracy on the validation set: ", evaluation_metrics[:accuracy])
AUC on the validation set: [0.940761]
Accuracy on the validation set: [0.869635]
In [1]:
#plot(p2)

A Final Word

We may have gotten a DNN solution with an embedding that was better than our original linear model, but the linear model was also pretty good and was quite a bit faster to train. Linear models train more quickly because they do not have nearly as many parameters to update or layers to backprop through.
In some applications, the speed of linear models may be a game changer, or linear models may be perfectly sufficient from a quality standpoint. In other areas, the additional model complexity and capacity provided by DNNs might be more important. When defining your model architecture, remember to explore your problem sufficiently so that you know which space you’re in.