
Parallel Processing in Julia: Bootstrapping the MLE

By: Bradley Setzler

Re-posted from: https://juliaeconomics.com/2014/06/18/parallel-processing-in-julia-bootstrapping-the-mle/

* Scripts that reproduce the results of this tutorial in Julia are located here and here.

To review our extended example, we first simulated the OLS DGP, solved for the MLE of OLS, bootstrapped the MLE, and computed various non-parametric p-values.

The only part of our previous work that may have slowed down your computer was forming the 10,000 bootstrap samples (samples). In this post, we will see how to speed up the bootstrapping by using more than one processor to collect the bootstrap samples.


The Estimator that We Wish to Bootstrap

We are interested in the following maximum likelihood estimator, adapted from our previous work. First, we import the three needed packages and generate the data:

# Prepare
srand(2)
using Distributions
using Optim
using DataFrames
N=1000
K=3

# Generate variables
genX = MvNormal(eye(K))
X = rand(genX,N)
X = X'
X_noconstant = X
constant = ones(N)
X = [constant X]
genEpsilon = Normal(0, 1)
epsilon = rand(genEpsilon,N)
trueParams = [0.01,0.05,0.05,0.07]
Y = X*trueParams + epsilon

# Export data
data = DataFrame(hcat(Y,X))
names!(data,[:Y,:one,:X1,:X2,:X3])
writetable("data.csv",data)

Next, we define the log-likelihood of the OLS model (returned with a negative sign, since the optimizer below minimizes):

function loglike(rho,y,x)
    beta = rho[1:4]                     # regression coefficients
    sigma2 = exp(rho[5])+eps(Float64)   # variance, kept strictly positive
    residual = y-x*beta
    dist = Normal(0, sqrt(sigma2))
    contributions = logpdf(dist,residual)   # log-density of each residual
    loglikelihood = sum(contributions)
    return -loglikelihood               # negative, because optimize() minimizes
end
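
As a quick sanity check (my addition, not part of the original post), you can evaluate the negative log-likelihood at the true parameters once the data above have been generated; recall that the fifth element of rho is log(sigma2), so it equals 0 when sigma2 = 1:

rho_true = vcat(trueParams, 0.0)   # [beta; log(sigma2)] with sigma2 = 1
println(loglike(rho_true, Y, X))   # negative log-likelihood at the true parameters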

Finally, we define the bootstrap function, which includes within it a wrapper for the likelihood:

function bootstrapSamples(B)
    println("hi")
    samples = zeros(B,5)
    for b=1:B
        theIndex = sample(1:N,N)        # resample N observations with replacement
        x = X[theIndex,:]
        y = Y[theIndex,:]
        function wrapLoglike(rho)       # wrapper so the optimizer sees a function of rho only
            return loglike(rho,y,x)
        end
        samples[b,:] = optimize(wrapLoglike,params0,method=:cg).minimum
    end
    samples[:,5] = exp(samples[:,5])    # convert log(sigma2) back to sigma2
    println("bye")
    return samples
end

The reason for the cute println statements will become apparent later. Save all of this in a file called “bootstrapFunction.jl”. Finally, we import the data created above:

data = readtable("data.csv")
N = size(data,1)
Y = array(data[:Y])
X = array(data[[:one,:X1,:X2,:X3]])
params0 = [0.1,0.2,0.3,0.4,0.5]   # starting values for the optimizer (not shown in the original post; any reasonable guess works)

Parallel Bootstrapping

We now have the function, bootstrapSamples, saved in the file “bootstrapFunction.jl”, that we wish to run in parallel. The idea is that, instead of running the MLE B times on one processor, we run it B/4 times on each of four processors, then collect the results back to the first processor. Unfortunately, Julia makes it very difficult to parallelize functions within the same file in which they are defined; the user must explicitly tell Julia about every parameter and function that is needed on each processor, which is a frustrating process of copy-pasting the command @everywhere all around your code.

Instead, we separate our work into two files, the first one you have already created. Create a second file in the same location on your computer as “bootstrapFunction.jl”. In the second file, write the following commands:

addprocs(4)                       # add four worker processors
require("bootstrapFunction.jl")   # make the functions, data, and parameters available on every processor
B=10000
b=2500                            # B/4 bootstrap samples per processor
samples_pmap = pmap(bootstrapSamples,[b,b,b,b])
samples = vcat(samples_pmap[1],samples_pmap[2],samples_pmap[3],samples_pmap[4])

The command addprocs(4) adds four worker processors to your Julia session. The command require is the key to getting this right: it ensures that all of the functions and parameters contained in “bootstrapFunction.jl” are available to each processor. pmap is the function that actually applies bootstrapSamples to each value in the list [b,b,b,b]. I set the list to [b,b,b,b], where b=2500, so that bootstrapSamples collects 2,500 bootstrap samples on each processor. The resulting matrices from all four processors are stored in the length-4 vector called samples_pmap. Finally, we vertically stack (vcat) the four matrices into one matrix, samples, which is exactly the matrix of bootstrap samples we have seen previously.
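
To see more concretely what pmap does, here is a toy sketch (my own illustration, not from the original post) that can be run in a fresh Julia session; for a one-line function, defining it on every worker with @everywhere is enough:

addprocs(2)
@everywhere slowDouble(x) = (sleep(1); (myid(), 2x))   # record which worker handled each input
println(pmap(slowDouble, [1,2,3,4]))                   # the four evaluations are spread across the two workers

For the bootstrap function above, which depends on loglike, X, Y, N, and params0, the two-file require approach spares you from prefixing each of those definitions with @everywhere.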

Tip: For parallel processing in Julia, separate your code into two files, one file containing the functions and parameters that need to be run in parallel, and the other file managing the processing and collecting results. Use the require command in the managing file to import the functions and parameters to all processors.


Results

Once you have run the managing file, you can do speed tests to compare the two ways of collecting 10,000 bootstrap samples. We use the @elapsed macro to time each approach. Here are the results:

@elapsed pmap(bootstrapSamples,[b,b,b,b])
	From worker 2:	hi
	From worker 3:	hi
	From worker 4:	hi
	From worker 5:	hi
	From worker 2:	bye
	From worker 5:	bye
	From worker 4:	bye
	From worker 3:	bye
66.528345161

Each of the processors tells us “hi” and “bye” when it begins and finishes its job, respectively. The total time was a bit more than a minute (66.5 seconds). By comparison, the following code will compute 10,000 bootstrap samples on only one processor, as in our previous work:

@elapsed bootstrapSamples(10000)
hi
bye
145.265103729

So, in the absence of parallel processing, the same result would require just over 145 seconds, or almost two and a half minutes. Thus, parallel processing on a four-core personal computer allowed us to reduce the time required to bootstrap our estimator by a factor of around 2.2. As a general rule, parallel processing of the bootstrap estimator saves you more time when the function being bootstrapped is more complicated.
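
For reference, the speedup factor is just the ratio of the two timings, 145.27 / 66.53 \approx 2.18; that it falls short of the ideal factor of 4 presumably reflects the overhead of launching the workers and shipping the data and functions to them.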


Bradley J. Setzler

Introductory Example: Ordinary Least Squares (OLS) Estimation in Julia

By: Bradley Setzler

Re-posted from: http://juliaeconomics.com/2014/06/15/introductory-example-ordinary-least-squares/

* The script to reproduce the results of this tutorial in Julia is located here.

In this post, I show in Julia how to perform ordinary least squares (OLS) estimation after first simulating the OLS data generating process (DGP). At the end, we see that the parameter estimates converge to the true parameter as sample size grows large. If you have not yet installed Julia, it takes 5 minutes following these instructions.

As a reminder and to clarify notation, the OLS DGP is,

Y = X\beta + \epsilon,

where Y is the N\mathrm{x}1 dependent variable, X is the N\mathrm{x}\tilde{K} matrix of independent variables, \beta is the \tilde{K}\mathrm{x}1 vector of parameters that we wish to estimate, and \epsilon is the N\mathrm{x}1 error satisfying \epsilon \overset{\mathit{i.i.d.}}{\sim} \mathcal{N}\left(0,\sigma^2\right). Because we assume that the first column of X is the constant number 1, we will find it useful below to work with K \equiv \tilde{K}-1. The least squares estimator is,

\hat{\beta} = \left( X^T X \right)^{-1} \left( X^T Y \right).
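
For readers who want to see where this comes from, the estimator solves the least squares problem; the derivation is standard and is included here only for completeness:

\hat{\beta} = \underset{\beta}{\arg\min}\, \left(Y - X\beta\right)^T \left(Y - X\beta\right) \quad \Longrightarrow \quad X^T X \hat{\beta} = X^T Y \quad \Longrightarrow \quad \hat{\beta} = \left(X^T X\right)^{-1} X^T Y.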


Matrix Algebra and Simulated Random Variables: The OLS DGP

First, we generate the independent variables X, then we use X to generate the dependent variable Y. To begin, create a new file in Julia Studio and save it to your computer. In the file editor (script), insert the following commands:


using Distributions
N=1000
K=3
genX = MvNormal(eye(K))
X = rand(genX,N)
X = X'
X_noconstant = X
constant = ones(N)
X = [constant X]

The using command lets Julia know that we will be using the Distributions package. The MvNormal() command initializes a multivariate normal distribution, which is an object including methods such as pdf for the probability density function and rand for drawing random variables.  eye(K) means \mathit{I}_K, the identity matrix of size K\mathrm{x}K. We only told MvNormal the covariance matrix, leaving the mean blank, so Julia assumes that we would like a zero mean. The distribution of X is arbitrary; we only used the multivariate normal for simplicity. At the end, we concatenate the vector of ones to X using brackets, [].
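
If you prefer to make the zero mean explicit, the following call should be equivalent (this assumes the two-argument MvNormal(mu, Sigma) constructor of the Distributions package):

genX = MvNormal(zeros(K), eye(K))   # same distribution, with the zero mean vector written out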

Tip: To ensure that the matrices are of the appropriate dimension, use the size command. Above, we transposed X after finding that rand returned it as a 3\mathrm{x}N matrix, when we need it to be N \mathrm{x} 3. Misaligned dimensions are one of the most common and frustrating errors to make in writing a program.
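
For example, you can check the dimensions in the console; the shapes shown below assume N=1000 and K=3, as above:

julia> size(rand(genX,N))
(3,1000)

julia> size(X)
(1000,4)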

Now that we have created X as a matrix containing a column of ones as well as three independent variables, we wish to multiply it by a vector of regression coefficients of length 4 (including the intercept) and add the normally distributed shock, \epsilon. For simplicity, we assume \sigma^2=1.

genEpsilon = Normal(0, 1)
epsilon = rand(genEpsilon,N)
trueParams = [0.1,0.5,-0.3,0.]
Y = X*trueParams + epsilon

Matrix algebra in Julia can be written in a functional way comparable to Python, *(A,B), which means A times B, or in the more R-like infix way we used above, A*B. Then, you can click run (the little green arrow in Julia Studio) and it will perform the operations in the file above. To make sure it worked, you can now go to the Console and type,

julia> mean(Y)

and press Enter. If the code worked correctly, this should return the mean of simulated Y, which should be near the true intercept 0.1 (since each of the independent variables has mean zero, the true mean of Y is just the intercept).
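
As a quick check of the matrix-algebra note above, the functional and infix forms agree (A and B here are throwaway examples, not part of the tutorial's data):

julia> A = [1 2; 3 4]; B = [2 0; 1 2];

julia> *(A,B) == A*B
true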


Functions in Julia: The OLS Estimator

Functions are defined in Julia using the command function, followed by the desired name of your function, and parentheses containing the arguments of the function. An end statement is required upon completion of the function definition. Indentation within the body of the function is conventional rather than required, and it is good practice to explicitly include a return statement. The OLS estimator is defined for any particular data (y, x) as follows:

function OLSestimator(y,x)
    estimate = inv(x'*x)*(x'*y)
    return estimate
end

This function uses matrix multiplication (*) three times, the transpose operator (') twice, and the matrix inverse (inv()) once. It works for any matrices x and y with appropriately aligned dimensions. Once you have defined this function by running the file, you can obtain the OLS estimates of the true parameters by typing,

julia> estimates = OLSestimator(Y,X)

Because of the return statement, the parameter estimates will be returned by OLSestimator() and stored in estimates. If you compute estimates in the script, you can print them to the screen using,

println(estimates)

Finally, experiment with the sample size N defined at the beginning of your code. When N is small (say, 100), estimates will usually be further from trueParams than when N is large (say, 10,000). This should be very easy to check; if you used the print statement for the estimates, just change the value of N, run the code, and see the new estimates printed to the console. In order to make your estimates reproducible (i.e., based on the exact same draws from the random distributions), set the random seed at the beginning of your code using,

srand(2)

where 2 is a possible seed you could choose.
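
Putting the last two points together, here is a small sketch (my addition; it reuses genX, trueParams, and OLSestimator from above) that re-runs the simulation at a few sample sizes so you can watch the estimates approach trueParams:

for n in [100, 1000, 100000]
    srand(2)                               # reset the seed so the runs are comparable
    x = [ones(n) rand(genX,n)']            # regenerate the regressors at sample size n
    y = x*trueParams + rand(Normal(0,1),n) # regenerate the dependent variable
    println(n, ": ", OLSestimator(y,x))
end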


Results

When I run the code above with random seed 2, I find that,

julia> estimates
4-element Array{Float64,1}:
0.11216
0.476437
-0.290574
0.0108337

so you can use this to check your results.

If you are only trying to estimate OLS, you can use the built-in command linreg(), but do not include the vector of ones, as linreg() will add another vector of ones. This is why I created X_noconstant above, which is just X without a column of ones. The syntax is,

julia> linreg(X_noconstant,Y)
4-element Array{Float64,1}:
0.11216
0.476437
-0.290574
0.0108337

and the estimates are identical to those in estimates.


Bradley J. Setzler