
Constraint Solver: Support for linear objectives

This is part 17 of the series about: How to build a Constraint Programming solver?

In this series I’m building a constraint solver from scratch in Julia.

If you are new to this blog and are interested in constraint programming, there are several posts already waiting for you:

But that is a long list, so I’ll write another short recap post for Part 20, explaining what I’ve programmed so far and what is still missing.
I’m doing this in my free time but would like to spend more and more time programming this project as well as blogging about it. If you want to support this project and the blog in general, I would be amazed if you became a patron. If you do, I will create a better/faster blog experience and you will get posts like this two days in advance! Everything will be free forever, but if you have some cash to spare -> this might be an option 😉

Special limited offer only for X days… I either just lost your attention or I have it now 😀
I decided to offer a 1:1 (binary) talk where you can ask me about my work and just chat with me if you’re willing to join the $4 Patron club. 😉 This offer runs until the 10th of February.

Implementing support for linear objectives

If you want to just keep reading -> Here you go 😉

Since the last part the solver supports real coefficients, but it can only minimize or maximize a single variable, so the next reasonable step is to support linear objective functions in general. As you might remember, all the variables are discrete, which means we can’t minimize something like \(0.5x+7.2y\) as of now.
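To make that concrete, here is a hedged usage sketch (not taken from the post) of how such an objective would be stated through JuMP once this part’s changes are in place; it assumes the constraint solver package is loaded and aliased as CS, as in the JuMP examples further down this page:

using JuMP
# assumption: the constraint solver package is loaded and aliased as CS,
# as in the JuMP examples further down this page
m = Model(with_optimizer(CS.Optimizer))
@variable(m, 1 <= x <= 9, Int)
@variable(m, 1 <= y <= 9, Int)
# a general linear objective over discrete variables
@objective(m, Min, 0.5x + 7.2y)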

Feel free to check out all the code changes needed to make this possible on GitHub PR #61.

This post explains the changes needed to make it work. At first it seems like an easy problem again, as we only need to change the get_best_bound functionality. Previously we only had single_variable_objective here.

Now we add a function called linear_combination_objective into the same file src/objective.jl.

There we use the fact that we have a linear combination: the maximum of the sum is obtained by adding up the maximum of each term, which is the maximum value of the variable times its coefficient if the coefficient is positive, and the minimum value times the coefficient otherwise.

In code, on GitHub:

function linear_combination_objective(com::CS.CoM, var_idx::Int, val::Int)
    objective = com.objective
    indices = objective.lc.indices
    coeffs = objective.lc.coeffs
    objval = objective.constant
    if com.sense...
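As a standalone illustration of that idea, here is a minimal sketch with made-up names (not the actual linear_combination_objective code from the solver): for a maximization we add the coefficient times the variable’s maximum when the coefficient is positive and times its minimum otherwise, and the other way around for a minimization.

# Minimal sketch with made-up names, not the solver's actual code:
# bound a linear objective over discrete variables given each variable's
# current minimum and maximum value.
function linear_bound(coeffs::Vector{Float64}, mins::Vector{Int}, maxs::Vector{Int};
                      maximize::Bool=true)
    bound = 0.0
    for i in eachindex(coeffs)
        c = coeffs[i]
        # maximization: positive coefficient -> take the maximum value,
        # negative coefficient -> take the minimum value (reversed for minimization)
        if (c >= 0) == maximize
            bound += c * maxs[i]
        else
            bound += c * mins[i]
        end
    end
    return bound
end

# maximize 0.5x + 7.2y - 2z with x, y, z each in 1:9
linear_bound([0.5, 7.2, -2.0], [1, 1, 1], [9, 9, 9])  # 0.5*9 + 7.2*9 - 2*1 = 67.3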

Constraint Solver: Dealing with real coefficients!

This is kind of part 16 of the series about: How to build a Constraint Programming solver?
Even if you are not deep into constraint programming, this post tries to show you what kinds of problems occur when you deal with floating point numbers.
If you are new to this blog and are interested in constraint programming, there are several posts already waiting for you:

I’m a student and write this blog in my free time, but I would like to spend more and more time programming this project as well as blogging about it. If you are having a very generous day, you might want to go over the top and finance a bit of my dream 🙂 You can do so by becoming a patron. If you do, I will create a better/faster blog experience and you will get posts like this two days in advance! Everything will be free forever, but if you have some cash to spare -> this might be an option 😉

Special limited offer only for X days… I either just lost your attention or I have it now 😀
I decided to offer a 1:1 (binary) talk where you can ask me about my work and just chat with me if you’re willing to join the $4 Patron club. 😉 This offer runs until the 10th of February.

Start of the story

If you want to just keep reading -> Here you go 😉

After merging the JuMP part, Mathieu Besançon had a deeper look into the codebase and found out that I don’t support constraints with real coefficients like:

\[
0.2x+0.1y = 0.3z\]

At first I thought: oh yeah, I just have to change the type of some structs to support it. Well, I forgot about the problems of using floating point numbers…

You might have seen this before:

\[
0.2+0.1 \neq 0.3\]

Yes, we are living in a new decade, but we still have to deal with the fact that computers can’t calculate exactly, even though calculating is the one thing everyone thinks they are so much better at than we humans…

There are far better posts and explanations out there about why computers don’t sum \(0.2+0.1\) to \(0.3\) but instead to something like \(0.30000000000000004\).

Have a look at StackOverflow for an answer to this question or search for "How floating point numbers work" on Google -> LMGTFY.
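For a quick check in the Julia REPL (just an illustration of the problem, not code from the solver):

julia> 0.2 + 0.1 == 0.3
false

julia> 0.2 + 0.1
0.30000000000000004

julia> isapprox(0.2 + 0.1, 0.3; atol=1e-9)  # comparison with a tolerance
true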

It might be obvious how this applies to the problem in this constraint solver project, but maybe a short look back at how these equality constraints are tackled is helpful:

\[
0.2x+0.1y = 0.3z\]…
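The same surprise shows up when checking whether such an equality constraint is satisfied once all variables are fixed. A hedged illustration with made-up values (not the solver’s actual feasibility check):

coeffs = [0.2, 0.1, -0.3]      # 0.2x + 0.1y - 0.3z = 0
vals   = [1, 1, 1]             # x = y = z = 1 is feasible mathematically
lhs = sum(coeffs .* vals)
lhs == 0.0                     # false: lhs is about 2.8e-17, not exactly 0
isapprox(lhs, 0.0; atol=1e-9)  # true: a tolerance is needed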

Constraint Solver: How to use JuMP for your own solver?

First of all: Happy new year to everyone!

This is kind of part 15 of the series about: How to build a Constraint Programming solver?
But it is also interesting for you if you don’t care about constraint programming.
You can learn how to use JuMP.jl as the modelling layer for your own solver.

Anyway, these are the other posts so far in the ongoing series:

All of that can only happen because of the generous support of some people.
I’m very happy that I can cover my server expenses now because of my Patrons.
To keep this going, please consider supporting this blog on Patreon. If you do, I will create a better/faster blog experience and you will get posts like this earlier! Everything will be free forever, but if you have some cash to spare -> this might be an option 😉

Special limited offer… I either just lost your attention or I have it now 😀
I decided to offer a 1:1 (binary) talk where you can ask me about my work and just chat with me if you’re willing to join the $4 Patron club. 😉 This offer runs until the 10th of February.

If you want to just keep reading -> Here you go 😉

Maybe I should have done that from the ground up, but I wasn’t sure how to, and it was probably easier for the blog as well to start without a big code base. Therefore here it is now: using JuMP.jl as the modelling layer for the constraint solver I have been building for the past couple of months.

User interface change

Before I explain how it’s done, I want to show how the user interface changes:

How it was before

com = CS.init() # creating a constraint solver model
com_grid = Array{CS.Variable, 2}(undef, 9, 9)
for (ind,val) in enumerate(grid)
    if val == 0
        com_grid[ind] = add_var!(com, 1, 9)
    else
        com_grid[ind] = add_var!(com, 1, 9; fix=val)
    end
end

for rc=1:9
    #row
    variables = com_grid[CartesianIndices((rc:rc,1:9))]
    add_constraint!(com, CS.all_different([variables...]))
    #col
    variables = com_grid[CartesianIndices((1:9,rc:rc))]
    add_constraint!(com, CS.all_different([variables...]))
end

which is a part of the sudoku example. Now it looks like this:

m = Model(with_optimizer(CS.Optimizer))
@variable(m, 1 <= x[1:9,1:9] <= 9, Int)
# set variables
for r=1:9, c=1:9
    if grid[r,c] != 0
        @constraint(m, x[r,c] == grid[r,c])
    end
end
for rc = 1:9
    @constraint(m, x[rc,:] in CS.AllDifferentSet(9))
    @constraint(m, x[:,rc] in CS.AllDifferentSet(9))
end
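Once the model is built this way, solving it and reading back the assignment follows the standard JuMP workflow. A hedged usage sketch (not from the original post), assuming the solver implements the corresponding MathOptInterface methods:

optimize!(m)
status = JuMP.termination_status(m)  # e.g. MOI.OPTIMAL when a solution was found
solution = JuMP.value.(x)            # 9x9 matrix with the solved sudoku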

I think the JuMP version looks a lot cleaner, but more importantly it is the standard interface for mathematical optimization solvers written in Julia.
This also means that if other constraint solvers come into existence in the future, one can easily swap the…