Where you should drop Deep Learning in favor of Constraint Solvers


Next, the second hard constraint: for every patient k,

  • either he is in a unique bed j in a unique hospital i,
  • or he is at home.

In the same way, this can be translated into an integer inequality:
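A sketch of that inequality, reconstructed from the code below (assuming x_{i,j,k} is the boolean variable equal to 1 when patient k occupies bed j of hospital i):

```latex
% Each patient k occupies at most one bed, across all hospitals
\sum_{i=1}^{n_{\mathrm{hospitals}}} \sum_{j=1}^{n_{\mathrm{beds}_i}} x_{i,j,k} \leq 1
\qquad \forall k \in \{1, \dots, n_{\mathrm{patients}}\}
```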

Finally, this constraint can be added to the model.

# Each person must be placed in at most one bed
for k in range(n_patients):
    inner_sum = []
    for i in range(n_hospitals):
        inner_sum.append(sum(x[(i,j,k)] for j in range(n_beds_in_hospitals[i])))
    model.Add(sum(inner_sum) <= 1)

Soft constraints

Next come the soft constraints. These are highly desirable: the solution should satisfy them as much as possible, but they are not required for a solution to exist:

  • Every sick person should be placed into a bed,
  • Every person should be handled by the nearest hospital,
  • Sick persons in severe condition should be handled first when there are not enough beds.

While hard constraints are modeled as equalities or inequalities, soft constraints are expressions we want to minimize or maximize.

Let Ω be the set of all solutions that satisfy the hard constraints.

Every sick person should be placed into a bed means to maximize the number of occupied beds.
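In the notation above, this can be sketched as a maximization over the feasible set Ω:

```latex
% Maximize the number of occupied beds
\max_{\Omega} \sum_{i,j,k} x_{i,j,k}
```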

Every person should be handled by the nearest hospital means to minimize the distance between every patient and his assigned hospital.
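A sketch of the corresponding term, writing dist(i, k) for the distance between hospital i and patient k:

```latex
% Minimize the total patient-to-hospital distance
\min_{\Omega} \sum_{i,j,k} \mathrm{dist}(i, k) \, x_{i,j,k}
```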

Sick persons in a severe condition should be handled first when there are not enough beds means to maximize the total severity of all handled patients. Denoting by sev(k) the severity of patient k:
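A sketch of the corresponding term:

```latex
% Maximize the total severity of handled patients
\max_{\Omega} \sum_{i,j,k} \mathrm{sev}(k) \, x_{i,j,k}
```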

Then we can reduce all the soft constraints into a single objective:
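A sketch of that combined objective, with gain factors α, β, γ chosen to match the code below (there, α = 140, β = 1, γ = 28):

```latex
% Single weighted objective combining the three soft constraints
\max_{\Omega} \sum_{i,j,k}
  \bigl( \alpha - \beta \, \mathrm{dist}(i, k) + \gamma \, \mathrm{sev}(k) \bigr) \, x_{i,j,k}
```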

One needs to be careful here: these soft constraints do not have the same range.

  • The patient-maximization term ranges from 0 to n, with n the number of patients,
  • The severity term ranges from 0 to 5n,
  • The distance term ranges from 0 to the maximum Euclidean distance over all i and k.

Since all of these constraints share the same priority, we must define penalty factors to balance them.

Here is the corresponding code:

# Integer distance function
idist = lambda xy1, xy2: int(((xy1[0]-xy2[0])**2 + (xy1[1]-xy2[1])**2)**0.5)

# Gain factors (1/penalty factors)
gain_max_patients = 140
gain_severity = int(140/5)
gain_distance = -1

# Maximization objective
soft_csts = []
for i in range(n_hospitals):
    for j in range(n_beds_in_hospitals[i]):
        for k in range(n_patients):
            factor = \
                gain_max_patients \
                + gain_distance * idist(hospitals_loc[i], patients_loc[k]) \
                + gain_severity * patients_severity[k]
            soft_csts.append(factor * x[(i,j,k)])

# Ask the solver to maximize the weighted sum
model.Maximize(sum(soft_csts))
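As a quick sanity check on these gain factors, one can compare the maximum per-patient contribution of each term. This is a sketch: the gain values mirror the snippet above, but max_distance is an assumed map size, not a value from the article.

```python
# Sanity check that the gain factors put the three soft terms on a
# comparable per-patient scale. Gain values mirror the article's code;
# max_distance is an assumed maximum grid distance.
gain_max_patients = 140
gain_severity = int(140 / 5)   # 28
gain_distance = -1

max_severity = 5               # severity assumed to range over 0..5
max_distance = 100             # assumed map size

contrib_presence = gain_max_patients                  # reward for filling a bed
contrib_severity = gain_severity * max_severity       # reward for the sickest patient
contrib_distance = abs(gain_distance) * max_distance  # worst-case distance penalty

print(contrib_presence, contrib_severity, contrib_distance)
```

With these values the bed-occupancy and severity terms both top out at 140, while the distance penalty stays strictly below them, so distance can break ties but never outweigh admitting a patient.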


Now we can launch the solver. It will try to find the optimal solution within a specified time limit; if it fails to prove optimality in time, it returns the best feasible solution found so far.

solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 60.0
status = solver.Solve(model)
# status is OPTIMAL if optimality was proven, FEASIBLE if the time limit was hit first
print(solver.StatusName(status))

In our case, the solver returns an optimal solution in 2.5 seconds (Fig. 2).

Fig. 2: Solution returned by the solver


To create this solution, all it took was one hour of research and 30 minutes of programming.

For a Deep Learning counterpart, one can expect a few days of data cleansing, at least a day of testing different architectures, and another day for training.

Moreover, a CP-SAT model is very robust when the problem is well modeled. Below are the results for different simulation parameters (Fig. 3). Results remain consistent across many different scenarios, and with much larger parameters (3,000 patients, 1,000 beds), solving took a little under 3 minutes.

Fig. 3: Different simulation parameters

Of course, CSPs hardly apply to topics like computer vision and NLP, where Deep Learning is sometimes the best approach. However, in logistics, scheduling and planning, it is often the way to go.

Special thanks to Laurent Perron, and his Operations Research team at Google for their fantastic work, and for the time they take to answer technical questions on StackOverflow, GitHub and Google Groups.

Antoine Champion, Apr. 1st 2020

