Original article can be found here (source): Artificial Intelligence on Medium
The algorithm achieved the following results for each table.
- table 1 → nodes expanded: 372 , time elapsed: 0.0109 sec
- table 2 → nodes expanded: 12161, time elapsed: 0.4089 sec
- table 3 → nodes expanded: 193, time elapsed: 0.0049 sec
- table 4 → nodes expanded: 392, time elapsed: 0.0119 sec
- table 5 → nodes expanded: 1904541, time elapsed: 65.9218 sec
As we can see, the second and the fifth tables expand a huge number of nodes in the search tree. Now let's optimize the problem and reduce that number.
CSP stands for Constraint Satisfaction Problem. Therefore, the main goal in designing such an algorithm is to satisfy all the well-defined constraints that the problem introduces. To create a CSP algorithm we need to specify three properties of our problem: variables, domains and constraints. Each variable is a piece of the problem that needs to be assigned an appropriate value in order to solve it. The domain indicates which values can be assigned to a specific variable. Finally, the constraints determine which of the values in the domain may be used at a given moment. Let's try this technique on our Sudoku problem.
The three properties of the problem are defined as follows:
- Variables: Each empty cell on the board
- Domains: For each cell, the domain is the set of numbers from 1 to 9, excluding the numbers already used in its row, column or 3 x 3 square.
- Constraints: No repeated numbers in any row, column or 3 x 3 square.
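To make these constraints concrete, here is a minimal sketch of a consistency check (a hypothetical standalone helper, not part of the article's solver class), assuming the board is a 9 x 9 list of strings with '0' marking empty cells:

```python
def isConsistent(board, row, col, value):
    # row constraint: the value must not already appear in this row
    if value in board[row]:
        return False
    # column constraint: the value must not already appear in this column
    if any(board[i][col] == value for i in range(9)):
        return False
    # 3 x 3 square constraint: the value must not appear in the cell's box
    boxRow, boxCol = row - row % 3, col - col % 3
    for i in range(3):
        for j in range(3):
            if board[boxRow + i][boxCol + j] == value:
                return False
    return True
```

A value passes only if all three constraints hold at once, which is exactly what the solver's domains encode.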
So now the outline of our CSP algorithm is defined and it is time to start optimizing the backtracking algorithm.
Firstly, we need an array of the domains of all variables; in other words, a space is needed to keep the remaining values for each variable. Therefore, a property called "rv" will be added to our class and it will be referred to as "self.rv" in the Python code below. I decided to replace the domain of fixed values on the board with ['x'] to mark that the cell contains a fixed number. For the other cells I check the Sudoku criteria to find the appropriate values and append them to the self.rv list:
# inside the solver class
def __init__(self, fileDir, dim=9):
    self.dim = dim
    self.expandedNodes = 0
    with open(fileDir) as f:
        content = f.readlines()
    self.board = [list(x.strip()) for x in content]
    self.rv = self.getRemainingValues()

def getDomain(self, row, col):
    # values still possible for cell (row, col): 1..9 minus the
    # numbers already used in its row, column and 3 x 3 square
    RVCell = [str(i) for i in range(1, self.dim + 1)]
    for i in range(self.dim):
        if self.board[row][i] != '0':
            if self.board[row][i] in RVCell:
                RVCell.remove(self.board[row][i])
    for i in range(self.dim):
        if self.board[i][col] != '0':
            if self.board[i][col] in RVCell:
                RVCell.remove(self.board[i][col])
    boxRow = row - row % 3
    boxCol = col - col % 3
    for i in range(3):
        for j in range(3):
            if self.board[boxRow + i][boxCol + j] in RVCell:
                RVCell.remove(self.board[boxRow + i][boxCol + j])
    return RVCell

def getRemainingValues(self):
    # linear list of all 81 domains; fixed cells are marked with ['x']
    rv = []
    for row in range(self.dim):
        for col in range(self.dim):
            if self.board[row][col] != '0':
                rv.append(['x'])
            else:
                rv.append(self.getDomain(row, col))
    return rv
Now that this information is ready to use, the second move is to choose an empty cell, in other words a variable. But is this choice important? Indeed it is. For this problem the simplest approach is to first fill the cells with the smallest domains. For instance, if one cell's domain is [5] and another cell's domain is [1, 2, 9], it is obviously better to fill the cell whose domain has size 1, since that is the only choice and it is definitely right. Expanding the idea: whenever you choose a value from a small domain, you have a high probability of choosing the right one. Let's call our function getNextMRVRowCol(), where MRV stands for Minimum Remaining Value. As mentioned, fixed cells are marked with 'x', so we first check whether a cell is fixed; for a fixed or empty domain we return 10, a number larger than any real domain size, to prevent the agent from choosing that cell as the minimum remaining value cell.
def getDomainLength(self, lst):
    # fixed cells and empty domains get 10, larger than any real
    # domain size, so they are never picked as the minimum
    if 'x' in lst or lst == []:
        return 10
    return len(lst)

def getNextMRVRowCol(self):
    rvMap = list(map(self.getDomainLength, self.rv))
    minimum = min(rvMap)
    if minimum == 10:
        return -1  # no empty cell left: the board is solved
    index = rvMap.index(minimum)
    return (index // 9, index % 9)
There we go, now we have a function that chooses the best spot to fill. Since we stored the domains linearly, we have to compute the row and column with a division and a modulo operation on the self.rv index. Up to here, we have just converted our backtracking problem into a CSP problem, and the only optimization we have made is to change our location finder (which acts as a heuristic). This improves our results a little. Let's use a beautiful technique called "Forward Checking" in order to achieve the results that we need.
Forward Checking simply gives the program a look ahead, increasing the probability that a choice pays off in the higher levels of the tree. For example, imagine two cells v1 and v2 in a row with domains d1=[1,2] and d2=[1]. Our location function chooses v1. Normally the program tries 1 first; as a result the new value of d2 would be d2=[], and in a deeper layer it finds out that there are no possible values left for the cell with domain d2, so it backtracks. Remember that our goal was to reduce the number of node expansions, which includes the backtracking moves. As a solution, we can first check whether choosing 1 eliminates the possible choices for the other cell. We will see that it eliminates all the remaining values of d2. At this point we know that choosing 1 definitely costs a backtracking move, and therefore we choose 2 as the possible answer.
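The v1/v2 scenario above can be sketched in a few lines of standalone Python (the names `prune` and `safe_choices` are mine, not the article's code):

```python
# Forward checking in miniature: before committing a value to v1,
# check what that choice does to the neighbour v2's domain.
def prune(choice, neighbour_domain):
    """Neighbour's domain after `choice` is placed next to it."""
    return [v for v in neighbour_domain if v != choice]

d1, d2 = [1, 2], [1]  # the domains from the example above
# choosing 1 would empty d2, so forward checking rejects it; 2 survives
safe_choices = [c for c in d1 if prune(c, d2)]
```

Here `safe_choices` ends up as `[2]`: the backtracking move that plain search would have wasted on 1 is avoided entirely.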
It is possible to implement forward checking for more than one step ahead; however, that may cause a serious time overhead. In this case, I preferred to implement a method that checks whether choosing a certain value for a cell eliminates all the possibilities for some other cell on the board. Wrapping all of this together, here's the final code:
import copy

def isEmptyDomainProduced(self, row, col, choice):
    # take the chosen cell out, then look for a neighbour whose
    # domain has been wiped out by this choice
    element = self.rv.pop(row * 9 + col)
    if [] in self.rv:
        self.rv.insert(row * 9 + col, element)
        return True
    else:
        self.rv.insert(row * 9 + col, element)
        return False

def solveCSPFH(self):
    location = self.getNextMRVRowCol()
    if location == -1:
        return True  # no empty cell left: solved
    self.expandedNodes += 1
    row, col = location
    for choice in self.rv[row * 9 + col]:
        choice_str = str(choice)
        self.board[row][col] = choice_str
        cpy = copy.deepcopy(self.rv)
        self.rv = self.getRemainingValues()
        if not self.isEmptyDomainProduced(row, col, choice_str):
            if self.solveCSPFH():
                return True
        # undo the move and restore the saved domains
        self.board[row][col] = '0'
        self.rv = cpy
    return False
Now we are finished and our algorithm is ready to be tested. The following results were achieved on the previous tables:
- table 1 → nodes expanded: 44 , time elapsed: 0.0598 sec
- table 2 → nodes expanded: 137 , time elapsed: 0.1136 sec
- table 3 → nodes expanded: 76 , time elapsed: 0.0798 sec
- table 4 → nodes expanded: 81 , time elapsed: 0.0708 sec
- table 5 → nodes expanded: 93889 , time elapsed: 82.6698 sec
This is really fantastic! We just reduced the number of node expansions by almost 85%. Notice, though, that in some cases the elapsed time is longer than with the plain backtracking algorithm. This is the overhead I mentioned earlier: although we reduced our search space, the search time increased because of the analysis we applied to the problem. The forward checking method we used is an example of a method with a time overhead. In small problems it makes no difference, and in some cases, like table 2, it even works much faster, but in complicated problems such as the fifth case the difference is about 17 seconds. A better way is to only check the domains of the cells in the same row and column as the selected cell, not the whole table!
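A localized check along those lines could start from a helper like the following sketch (hypothetical, not part of the article's code; it also includes the 3 x 3 square, since those cells are constrained as well):

```python
def neighbours(row, col, dim=9):
    # indices (into the linear 81-cell domain list) of the cells whose
    # domains can change when (row, col) is assigned a value
    cells = set()
    for i in range(dim):
        cells.add(row * dim + i)  # same row
        cells.add(i * dim + col)  # same column
    boxRow, boxCol = row - row % 3, col - col % 3
    for i in range(3):
        for j in range(3):
            cells.add((boxRow + i) * dim + boxCol + j)  # same square
    cells.discard(row * dim + col)  # the cell itself is excluded
    return cells
```

Re-checking only these 20 domains instead of all 81 removes most of the overhead measured above.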
Furthermore, you can implement heuristic functions to guess which value of a variable's domain is better to choose. For instance, your heuristic function could choose the value that appears least often on the table. If your heuristic is consistent it will definitely improve your results.
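A least-frequent-value ordering of that kind could be sketched like this (a hypothetical `orderByRarity` helper, not from the article):

```python
from collections import Counter

def orderByRarity(board, domain):
    # count how often each digit already appears on the board
    counts = Counter(v for row in board for v in row if v != '0')
    # try the rarest digits first; unseen digits count as 0
    return sorted(domain, key=lambda v: counts[v])
```

Plugging this in means the `for choice in ...` loop of the solver tries the least-used digit first instead of ascending numeric order.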
Although backtracking algorithms can solve any such problem in theory, they require a large amount of memory and consume a lot of time. CSP techniques were introduced to shrink that large search space and speed up the algorithms. With a good forward checking implementation and consistent heuristic functions, high-speed problem solving with low memory requirements becomes possible.
I hope you find this article useful. You can find my code on my gitlab.
Special thanks to my dear teacher Dr. Ziarati for assigning the project.