Deep Learning vs Puzzle Games

Original article was published by Kabalan Gaspard on Deep Learning on Medium

Going supervised: Convolutional Neural Networks

I was initially biased against the idea of a supervised approach to Flow Free. First, it would require a large sample of solved Flow Free games, which are difficult to find on the public internet. Second, the supervised technique that seemed to most closely match Flow Free, the neural network, is a notorious black box, which would preclude the most interesting part of the exercise: seeing the techniques the algorithm learns in order to solve the puzzle.

However, I then came across an article by Shiva Verma in Towards Data Science where he does something fairly similar with Sudoku: he essentially treats a Sudoku board as an image and uses a convolutional neural network (CNN) to solve it. He achieved good results with Sudoku, which led me to revisit my initial reservations and try this approach for Flow Free anyway.
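To make the idea concrete, here is a minimal Keras sketch of this kind of architecture. The board size, colour count, layer sizes, and one-hot input encoding are all illustrative assumptions on my part, not the exact architecture from Verma's article or from my experiments:

```python
from tensorflow import keras
from tensorflow.keras import layers

BOARD = 5     # assumed board size (5x5)
COLOURS = 6   # assumed maximum number of colours; channel 0 = empty cell

# Input: the puzzle as a one-hot "image" (empty cells + endpoint colours);
# output: a colour probability distribution for every cell of the board.
model = keras.Sequential([
    keras.Input(shape=(BOARD, BOARD, COLOURS + 1)),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Conv2D(128, 1, activation="relu"),
    layers.Conv2D(COLOURS, 1),                 # per-cell colour logits
    layers.Reshape((BOARD * BOARD, COLOURS)),  # one row per cell
    layers.Activation("softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Because every layer is convolutional (the dense "head" is a 1×1 convolution), the same weights apply at every cell, which is what lets the network learn local path-drawing rules rather than memorising whole boards.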

The first hurdle was, of course, getting the input data; solved Flow Free puzzles in parseable text format are a lot harder to find than Sudoku puzzles. The best way I found in the beginning was to look into the code of the Android app, which had just over a thousand puzzles stored in text format:

The initial results of the CNN were disappointing: a flat 0% of puzzles from the test sample solved, although the CNN was learning that it should make paths, rather than just fill out colours in isolation.

We need more data

Tweaking layers, epochs, kernel size, and the other usual suspects didn’t help much. It looked like we were back to a data scientist’s most common problem: the best algorithm in the world is nothing without enough data. The next best source of data I found had thousands of Flow Free solutions to puzzles entirely different from the ones I already had…but in image format.

Despite my numerous attempts to contact them through various channels (and even with a financial incentive offered), I was not able to get them to send me a parseable text version of their solutions. Well, this isn’t an AI article for nothing — who needs the underlying solution matrix when one has modern image processing? Cue a sub-project to build a Flow Free solution image processor:

Scikit-image FTW
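The core of such an image processor can be sketched in a few lines: sample the centre pixel of every cell and relabel the distinct colours as small integers. This is a minimal numpy-only sketch of the idea under a uniform-grid assumption; the function name and the approach are mine, not the original processor's code:

```python
import numpy as np

def image_to_grid(img, n):
    """Recover an n x n colour-index grid from a solution screenshot.

    img: H x W x 3 RGB array, assumed to be a tight crop of the board
    with n x n equally sized cells.
    """
    h, w, _ = img.shape
    cell_h, cell_w = h // n, w // n
    # Sample the centre pixel of every cell.
    centres = np.array([
        [img[r * cell_h + cell_h // 2, c * cell_w + cell_w // 2]
         for c in range(n)]
        for r in range(n)
    ])
    # Map each distinct RGB value to a small integer colour label.
    flat = centres.reshape(-1, 3)
    _, labels = np.unique(flat, axis=0, return_inverse=True)
    return labels.reshape(n, n)
```

Real screenshots need a little more care (cropping the board, tolerating anti-aliasing by clustering nearby RGB values), which is where scikit-image earns its keep.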

Using symmetry to multiply our available data

This yielded a couple of thousand more data points to work with, which still wasn’t much. I then realised that, as far as the CNN is concerned, the exact value of a colour doesn’t matter, only the fact that the colours are different. So we can permute the colours and obtain another non-redundant data point. Even for 5×5 boards, which have up to 6 different colours, this gives our CNN as many as 6! = 720 times more data points to work with (and of course even more combinations to choose from for larger boards with more colours):

Mathematicians will recognise the symmetric group S_n from Group Theory
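The colour-permutation trick fits in a few lines. This is a sketch of the idea with names of my choosing, assuming colours are encoded as integers 1..n with 0 for empty cells:

```python
import itertools
import numpy as np

def permute_colours(board, solution, n_colours):
    """Yield every colour relabelling of a (puzzle, solution) pair.

    Colours 1..n_colours are permuted; 0 (empty cell) is left alone.
    Produces n_colours! pairs, including the identity.
    """
    for perm in itertools.permutations(range(1, n_colours + 1)):
        mapping = np.arange(n_colours + 1)
        mapping[1:] = perm
        # Fancy indexing relabels every cell in one shot.
        yield mapping[board], mapping[solution]
```

Permuting the same mapping over both the puzzle and its solution keeps the pair consistent, so each of the n! variants is a valid training example.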

A friend pointed out that this is in fact a common way to augment data in game AI. With 720x as many data points, we were finally getting somewhere: 12% accuracy on test data with a 20-epoch model and ~200k data points, which took 20 minutes to train on my 7-year-old GPU. Note that we are using a strict criterion for success here: even if the board is off by a single cell, we count it as a failure.
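This strict criterion is worth spelling out, since it explains why 12% understates how close the CNN gets. A small sketch (the function name is mine):

```python
import numpy as np

def strict_accuracy(predictions, targets):
    """Fraction of boards solved exactly.

    A board counts as solved only if every cell matches the target
    solution; a single wrong cell fails the whole board.
    """
    preds = np.asarray(predictions)
    targs = np.asarray(targets)
    solved = np.all(preds == targs, axis=(1, 2))  # per-board exact match
    return solved.mean()
```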

However, the failures were much more interesting than the successes here. In almost all of them, the CNN solved most of the board correctly, enough that it would be easy for a human to complete. In this sense, the CNN fulfilled the original premise of the article: to intuitively learn human techniques:

Distribution of errors of the CNN

Conclusions


  • Simpler methods typically remain better for solving grid-based, non-adversarial puzzle games. In fact, as the methods got gradually more advanced, performance got worse.
  • However, although more advanced ML methods don’t solve the puzzle as quickly, they do surface interesting insights and help humans reach a better solution; a convolutional neural network did this best. Moreover, their performance scales better than that of more traditional solutions.
  • Better data beats better algorithms.

Going further: reader and editor suggestions

I asked some experienced AI/ML practitioners (a big thank you to Matt Zucker, Youssef Keyrouz, and Mark Saroufim) to review this article, and they suggested trying the following ideas to improve the CNN. These may be the subject of a Part 2 article, or you can try them yourselves (as well as the approaches detailed in this article) on

  • Changing the number of layers in the CNN (ablation did not seem to help)
  • In addition to using symmetry to multiply our data points, also using rotations/reflections
  • Using the CNN predictions as features for the reinforcement learning agent
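
The rotation/reflection suggestion above composes naturally with the colour permutations: the 8 symmetries of the square (the dihedral group D4) each give another valid board. A sketch, with a function name of my choosing:

```python
import numpy as np

def dihedral_variants(board):
    """Return the 8 symmetries of the square applied to a board:
    4 rotations, each with and without a horizontal flip.

    Apply the same symmetry to the solution to keep pairs consistent,
    and combine with colour permutations for up to 8 * n! variants.
    """
    variants = []
    for k in range(4):
        rotated = np.rot90(board, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants
```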