Hyperparameter Tuning TensorFlow 2 Models with Keras-Tuner


How To Automate Hyperparameter Tuning of TensorFlow 2 Models with Keras-Tuner


Generating deep learning models is highly experimental by nature and by design. That experimental character is also one of the biggest pains of building such models: it is time-consuming as well as computationally expensive, as one fiddles with endless tweaks in a bid to find the optimal parameters of a model. However, this could (and should) all change very soon with the major 1.0 release of Keras-Tuner by the same team that gave the data science community “deep learning for humans”: Keras!

What’s the fuss about Keras-Tuner?

Hyperparameter tuning is a fancy term for the set of processes used to find the best parameters of a model (that sweet spot which squeezes out every last bit of performance). To understand my excitement about this gem of a library, let’s unpack the problem it aims to solve.

Let’s take a practical and typical deep learning example: a sequential model in which some hidden layers are stacked between the input and output layers. The input and output layers are constrained by the problem at hand, so there is not much of a challenge in defining them. However, there is no telling in advance how many hidden layers strike a good balance between performance on the training data and performance in the real world, especially with the danger of overfitting (the bias-variance tradeoff). This leads to building lots and lots of different combinations by hand, which is tedious, inefficient, and computationally expensive.

Keras-Tuner aims to offer a more streamlined approach to finding the best parameters of a specified model with the help of tuners, the agents responsible for searching for that sweet spot.

Currently, Keras-Tuner supports 4 tuners:

  • BayesianOptimization
  • Hyperband
  • RandomSearch
  • Sklearn

As this post mainly focuses on deep learning models, we will only explore the BayesianOptimization, Hyperband, and RandomSearch tuners; the Sklearn tuner is left for readers to explore on their own.

Keras-Tuner In Action

Let’s have some fun with our new tool! First, install the package (version 1.0 as of the time of writing) from the Python Package Index:

pip install -U keras-tuner

We need some data to test this library. We will use the divorce dataset from a 2019 research paper. It is ideal for a post like this: very small by today’s standards, so it won’t need much time to train and validate, which suits an illustrative walkthrough. In the original study, the authors looked at 54 attributes/features from 170 subjects, of whom 84 were divorced and 86 were still married.

The 54 items (reproduced here verbatim from the dataset) to which each participant responded are:

1. If one of us apologizes when our discussion deteriorates, the discussion ends.
2. I know we can ignore our differences, even if things get hard sometimes.
3. When we need it, we can take our discussions with my spouse from the beginning and correct it.
4. When I discuss with my spouse, to contact him will eventually work.
5. The time I spent with my wife is special for us.
6. We don't have time at home as partners.
7. We are like two strangers who share the same environment at home rather than family.
8. I enjoy our holidays with my wife.
9. I enjoy traveling with my wife.
10. Most of our goals are common to my spouse.
11. I think that one day in the future, when I look back, I see that my spouse and I have been in harmony with each other.
12. My spouse and I have similar values in terms of personal freedom.
13. My spouse and I have similar sense of entertainment.
14. Most of our goals for people (children, friends, etc.) are the same.
15. Our dreams with my spouse are similar and harmonious.
16. We're compatible with my spouse about what love should be.
17. We share the same views about being happy in our life with my spouse
18. My spouse and I have similar ideas about how marriage should be
19. My spouse and I have similar ideas about how roles should be in marriage
20. My spouse and I have similar values in trust.
21. I know exactly what my wife likes.
22. I know how my spouse wants to be taken care of when she/he sick.
23. I know my spouse's favorite food.
24. I can tell you what kind of stress my spouse is facing in her/his life.
25. I have knowledge of my spouse's inner world.
26. I know my spouse's basic anxieties.
27. I know what my spouse's current sources of stress are.
28. I know my spouse's hopes and wishes.
29. I know my spouse very well.
30. I know my spouse's friends and their social relationships.
31. I feel aggressive when I argue with my spouse.
32. When discussing with my spouse, I usually use expressions such as ‘you always’ or ‘you never’ .
33. I can use negative statements about my spouse's personality during our discussions.
34. I can use offensive expressions during our discussions.
35. I can insult my spouse during our discussions.
36. I can be humiliating when we discussions.
37. My discussion with my spouse is not calm.
38. I hate my spouse's way of open a subject.
39. Our discussions often occur suddenly.
40. We're just starting a discussion before I know what's going on.
41. When I talk to my spouse about something, my calm suddenly breaks.
42. When I argue with my spouse, ı only go out and I don't say a word.
43. I mostly stay silent to calm the environment a little bit.
44. Sometimes I think it's good for me to leave home for a while.
45. I'd rather stay silent than discuss with my spouse.
46. Even if I'm right in the discussion, I stay silent to hurt my spouse.
47. When I discuss with my spouse, I stay silent because I am afraid of not being able to control my anger.
48. I feel right in our discussions.
49. I have nothing to do with what I've been accused of.
50. I'm not actually the one who's guilty about what I'm accused of.
51. I'm not the one who's wrong about problems at home.
52. I wouldn't hesitate to tell my spouse about her/his inadequacy.
53. When I discuss, I remind my spouse of her/his inadequacy.
54. I'm not afraid to tell my spouse about her/his incompetence.

Using these attributes/features as input features, the researchers were able to generate a predictive model with a best accuracy of 98.82% from an artificial neural network. The goal of this post is to use our newly acquired tuning agents to generate a classification model that at least matches that original accuracy score, or better yet beats it.

The script below downloads the dataset, partitions it into training and validation sets, and sets the stage for the tuners to do their job:
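A minimal version of that preparation step might look like the sketch below. It assumes the UCI copy of the dataset has already been saved locally as divorce.csv (semicolon-separated, with 54 feature columns and a binary Class label); the file name and split ratio are illustrative choices, not from the original script.

import pandas as pd
from sklearn.model_selection import train_test_split

# Assumption: divorce.csv is a local copy of the UCI divorce dataset,
# semicolon-separated, with a binary "Class" label (1 = divorced).
data = pd.read_csv("divorce.csv", sep=";")

X = data.drop(columns=["Class"]).values.astype("float32")
y = data["Class"].values

# Hold out a validation set for the tuners to score candidates against.
x_train, x_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)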

With the data now downloaded and prepared, let’s construct the hypermodel within which each of the tuners will have to find the parameters that give the best performance.
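In Keras-Tuner, a hypermodel is simply a model-building function that takes a hp argument and uses it to declare the search space. The exact search space from the original post is not shown here, so the layer counts, unit ranges, and learning rates below are illustrative:

from tensorflow import keras

def build_model(hp):
    model = keras.Sequential()
    model.add(keras.layers.InputLayer(input_shape=(54,)))  # one input per questionnaire item
    # Search over both the number of hidden layers and the width of each one.
    for i in range(hp.Int("num_layers", min_value=1, max_value=3)):
        model.add(keras.layers.Dense(
            units=hp.Int("units_" + str(i), min_value=8, max_value=64, step=8),
            activation="relu"))
    model.add(keras.layers.Dense(1, activation="sigmoid"))  # divorced or not
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=["accuracy"])
    return model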

1. Hyperband

The Hyperband tuner sets out to select the best parameters by sampling randomly at first to get a sense of what fits well, then smartly persisting only with the best-performing candidates based on those initial results.
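Wiring the hypermodel into the Hyperband tuner looks roughly like this (the directory, project name, and max_epochs budget are illustrative; note that newer releases import from keras_tuner rather than kerastuner):

from kerastuner.tuners import Hyperband

tuner = Hyperband(
    build_model,
    objective="val_accuracy",
    max_epochs=30,               # training budget for the best candidates
    directory="tuning",          # where trial results are written
    project_name="divorce_hyperband")

# Hyperband schedules the per-trial epoch budget itself,
# so only the data is passed to search().
tuner.search(x_train, y_train, validation_data=(x_val, y_val))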

A look at the accuracy of the returned model shows that the classifier from Hyperband achieves a perfect accuracy score of 1.0.
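Checking that score once the search finishes is straightforward (a sketch, reusing the validation split from above):

# Pull the best model found by the search and score it on held-out data.
best_model = tuner.get_best_models(num_models=1)[0]
loss, accuracy = best_model.evaluate(x_val, y_val)
print("Validation accuracy: {:.4f}".format(accuracy))

That’s a good start already, but let’s see whether the RandomSearch tuner can match it.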

Loss & accuracy per epoch for the Hyperband model

2. RandomSearch

Just as the name suggests, this tuner randomly selects different combinations from all the possible parameter combinations, hoping to stumble on a good fit during this random sequence of trials.
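Running it is almost identical to Hyperband, except that the budget is expressed as a number of trials (again, the arguments here are illustrative):

from kerastuner.tuners import RandomSearch

tuner = RandomSearch(
    build_model,
    objective="val_accuracy",
    max_trials=20,               # number of hyperparameter combinations to try
    directory="tuning",
    project_name="divorce_random")

tuner.search(x_train, y_train, epochs=30,
             validation_data=(x_val, y_val))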

The RandomSearch tuner does match the performance of the Hyperband tuner. This result is particularly interesting, as this tuner can come in handy for really large datasets where other tuners could prove too computationally expensive. The final test will be the Bayesian tuner.

Loss & accuracy per epoch for the RandomSearch model

3. BayesianOptimization

Currently, there is a bug (which I caught and reported, with the fix already merged!) in the BayesianOptimization tuner in version 1.0. The fix should be available in the next release, but for now the only way to use the updated version is to install the package from source:

git clone https://github.com/keras-team/keras-tuner.git
cd keras-tuner
pip install .
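With the patched version installed, the Bayesian tuner is driven exactly like the others (same illustrative arguments as before):

from kerastuner.tuners import BayesianOptimization

tuner = BayesianOptimization(
    build_model,
    objective="val_accuracy",
    max_trials=20,               # upper bound on models the optimizer will try
    directory="tuning",
    project_name="divorce_bayesian")

tuner.search(x_train, y_train, epochs=30,
             validation_data=(x_val, y_val))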

Unsurprisingly, and in typical Bayesian style, the first generated models perform very badly but improve as new information is fed back, until the tuner also converges to the same perfect accuracy score as the Hyperband and RandomSearch tuners.

Loss & accuracy per epoch for the BayesianOptimization model

Summary

In this post, we have looked at three of the tuners (Hyperband, RandomSearch, and BayesianOptimization) currently supported by Keras-Tuner for hyperparameter tuning. All three tuners successfully generated an artificial neural network (ANN) that achieved a perfect validated accuracy score of 100%, while the research paper used for this test managed 98.82%.

There are so many tools in current development that hope to help quicken the transition from ideas to production-ready solutions. Keras-Tuner is one of those tools that every data scientist should have in their toolbox.