Detecting Pulsar Stars in Space using Artificial Neural Networks


3. Creating and Fitting the Artificial Neural Network model

This is where the fun starts. Now it’s time to actually create the ANN. The two libraries we will be using to create, train, and evaluate our ANN model are TensorFlow and Keras. We will be using TensorFlow 2.0, which has Keras integrated into it.
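Here is a minimal sketch of the imports this setup assumes (TensorFlow 2.x with its built-in Keras API):

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping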

Sequential is our ANN model class. A Dense layer is a regular, densely connected layer in which each perceptron receives input from all the perceptrons in the previous layer. A Dropout layer, on the other hand, is used to prevent overfitting, as it randomly shuts off the output of a chosen fraction of perceptrons from the previous layer. In this model, there is one input layer, nine hidden layers, and one output layer. The activation function used for the input and hidden layers is the Rectified Linear Unit (ReLU), which simply outputs max(0, x).

For the output layer, the Sigmoid activation function has been used, which outputs a value between 0 and 1. The purpose of using Sigmoid is to get the probability of the candidate being a Pulsar. The key to creating a well-performing ANN model is experimentation with the architecture and hyperparameters.
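A sketch of a model in this spirit is shown below: ReLU hidden layers with Dropout in between and a single Sigmoid output unit. The layer widths and the dropout rate are illustrative assumptions rather than the exact values used here; the input shape of 8 corresponds to the eight features of the HTRU2 pulsar dataset.

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(8,)))  # input layer (width is an assumption)
for _ in range(9):                                         # nine hidden layers
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))                                 # dropout rate is an assumption
model.add(Dense(1, activation='sigmoid'))                   # probability of being a pulsar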

The optimizer that has been used is Adam. Adam is an optimizer that adapts the learning rate while performing Gradient Descent during Backpropagation. As for the loss function, since this is a binary classification problem, the Binary Cross-Entropy loss function has been utilized. The idea of a Neural Network is that the loss is calculated after the feed-forward pass, and that loss is then minimized with respect to each weight during Backpropagation.
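A minimal sketch of how this compilation step might look, assuming the default Adam learning rate:

model.compile(optimizer='adam', loss='binary_crossentropy')  # Adam optimizer + binary cross-entropy loss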

I have also used EarlyStopping in order to prevent overfitting. EarlyStopping monitors a certain metric (in this case the loss on the validation/test data) and stops fitting the model when that metric starts changing for the worse (for example, if the loss starts increasing or the accuracy starts decreasing).
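A sketch of such a callback, monitoring the validation loss; the patience value (how many epochs with no improvement to tolerate before stopping) is an assumption:

early_stop = EarlyStopping(monitor='val_loss', patience=20)  # stop once val_loss stops improving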

Finally, fitting the model. When using a Callback such as EarlyStopping, it is good to specify a high number of epochs (one epoch is one full forward and backward pass over the entire training dataset). So to fit the model, we specify the training data, the labels, the number of epochs, the validation data (which we want to monitor with our Callback), and finally our Callback(s).
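A sketch of this fitting call is below; the variable names X_train, y_train, X_test, and y_test are assumed from the earlier preprocessing steps:

# Fit on the training set, monitor the held-out set, and let EarlyStopping halt training early.
model.fit(X_train, y_train,
          epochs=1000,
          validation_data=(X_test, y_test),
          callbacks=[early_stop])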

Train on 12528 samples, validate on 5370 samples
Epoch 1/1000
12528/12528 [==============================] - 2s 137us/sample - loss: 0.5806 - val_loss: 0.4118
Epoch 2/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.4070 - val_loss: 0.2503
Epoch 3/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.2952 - val_loss: 0.1446
Epoch 4/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.2473 - val_loss: 0.1195
Epoch 5/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.2267 - val_loss: 0.1084
Epoch 6/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.2090 - val_loss: 0.1027
Epoch 7/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1990 - val_loss: 0.1006
Epoch 8/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1961 - val_loss: 0.1025
Epoch 9/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1895 - val_loss: 0.1014
Epoch 10/1000
12528/12528 [==============================] - 1s 54us/sample - loss: 0.1752 - val_loss: 0.0987
Epoch 11/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1814 - val_loss: 0.1027
Epoch 12/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1680 - val_loss: 0.0973
Epoch 13/1000
12528/12528 [==============================] - 1s 54us/sample - loss: 0.1760 - val_loss: 0.1003
Epoch 14/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1658 - val_loss: 0.0990
Epoch 15/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1683 - val_loss: 0.1007
Epoch 16/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1631 - val_loss: 0.0992
Epoch 17/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1628 - val_loss: 0.0996
Epoch 18/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1684 - val_loss: 0.0967
Epoch 19/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1664 - val_loss: 0.0978
Epoch 20/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1564 - val_loss: 0.0957
Epoch 21/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1569 - val_loss: 0.0969
Epoch 22/1000
12528/12528 [==============================] - 1s 54us/sample - loss: 0.1555 - val_loss: 0.0968
Epoch 23/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1594 - val_loss: 0.1002
Epoch 24/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1597 - val_loss: 0.0949
Epoch 25/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1636 - val_loss: 0.0989
Epoch 26/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1492 - val_loss: 0.0944
Epoch 27/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1588 - val_loss: 0.0974
Epoch 28/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1535 - val_loss: 0.0970
Epoch 29/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1602 - val_loss: 0.0993
Epoch 30/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1514 - val_loss: 0.1014
Epoch 31/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1556 - val_loss: 0.0952
Epoch 32/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1541 - val_loss: 0.0940
Epoch 33/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1542 - val_loss: 0.0935
Epoch 34/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1545 - val_loss: 0.0942
Epoch 35/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1498 - val_loss: 0.0923
Epoch 36/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1474 - val_loss: 0.0924
Epoch 37/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1474 - val_loss: 0.0966
Epoch 38/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1446 - val_loss: 0.0947
Epoch 39/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1573 - val_loss: 0.0975
Epoch 40/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1485 - val_loss: 0.0948
Epoch 41/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1514 - val_loss: 0.0923
Epoch 42/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1464 - val_loss: 0.0917
Epoch 43/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1475 - val_loss: 0.0918
Epoch 44/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1432 - val_loss: 0.0944
Epoch 45/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1546 - val_loss: 0.0917
Epoch 46/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1512 - val_loss: 0.0966
Epoch 47/1000
12528/12528 [==============================] - 1s 63us/sample - loss: 0.1530 - val_loss: 0.0928
Epoch 48/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1469 - val_loss: 0.0920
Epoch 49/1000
12528/12528 [==============================] - 1s 60us/sample - loss: 0.1509 - val_loss: 0.0941
Epoch 50/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1514 - val_loss: 0.0929
Epoch 51/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1504 - val_loss: 0.0932
Epoch 52/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1518 - val_loss: 0.0936
Epoch 53/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1475 - val_loss: 0.0927
Epoch 54/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1573 - val_loss: 0.0963
Epoch 55/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1423 - val_loss: 0.0913
Epoch 56/1000
12528/12528 [==============================] - 1s 60us/sample - loss: 0.1484 - val_loss: 0.0919
Epoch 57/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1491 - val_loss: 0.0923
Epoch 58/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1478 - val_loss: 0.0908
Epoch 59/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1488 - val_loss: 0.0939
Epoch 60/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1521 - val_loss: 0.0934
Epoch 61/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1413 - val_loss: 0.0901
Epoch 62/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1506 - val_loss: 0.0922
Epoch 63/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1425 - val_loss: 0.0900
Epoch 64/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1525 - val_loss: 0.0912
Epoch 65/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1520 - val_loss: 0.0940
Epoch 66/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1521 - val_loss: 0.0957
Epoch 67/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1499 - val_loss: 0.0930
Epoch 68/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1466 - val_loss: 0.0919
Epoch 69/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1528 - val_loss: 0.0915
Epoch 70/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1476 - val_loss: 0.0924
Epoch 71/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1477 - val_loss: 0.0911
Epoch 72/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1575 - val_loss: 0.0926
Epoch 73/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1532 - val_loss: 0.0922
Epoch 74/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1525 - val_loss: 0.0909
Epoch 75/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1532 - val_loss: 0.0920
Epoch 76/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1447 - val_loss: 0.0911
Epoch 77/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1529 - val_loss: 0.0895
Epoch 78/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1514 - val_loss: 0.0903
Epoch 79/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1468 - val_loss: 0.0908
Epoch 80/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1452 - val_loss: 0.0891
Epoch 81/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1501 - val_loss: 0.0898
Epoch 82/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1484 - val_loss: 0.0915
Epoch 83/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1579 - val_loss: 0.0924
Epoch 84/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1436 - val_loss: 0.0952
Epoch 85/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1461 - val_loss: 0.0927
Epoch 86/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1401 - val_loss: 0.0896
Epoch 87/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1467 - val_loss: 0.0941
Epoch 88/1000
12528/12528 [==============================] - 1s 60us/sample - loss: 0.1613 - val_loss: 0.0971
Epoch 89/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1451 - val_loss: 0.0896
Epoch 90/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1482 - val_loss: 0.0916
Epoch 91/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1509 - val_loss: 0.0936
Epoch 92/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1513 - val_loss: 0.0928
Epoch 93/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1472 - val_loss: 0.0909
Epoch 94/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1475 - val_loss: 0.0918
Epoch 95/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1494 - val_loss: 0.0949
Epoch 96/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1465 - val_loss: 0.0900
Epoch 97/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1465 - val_loss: 0.0894
Epoch 98/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1404 - val_loss: 0.0897
Epoch 99/1000
12528/12528 [==============================] - 1s 60us/sample - loss: 0.1496 - val_loss: 0.0914
Epoch 100/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1396 - val_loss: 0.0869
Epoch 101/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1502 - val_loss: 0.0894
Epoch 102/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1542 - val_loss: 0.0917
Epoch 103/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1479 - val_loss: 0.0901
Epoch 104/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1493 - val_loss: 0.0891
Epoch 105/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1498 - val_loss: 0.0903
Epoch 106/1000
12528/12528 [==============================] - 1s 62us/sample - loss: 0.1497 - val_loss: 0.0939
Epoch 107/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1479 - val_loss: 0.0908
Epoch 108/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1458 - val_loss: 0.0906
Epoch 109/1000
12528/12528 [==============================] - 1s 62us/sample - loss: 0.1444 - val_loss: 0.0892
Epoch 110/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1439 - val_loss: 0.0928
Epoch 111/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1456 - val_loss: 0.0888
Epoch 112/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1402 - val_loss: 0.0964
Epoch 113/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1520 - val_loss: 0.0981
Epoch 114/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1537 - val_loss: 0.0999
Epoch 115/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1485 - val_loss: 0.0911
Epoch 116/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1550 - val_loss: 0.0906
Epoch 117/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1447 - val_loss: 0.0883
Epoch 118/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1516 - val_loss: 0.0892
Epoch 119/1000
12528/12528 [==============================] - 1s 53us/sample - loss: 0.1533 - val_loss: 0.0896
Epoch 120/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1456 - val_loss: 0.0897

So as you can see, the model has finished fitting, and it ran for only 120 of the 1000 specified epochs because the EarlyStopping callback halted training. You can also see how the training loss and the validation loss (val_loss) generally decrease with each epoch, which shows that our model is becoming better adapted to our data. However, if the validation loss spikes or stops decreasing while the training loss keeps falling, it is a sign that the model is overfitting. Our model is well fit (largely thanks to the Dropout layers and the EarlyStopping callback), and a validation loss of about 0.087 is very good for this problem.