Original article was published on Artificial Intelligence on Medium
Implementing Logic Gates using Neural Networks (Part 1)
Today, I will be discussing an application of neural networks: using them as logic gates. Logic gates form the basis of the complex calculations we perform, from addition and subtraction all the way to integration and differentiation. Logic gates such as OR, NOR, AND, NAND, and XOR are prevalent, and most of us have come across their respective truth tables at some point in our lives.
Starting off with Neural Networks
Above is the figure of a simple neural network. Here, we have 2 input neurons, or an x vector, with values x1 and x2. The input neuron with the fixed value 1 is for the bias weight. The input values, i.e. x1, x2, and 1, are multiplied by their respective weights W1, W2, and W0. The corresponding values are then fed to the summation neuron, which computes the summed value Z = W0 + W1*x1 + W2*x2.
The value so obtained is fed to a neuron with a non-linear activation function (sigmoid in our case) for scaling the output to a desirable range. The sigmoid squashes any input into the range (0, 1); we then classify the result as 0 if the sigmoid output is less than 0.5 and as 1 if it is greater than 0.5.
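This single-neuron computation can be sketched in a few lines of Python. The function names here (`sigmoid`, `neuron`) are my own for illustration; the weights and the 0.5 decision boundary follow the description above.

```python
import math

def sigmoid(z):
    """Squash z into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x1, x2, W0, W1, W2):
    """One neuron: bias plus weighted sum, sigmoid, then threshold at 0.5."""
    z = W0 + W1 * x1 + W2 * x2       # the summed value Z
    y_hat = sigmoid(z)               # scaled into (0, 1)
    return 1 if y_hat > 0.5 else 0   # classify at the 0.5 boundary
```

Note that sigmoid(z) > 0.5 exactly when z > 0, which is why the later sections reason directly about the sign of Z.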
Implementing OR gate
The truth table of the OR gate is as follows
The table on the left depicts the truth table of the OR gate. For the given two inputs, if either input is 1, the output y is 1. The graph on the right plots all four input pairs, i.e. (0,0), (0,1), (1,0), and (1,1). The network below is the implementation of a neural network as an OR gate.
The input to the sigmoid function is Z. If Z is less than 0, the sigmoid output will be less than 0.5, so the input will be classified as 0. If Z is greater than 0, the output will be greater than 0.5 and will therefore be classified as 1. All this boils down to one question: given the input vector x, what should the weight vector be to achieve this?
Consider a situation in which the input, or the x vector, is (0,0). The value of Z in that case is nothing but W0. For the definition of the OR gate to hold, the output ŷ must be 0, so W0 has to be less than 0. Hence, we can conclude that W0 must be a negative value.
Now, consider a situation in which the input, or the x vector, is (0,1). Here the value of Z is W0 + 0 + W2*1. This, being the input to the sigmoid function, should be greater than 0 so that the output exceeds 0.5 and is classified as 1. Hence, W0 + W2 > 0. If we take W0 as -1 (remember, W0 has to be negative) and W2 as +2, the result comes out to -1 + 2 = 1, which satisfies the above inequality.
Similarly, for the (1,0) case, the value of W0 will be -1 and that of W1 can be +2.
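Putting the derived weights together, we can check that W0 = -1, W1 = W2 = +2 reproduce the whole OR truth table. This is a minimal sketch; the `or_gate` name is mine, but the weights are exactly those derived above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def or_gate(x1, x2):
    """OR gate with the weights derived above: W0 = -1, W1 = W2 = +2."""
    z = -1 + 2 * x1 + 2 * x2
    return 1 if sigmoid(z) > 0.5 else 0

# Reproduces the OR truth table: only (0,0) maps to 0.
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, or_gate(*pair))
```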
The line separating the four points is therefore given by the equation W0 + W1*x1 + W2*x2 = 0, where W0 is -1 and both W1 and W2 are +2. Substituting these values, the equation of the separating line is x1 + x2 = 1/2. The implementation of the NOR gate is similar, with just the weights changed: W0 becomes +1, and W1 and W2 become -2.
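The NOR weights can be verified the same way. Again a sketch with an illustrative function name, using the weights stated above (W0 = +1, W1 = W2 = -2): only the (0,0) input leaves Z positive, so only it is classified as 1.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nor_gate(x1, x2):
    """NOR gate: the OR weights with their signs flipped (W0 = +1, W1 = W2 = -2)."""
    z = 1 - 2 * x1 - 2 * x2
    return 1 if sigmoid(z) > 0.5 else 0

# Reproduces the NOR truth table: only (0,0) maps to 1.
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, nor_gate(*pair))
```

Flipping the signs of all three weights negates Z everywhere, which is exactly what turns an OR decision boundary into a NOR one.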
The upcoming blogs will follow the implementations of the remaining logic gates (AND, NAND, and XOR).