Source: Deep Learning on Medium
Machine Learning: the capacity of a computer to learn from experience, i.e. to modify its processing on the basis of newly acquired information.
To understand how machines learn, we must understand how to set up an algorithm to handle this process. A common schematic I like to use is Rosenblatt’s schematic of a Perceptron.
HOW IT WORKS
The model takes the inputs [x1, x2, x3], multiplies each by its weight [w1, w2, w3], and sums the products to give the net input.
The weights are initialised as random values between -1 and 1, but are adjusted throughout the process.
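As a quick sketch of this step (the values and variable names are my own, purely for illustration), the net input is just the weighted sum of the inputs:

```python
import random

# Inputs and randomly initialised weights (illustrative values)
inputs = [0.5, -0.2, 0.8]                          # x1, x2, x3
weights = [random.uniform(-1, 1) for _ in inputs]  # w1, w2, w3 in [-1, 1]

# Net input: sum of each input multiplied by its weight
net_input = sum(x * w for x, w in zip(inputs, weights))
```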
Here is the most crucial part of this supervised learning process: we compare the result (which, since the weights are random, is most likely wrong) with our desired or expected result, and adjust the weights according to the error. Repeating this over many cycles, known as epochs, refines the weights. In many cases the weights are stored after millions of iterations to increase performance.
An extra detail is the learning rate, a value that slows the speed of learning so the model doesn’t jump too far past the answer and overcompensate. This value is usually between 0 and 1, e.g. 0.1.
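A minimal sketch of a single weight adjustment, using the classic perceptron learning rule (the training example and starting weights here are invented for illustration):

```python
# One weight update for a single training example (illustrative values)
learning_rate = 0.1
inputs = [1.0, 0.0, 1.0]
weights = [0.3, -0.6, -0.4]
expected = 1  # the desired output for this example

# Binary step activation: fire (1) if the net input is positive, else 0
net_input = sum(x * w for x, w in zip(inputs, weights))
output = 1 if net_input > 0 else 0

# Adjust each weight in proportion to the error, scaled by the learning rate
error = expected - output
weights = [w + learning_rate * error * x for x, w in zip(inputs, weights)]
```

Running this over every training example, for many epochs, is what gradually refines the weights.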
The net input runs through an activation function, which decides whether to fire or not, exactly like a neuron. This one is a binary step function, tech jargon for an output of either 0 or 1; other types produce curves, such as the sigmoid. But the binary step function is easier to understand for those less experienced.
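The binary step function described here is a one-liner (the threshold of zero is my assumption, as the article doesn’t state one):

```python
def binary_step(net_input):
    # Fire (1) if the net input is above the threshold of zero, otherwise 0
    return 1 if net_input > 0 else 0
```

For example, `binary_step(0.5)` fires (returns 1) while `binary_step(-0.3)` does not (returns 0).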
The output is then given to the system, e.g. a website, to use!