The Forcefulness of Neural Networks

Learning Relationships using the Neural Network

In Machine Learning, Neural Networks are an important class of models with wide-ranging applications. A neural network typically contains numerous parameters that are used collectively to map relevant input information to predictions or forecasts.

Neural networks allow us to create sophisticated relationships and answer practical questions such as:

(i) Could we infer the traits of a job applicant from the applicant’s written responses, before a formal interview?

(ii) Instead of sifting through a full catalogue of furniture, could we use photographs of furniture to automatically shortlist the pieces that meet a consumer’s requirements?

The above examples rest on the assumption that the input data possesses information that has a meaningful relationship to what we want to predict, i.e. the outputs. For (i), this would mean phrases like “lead by example” and “I’ll do it first” indicate that the applicant is likely to have leadership traits. For (ii), this would mean images of furniture contain information that can be extracted to match consumer requirements such as “wooden legs”, “glass table top” and “at least 3 square metres”. As we can imagine, the power of neural networks can be harnessed to perform such tasks efficiently and reduce costly human effort.

Forcing Relationships using the Neural Network

But what happens if there is no relationship at all between the input information and the output predictions? Although it makes no sense to train such a neural network, could we actually train it?

Let us use the Gaussian distribution to create random input data over a number of features. At the same time, let us create random output categories by sampling uniformly from a fixed set of categories. Essentially, we construct an artificial dataset that contains random inputs and random categorical outputs. We then pretend this is a meaningful dataset and use it to train a neural network.
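
Below is a minimal sketch of this experiment, assuming PyTorch; the sample count, network architecture and training settings are illustrative choices rather than details from the article.

    import torch
    import torch.nn as nn

    def train_on_random_data(n_samples=1000, n_features=60, n_classes=10, epochs=500):
        """Fit a small network to pure noise and return its final training accuracy."""
        # Gaussian inputs and uniformly sampled category labels: by construction,
        # there is no genuine relationship between the two.
        X = torch.randn(n_samples, n_features)
        y = torch.randint(0, n_classes, (n_samples,))

        # A small feed-forward classifier; the article does not specify an architecture.
        model = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        for _ in range(epochs):
            optimizer.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            optimizer.step()

        # Accuracy on the training data itself: the point is memorisation,
        # not generalisation.
        with torch.no_grad():
            accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
        return accuracy

    print(f"Training accuracy on pure noise: {train_on_random_data():.1%}")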

Yes, we are able to train neural networks using random inputs and outputs!

[Figure: training accuracy over time for neural networks trained on randomly generated datasets with different numbers of features]
  1. As we train the neural network for longer, its accuracy at predicting the random categories from the random input data increases. The neural network is capable of forcing spurious relationships between the random inputs and outputs of the training data!
  2. Furthermore, as we increase the number of input features, the neural network achieves a higher final accuracy. With a sufficiently large number of features (240 or more), it can even train to almost 100% accuracy on data that is randomly generated! This is because more input features give the neural network more room to force spurious relationships, as the sweep sketched below illustrates.
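
The trend in the second observation can be reproduced by sweeping the feature count with the train_on_random_data sketch above; the specific counts below are illustrative, with 240 included because that is roughly where the article reports accuracy approaching 100%.

    # Sweep the number of input features, reusing train_on_random_data from above.
    for n_features in [30, 60, 120, 240]:
        accuracy = train_on_random_data(n_features=n_features)
        print(f"{n_features:>3} features -> final training accuracy {accuracy:.1%}")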

Conclusion

Neural networks have the power to model sophisticated, meaningful relationships. But if we are not careful with the dataset, this same power can forcefully create spurious relationships. This is why performance on held-out data, rather than training accuracy, is what tells us whether a learned relationship is real.