Machine Learning for Ambiguity

Beyond Achieving Top Accuracy

We are familiar with the idea of using machine learning to make predictions and inferences with high accuracy. This is, after all, a big part of what machine learning is expected to do.

Interestingly and importantly, we can get more out of machine learning models. Beyond using a model to make highly accurate predictions, we can use it to create uncertainty, ambiguity or even contention.

Don’t We Like Clarity and Certainty?

Not all the time. There are good reasons to want the opposite:

  • Testers may want to create uncertain examples so they can stress-test how a system behaves, and what it decides, on borderline inputs.
  • Designers may want to visualise a hybrid prototype that combines existing products but does not yet have a definite specification.
  • Trainers may want to create ambiguous content with no straightforward answers, which encourages participants to engage in debate.

The applications are plentiful.

Real Example

Let us use the MNIST dataset, which contains images of the ten digits from 0 to 9. We then train a model to at least 98% accuracy, that is, a model that correctly maps images to their corresponding digits at least 98% of the time.
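
The article does not include training code. As a rough, minimal sketch (assuming a Keras/TensorFlow setup; the layer sizes and epoch count are illustrative choices, not the author's), a small convolutional network along these lines typically reaches about 98–99% test accuracy on MNIST:

```python
# Minimal sketch (not from the original article): a small convolutional
# network that usually reaches ~98-99% test accuracy on MNIST.
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1], adding a channel dimension.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```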

Instead of stopping here, we take the model further: we use it to make the digits ambiguous.

Let us confuse the digit “3” with “8”.

The machine learning model filled in the blanks. It added two spots to the left of the 3. Now it would not be unreasonable for a person to argue that this 3 is an 8.

Let us confuse the digit “4” with “9”.

The model attempted to build a roof over the 4 to make it closer to a 9. The model is careful not to complete the entire roof, since its goal is to leave the digit uncertain between 4 and 9.

Let us confuse the digit “6” with “5”.

The model cut the 6 in the middle and pulled out the resulting loose end to create the hook of a 5. It also sketched a short stroke at the top of the 6 to make it look more like a 5. It is now uncertain whether the digit is a 6 or a 5.

How Does the Model Do It?

Having been trained to 98% accuracy, the model has learned what digit images should look like. We can then ask it to turn that knowledge around and show us what uncertain, ambiguous or contentious digits look like.

For example, it is akin to asking, “Could you create something in between a 5 and a 6, since you already know what each looks like?”
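
The article does not spell out the method it uses. One plausible way, sketched below assuming the Keras model trained earlier, is to treat this as an optimisation over the input image: start from a real digit and nudge its pixels by gradient descent until the model assigns roughly equal probability to the two target classes. The helper name make_ambiguous, the 50/50 target distribution, the step count and the learning rate are all illustrative assumptions, not the author's method.

```python
# Sketch of one possible approach (the article does not specify its method):
# starting from a real "3", adjust the pixels by gradient descent so the
# trained model assigns roughly equal probability to "3" and "8".
import tensorflow as tf

def make_ambiguous(model, image, class_a=3, class_b=8, steps=200, lr=0.05):
    # Target distribution: 50% class_a, 50% class_b, 0% everything else.
    target = tf.constant([[0.5 if i in (class_a, class_b) else 0.0
                           for i in range(10)]], dtype=tf.float32)
    x = tf.Variable(image[None, ...])            # shape (1, 28, 28, 1)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            probs = model(x, training=False)
            # Cross-entropy between the 50/50 target and the model's output.
            loss = tf.keras.losses.categorical_crossentropy(target, probs)
        grad = tape.gradient(loss, x)
        x.assign_sub(lr * grad)                  # move pixels to reduce the loss
        x.assign(tf.clip_by_value(x, 0.0, 1.0))  # keep valid pixel values
    return x[0].numpy()

# Example: take a test image labelled 3 and make it ambiguous with 8.
idx = int((y_test == 3).argmax())
ambiguous_digit = make_ambiguous(model, x_test[idx])
```

If the optimisation behaves as intended, plotting ambiguous_digit should show an image the classifier rates as roughly equally likely to be a 3 or an 8, which is the kind of contentious example described above.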

Last Thoughts

There are many good motivations for creating uncertainty, ambiguity or even contention, some of which have been described above. Almost surely, there are also ways that uncertainty, ambiguity and contention can be used to harm or exploit. The following question gives us a better appreciation of the risk: could uncertain data be used as an attack against my automated system and cause it to produce an undesired decision?