Ethics of Artificial Intelligence

Original article was published by Abhishek Dabas on Artificial Intelligence on Medium


Bias & Fairness

Check out my previous blogs if you want to read more about topics such as what bias is, its real-world use cases, and the methods for mitigating it.

  1. Bias in AI
  2. Algorithmic Bias in real-world
  3. Bias Mitigation — Methods

We will conclude this overview of Bias & Fairness in AI with some key takeaways:

  1. A lot of people think that AI is some kind of magic, but it is not!! It reflects its training data. An AI system takes examples (datasets) and learns from them. The essential thing is to know when and where to use AI models. “If there is nothing to learn from, learning is impossible.”
  2. The best way to check a model or system is to test it!!! (just like we test everything else: to get a driving license you take a driving test, to prove your driving skills are good enough for the road). We trust what we have experienced before: if a calculator works well and gives us the right results, we don’t need to understand how it works; we simply trust it, because it has never failed us!! There are many methods to do this — check here.
  3. Researchers should focus on the long-term goal of ensuring that AI systems can make decisions effectively, considering the impact the technology can have on the real world in the near future!! Such attention can drive research and standards that reduce bias in AI.
  4. There are still a lot of biases that we are not even aware of. More than 180 human biases have been identified. Read more here.
  5. Bias can never be totally removed. Even the attempt to remove bias creates biases of its own — achieving a completely bias-free world is a myth!!
  6. Because there are different kinds of bias and it is impossible to minimize all of them simultaneously, there will always be a trade-off. The best approach has to be decided on a case-by-case basis, by carefully considering the potential harms of using the algorithm to make decisions.
  7. Machine learning, by nature, is a form of statistical discrimination: we train machine learning models to make decisions (to discriminate between options) based on past data. I think it’s debatable!!
  8. The more we learn about bias in AI, the more we learn about bias in humans, which can ultimately help us humans make fairer decisions.
  9. “In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality. Without the concepts of time, space, and causality, much of common sense is impossible.” — New York Times
  10. Taking this problem into consideration, we face a choice. We can stick with today’s approach to A.I. and greatly restrict what the machines are allowed to do (we end up with autonomous-vehicle crashes and machines that perpetuate bias rather than reduce it). Or we can shift our approach to A.I. in the hope of developing machines that have a rich enough conceptual understanding of the world that we need not fear their operation.
  11. If the underlying data is inherently biased or doesn’t contain a diverse representation of the target groups, the AI algorithms cannot produce accurate and fair outputs.
  12. ML Models should reflect the data and the data should reflect the reality.
  13. A lot of companies are working on building Responsible AI, which shows that the future is bright for AI.
  14. AI Now: A research institute at New York University that examines the social implications of artificial intelligence. Bias and inclusion is one of their core themes for research.
  15. The EU General Data Protection Regulation contains a right to explanation, which has raised concerns about building accountable and responsible systems.
  16. Data really, really matters; we need to do things like:
  • Understanding the skews and correlations within the data
  • Testing across multiple training and test datasets
  • Combining data from multiple sources
  • Specifying held-out test sets for hard use cases
  • Using domain expertise to identify additional signals
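The fairness trade-off in takeaway 6 can be made concrete with a small sketch. The toy data, function names, and metric choices below are illustrative assumptions, not from the original post: the example computes two common group-fairness metrics — the demographic parity difference (gap in positive-decision rates between groups) and the equal-opportunity difference (gap in true-positive rates) — and shows that a classifier can satisfy one while violating the other.

```python
# Toy illustration: two group-fairness metrics can disagree, so
# minimizing one does not automatically minimize the other.
# y_true: actual outcomes, y_pred: model decisions, group: protected attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

def selection_rate(preds, grp, g):
    """Fraction of members of group g that received a positive decision."""
    idx = [i for i, x in enumerate(grp) if x == g]
    return sum(preds[i] for i in idx) / len(idx)

def true_positive_rate(truth, preds, grp, g):
    """Among members of group g with a true positive outcome,
    the fraction the model predicted positive."""
    idx = [i for i, x in enumerate(grp) if x == g and truth[i] == 1]
    return sum(preds[i] for i in idx) / len(idx)

# Demographic parity difference: gap in positive-decision rates.
dp_diff = abs(selection_rate(y_pred, group, 0)
              - selection_rate(y_pred, group, 1))

# Equal-opportunity difference: gap in true-positive rates.
eo_diff = abs(true_positive_rate(y_true, y_pred, group, 0)
              - true_positive_rate(y_true, y_pred, group, 1))

print(f"demographic parity difference: {dp_diff:.2f}")  # 0.00 — looks fair
print(f"equal opportunity difference:  {eo_diff:.2f}")  # 0.17 — yet unequal
```

Here both groups receive positive decisions at the same rate, so demographic parity holds exactly, while qualified members of one group are still approved less often than the other — exactly the kind of trade-off that must be weighed case by case.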