Original article was published on Artificial Intelligence on Medium
Debiasing AI: Towards Equitable and Accountable AI
Bias has been an undercurrent of our society for a long time. People have tended to grudgingly live with it, but every now and then certain events catapult it into the public consciousness.
Currently, the disproportionate deaths among Black and Minority Ethnic (BME) communities, compounded by the disproportionate impact of the COVID-19 lockdown on people from the lower economic strata, have brought this into focus. All of it has boiled over with the worldwide protests following the murder of George Floyd by the police. What these protests are trying to achieve is meaningful change in public policy, changes that help remove systemic bias. As and when those changes are agreed and legislated, they will need to be codified into various systems, ranging from systems that scan job applications to those that determine the risk profile of patients. So we will need to de-bias the formulation, execution and evaluation of public policy using AI.
“Debiasing humans is harder than debiasing AI systems.” — Olga Russakovsky, Princeton
We know this can be challenging, as no technology is good or bad in itself. The main challenge with AI is that bias gets built into the systems: historical data is used to train AI and ML systems, and that data is inherently biased because of human bias. AI algorithms thus inherit the ambiguity of human decision making and its ideological clashes. Other sources of bias include biased human decisions, historical bias, and social inequities. The difficulty with human bias is that most of our preferences are so ingrained that we often don't recognise them ourselves.
A few examples of this bias:
- Natural Language Processing models trained on news articles have tended to demonstrate gender bias.
- Facial analysis algorithms have had higher error rates for minorities, and more so for minority women, because of unrepresentative training data.
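The second example above was uncovered by auditing a classifier's error rate per demographic group rather than in aggregate. A minimal sketch of that kind of audit, in plain Python with hypothetical group labels and made-up predictions:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the per-group error rate of a classifier.

    records: iterable of (group, predicted, actual) tuples.
    An aggregate accuracy figure can hide a model that fails
    badly on one group; breaking errors out per group exposes it.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical predictions: the model is perfect on group_a
# but wrong half the time on group_b.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]
print(error_rates_by_group(results))  # {'group_a': 0.0, 'group_b': 0.5}
```

The overall error rate here is 25%, which looks tolerable until the per-group breakdown shows all of the errors falling on one group.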
So it is essential to train AI and ML systems to recognise and eliminate bias. The notion that AI is the culprit is just a mechanism for us to absolve ourselves of responsibility.
The following are a few mechanisms that can be considered for removing bias in AI:
- De-bias relative to humans: We need to unbias the data by disregarding variables that do not accurately predict the outcome or that bias the result. An example is not using race or gender in an algorithm that determines a candidate's suitability for a job or a government scheme. The advantage of using algorithms is that, by eliminating the bias-inducing variables, they help reduce the subconscious bias to which humans are prone.
- A contrary approach is not to remove the bias-causing variables but to use them to understand the bias built into the dataset. For example, an algorithm can detect the gender bias in the news articles mentioned above and mitigate against it.
- AI algorithms can be used as a positive force by ensuring transparency and auditability. Debiasing is not a one-time activity but an ongoing approach. Analytics on the outcomes of AI algorithms enables the detection of correlations with bias-inducing variables and mitigation against them.
- Bias detection can also be improved by using red teams. In a cybersecurity context, red teams focus on penetration testing of systems and their security programs. Similar teams can be used to probe AI algorithms for bias.
- The way to deal with this bias at a fundamental level is to educate data scientists and sensitise them to how bias gets introduced and how it can be detected.
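The first and third mechanisms above can be sketched in a few lines of plain Python. This is only an illustration under assumed data: the field names (`race`, `gender`, `group`, `hired`) and the records are hypothetical, and the audit shown is the simple "four-fifths rule" selection-rate comparison, one common heuristic among many.

```python
from collections import defaultdict

def drop_protected(rows, protected=("race", "gender")):
    """Strip protected attributes from each record before training,
    so the model cannot condition on them directly."""
    return [{k: v for k, v in row.items() if k not in protected}
            for row in rows]

def selection_rate_ratio(decisions, group_key="group", outcome_key="hired"):
    """Ongoing outcome audit: ratio of the positive-outcome rate of the
    least-favoured group to that of the most-favoured group.
    A ratio below 0.8 (the 'four-fifths rule') is a common warning sign."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for d in decisions:
        totals[d[group_key]] += 1
        positives[d[group_key]] += int(d[outcome_key])
    rates = [positives[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# Hypothetical applicant records and hiring decisions.
applicants = [{"gender": "f", "skill": 7}, {"gender": "m", "skill": 5}]
print(drop_protected(applicants))  # [{'skill': 7}, {'skill': 5}]

decisions = [
    {"group": "m", "hired": 1}, {"group": "m", "hired": 1},
    {"group": "m", "hired": 0}, {"group": "f", "hired": 1},
    {"group": "f", "hired": 0}, {"group": "f", "hired": 0},
]
print(selection_rate_ratio(decisions))  # 0.5, well below the 0.8 flag
```

Note that dropping the protected column alone is not sufficient: correlated proxy variables (postcode, school) can reintroduce the bias, which is exactly why the ongoing outcome audit in the third mechanism is still needed.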
Technology and humans need to help each other to remove biases and play their part in moving towards a more equitable society. Artificial Intelligence (AI) and Emotional Intelligence (EI) need to come together to remove bias and deliver equitable and accountable AI.
Would love to hear your thoughts in the comments below about how you have sought to de-bias AI.