Dear Humans, Are we transferring our biases to the AI systems?

Originally published by Irfan Ali in Artificial Intelligence on Medium.



Photo by Photos Hobby on Unsplash

AI (Artificial Intelligence) is now an important partner in decision support for humans and a key driver of automation, powered largely by machine learning.

However, there have been incidents time and again that raise the question:

Does AI reduce the conscious and unconscious bias that is often prevalent in decision making?

Google Photos Tags Two African-Americans As Gorillas Through Facial Recognition Software

Why It’s Totally Unsurprising That Amazon’s Recruitment AI Was Biased Against Women

Machine-bias in risk assessments for criminal sentencing

Apple Card’s Gender-Bias Claims Look Familiar to Old-School Banks

In 2019, a tech entrepreneur took to Twitter to claim that Apple Card gave him a credit limit 20 times that of his wife, despite her having a higher credit score. This triggered a Twitterstorm in which many people, including Apple co-founder Steve Wozniak, reported facing the same bias. He commented, “The same thing happened to us. I got 10x the credit limit. We have no separate bank or credit card accounts or any separate assets.”

Over time, it emerged that the credit-card algorithm had learned a pattern of offering men higher credit limits, with no supporting factors to justify the decision.

In a similar case from 2014, Amazon set up a team in Scotland to build an AI recruitment tool. While building the model’s dataset, the team created 500 models to parse past résumés from Amazon’s database and pick out more than 50,000 key terms. The system would then scan the web to recommend candidates. The envisioned end state was that, once résumés were loaded into the AI system, it would return a ranked list of top candidates whom Amazon could hire right away.

Within a year of operation, the team started to encounter a pattern that was troubling and hard to ignore — the system did not like women. This was apparently because the original database consisted predominantly of male résumés submitted to Amazon over a 10-year period. The recruitment tool therefore concluded that men were preferable.

Even with the team tweaking the system to remedy the AI model’s gender bias, it was uncertain whether they could preempt or eliminate new machine biases that would unfairly discriminate against candidates.

As a result, Amazon abandoned this AI recruitment tool in 2017.

In the US, a leading eCommerce portal decided to exclude certain neighborhoods from its same-day delivery system. The system that produced the decision took these factors into account:

a. The area/ZIP code had to have a sufficient number of premium members.

b. They had to be near a warehouse.

c. There had to be enough people willing to deliver to that ZIP code.

While this was strategically critical for the organization, it resulted in the exclusion of poor and predominantly African-American neighborhoods. Even though this was unintended, the results discriminated against racial and ethnic minorities.

Looking across these incidents, the most significant root causes are:

1. What — The data used to train the AI system:

Many AI systems today are fed high volumes of data, which act as the “brain” or “memory” of the machine.

Data reflects the social, historical and political conditions in which it was created.

If a dataset is skewed, that skew shows up as discrimination in the AI system’s results.

Insufficient training data is another cause of algorithmic bias. If the data used to train the algorithm is more representative of some groups of people than others, the model’s predictions may systematically ignore unrepresented or underrepresented groups.

For example, suppose an AI system is trained on thousands of dog pictures with the intent that it can identify a dog’s breed as soon as it sees one — all based on the pre-loaded data. If the images used for training are 80% Golden Retrievers and 20% Dalmatians, this introduces a data bias: the model will be far better at recognizing Retrievers than Dalmatians.
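To make that concrete, here is a minimal sketch in plain Python of how such an imbalance can be surfaced before training. The class_balance_report helper and its warning threshold are hypothetical, not part of any particular toolkit:

```python
from collections import Counter

def class_balance_report(labels, warn_ratio=2.0):
    """Print each class's share of the dataset and warn on heavy imbalance.

    `labels` is one class name per training example (here, the breed of each
    dog image); `warn_ratio` is an illustrative threshold for how much larger
    the biggest class may be than the smallest before we flag it.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    for cls, n in counts.most_common():
        print(f"{cls:>20}: {n:5d} ({n / total:.0%})")
    if max(counts.values()) / min(counts.values()) > warn_ratio:
        print("WARNING: classes are heavily imbalanced -- "
              "consider rebalancing or collecting more data.")

# The 80/20 split from the dog-breed example above:
class_balance_report(["golden_retriever"] * 800 + ["dalmatian"] * 200)
```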

In short, AI systems come to reflect the data they are trained on. That data may have historical bias baked in, or it may underrepresent certain sections of the population.

2. Who — The people who train the AI systems:

The AI Now Institute has also found a lack of diversity among the people who are creating AI systems.

In a study from April 2019, they found that only 15% of the AI staff at Facebook are women and only 4% of their total workforce are black. Google’s workforce is even less diverse, with only 10% of their AI staff being women and 2.5% of their workers black.

When certain races or genders are underrepresented among the people building and training an AI system, the perspectives of those groups are more easily missed, and that blind spot tends to be amplified over the lifetime of the system in production.

3. How — “Black-Box” AI

Most AI algorithms are not open source; they are proprietary to the companies that create them.

“Algorithms are not only nonpublic, they are actually treated as proprietary trade secrets by many companies… they can evolve in real time with no paper trail on the data, inputs, or equations used to develop a prediction,” says Rohit Chopra, an FTC commissioner.

“Even if we could see them, it doesn’t mean we would understand. It’s difficult to draw any conclusions based on source code,” says Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project and a Shorenstein Fellow at Harvard University.

“Apple’s proprietary creditworthiness algorithm is something that not even Apple can easily pin down and say, ‘Okay, here is the code for this,’ because it probably involves a lot of different sources of data and a lot of different implementations of code to analyze that data in different siloed areas of the company.”

While it is clear that AI will become the norm for business operations in the future, the algorithms that fuel these systems are becoming more complex and at times inexplicable, which has led to the metaphor of “black-box” AI.

Add dataset bias to this opacity, and there are many more layers of complexity to work through before existing AI systems can be made smarter and free of bias. Judging by the incidents we keep encountering, that still seems a far-off reality.

Many organizations have tasked themselves with solving this issue, among them the AI Now Institute, the Brookings Institution, and AlgorithmWatch.

Bosch, for example, has pioneered company guidelines for the use of AI (Link). This is also foundationally important for Bosch: by 2025, it aims for all of its products to either contain AI or have been developed or manufactured with its help. Michael Bolle, the CDO/CTO of Bosch, says,

“If AI is a black box, then people won’t trust it. In a connected world, however, trust will be essential.”

In the immediate future, we will see organizations reset their mindsets to bring AI into their products and services with ethics and without bias.

The larger question that begs an answer, if there is one, is: what is the remedy?

While there is no universal framework yet, and non-discrimination and other civil rights laws are being applied to redress online disparate impacts, it is necessary today to establish a primer of guidelines and practices that identifies and mitigates AI risks and biases.

Taking a leaf out of the Brookings Institution’s paper on algorithmic hygiene, Bosch’s AI ethics guidelines, and tools from Google, IBM, and many others, here is a non-exhaustive list of primer points to overcome the AI bias “Frankenstein” staring at us right now:

1. Develop a Bias Impact Statement: Algorithm operators should develop a bias impact statement, a template of questions that can flexibly be applied to guide them through the design, implementation, and monitoring phases and to assess the algorithm’s purpose, process, and production, where appropriate.

As a self-regulatory practice, the bias impact statement can help probe and avert any potential biases that are integrated into, or result from, the algorithmic decision.

AI Now Institute has already introduced a model framework for governmental entities to create Algorithmic Impact Assessments (AIAs) that evaluate the potential detrimental effects of an algorithm in the same manner as environmental, privacy, data, or human rights impact statements.

2. Add Bias Testing During Algorithm Development: A question algorithm developers should always ask themselves is: will we leave some groups of people worse off as a result of the algorithm’s design or its unintended consequences? One simple way to make that question measurable is sketched below.
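As one illustration (not any particular company’s method), a demographic-parity check compares the rate at which a model selects people from each group. The column names and toy data below are hypothetical:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. 'recommended for hire') per group."""
    return df.groupby(group_col)[outcome_col].mean()

def passes_four_fifths_rule(rates: pd.Series, threshold: float = 0.8) -> bool:
    """The lowest group's selection rate should be at least `threshold`
    (80% by convention) of the highest group's rate."""
    return rates.min() / rates.max() >= threshold

# Hypothetical scored applicants: selected = 1 means the model recommends them.
applicants = pd.DataFrame({
    "group":    ["men"] * 6 + ["women"] * 6,
    "selected": [1, 1, 1, 1, 0, 1,  1, 0, 0, 0, 1, 0],
})
rates = selection_rates(applicants, "group", "selected")
print(rates)                                                  # men ~0.83, women ~0.33
print("passes four-fifths rule:", passes_four_fifths_rule(rates))  # False -> investigate
```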

3. Incentivize Algorithmic Hygiene: Incentives should also drive organizations to proactively address algorithmic bias. Operators who create and deploy algorithms that generate fair outcomes should be recognized by policymakers and by consumers, who will trust them more for their practices.

When companies exercise effective algorithmic hygiene before, during, and after introducing algorithmic decision-making, they should be rewarded and potentially given a public-facing acknowledgement for best practices.

4. Infuse and Ensure Diversity: Operators of algorithms should also consider the role of diversity within their work teams, training data, and the level of cultural sensitivity within their decision-making processes.

Employing diversity in the design of algorithms can surface and potentially avert harmful discriminatory effects on certain protected groups, especially gender, racial, and ethnic minorities. While the immediate consequences of bias in these areas may be quantitatively insignificant, the sheer volume of digital interactions and inferences can amount to a new form of systemic bias.

Therefore, the operators of algorithms should not discount the possibility or prevalence of bias. They should seek to have a diverse workforce developing the algorithm, integrate inclusive perspectives into their products, and employ “diversity-in-design,” where deliberate and transparent actions are taken to ensure that cultural biases and stereotypes are addressed upfront and appropriately.

Adding inclusivity into the algorithm’s design can help vet its cultural inclusivity and sensitivity for various groups and help companies avoid algorithmic outcomes that could prove litigious and embarrassing.

5. Regular Audit for Bias: The formal and regular auditing of algorithms to check for bias is another best practice for detecting and mitigating bias. Audits prompt the review of both input data and output. When done by a third-party evaluator, they can provide insight into the algorithm’s behavior. While some audits may require technical expertise, this may not always be the case.

Facial recognition software that misidentifies persons of color more often than whites is an instance where a stakeholder or user can spot biased outcomes without knowing anything about how the algorithm makes decisions, as the sketch below illustrates.
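As a minimal sketch, assume the auditor has a decision log recording each person’s group, the system’s prediction, and the true outcome (the column names below are hypothetical). Comparing false-positive rates across groups then requires nothing but the system’s inputs and outputs:

```python
import pandas as pd

def false_positive_rate(decisions: pd.DataFrame) -> float:
    """FPR = share of actual negatives that the system wrongly flagged."""
    negatives = decisions[decisions["actual"] == 0]
    return float((negatives["predicted"] == 1).mean())

# Hypothetical audit log: one row per automated decision.
log = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1,   0,   0,   0,   1,   1,   0,   1],
    "actual":    [0,   0,   1,   0,   0,   0,   1,   0],
})
fpr_by_group = log.groupby("group")[["predicted", "actual"]].apply(false_positive_rate)
print(fpr_by_group)                                         # A: 0.33, B: 1.00
print("FPR gap:", fpr_by_group.max() - fpr_by_group.min())  # large gap -> biased outcomes
```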

6. Unravel the AI Algorithm’s Complexity: The need for explainable AI is the new imperative for AI operators and developers. Some of the tools now available can help: Google’s What-If Tool offers largely code-free visual probing of a model’s data and predictions, while IBM’s AI Fairness 360 Open Source Toolkit provides a library of fairness metrics and bias-mitigation algorithms. Both can help break down the complexity of the algorithm.
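As a rough sketch of what this looks like in code, and assuming the aif360 Python package is installed, AI Fairness 360 can report standard group-fairness metrics on a tiny, invented hiring dataset in a few lines:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical historical hiring outcomes; sex is encoded as 1 = male, 0 = female.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
# Ratio of favorable-outcome rates between groups; values far below 1.0 signal bias.
print("disparate impact:", metric.disparate_impact())
# Difference in favorable-outcome rates; 0.0 would mean parity.
print("statistical parity difference:", metric.statistical_parity_difference())
```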

In addition to the above, widespread algorithmic literacy is crucial for mitigating bias. Given the increased use of algorithms in many aspects of daily life, all potential subjects of automated decisions would benefit from the knowledge of how these systems function.

Just as computer literacy is now considered a vital skill in the modern economy, understanding how AI systems use their data may soon become necessary.

In conclusion: We have witnessed that AI models can develop biases and behave in the same ways as their human creators, much as children adapt their thought processes to the environment, experiences, and data points they absorb.

Hence, it is important for organizations and their AI developers to be extremely cognizant of the data being used to train an AI system, of who trains it, and of how the right checks and balances are established to prevent these biases from emerging.

After all, it is also our responsibility as a society and industry to re-learn from past AI model outputs to make future models more robust, ethical, and fair.