Big Tech Starts Taking AI Ethics Seriously, and So Should You


The last couple of months have shown once more that addressing racism and bias is long overdue.

And amid protests about discrimination and police violence, concerns are being raised about the growing role of artificial intelligence systems in our lives. Systems that have been shown to be less neutral than we’d like to believe.

Facial recognition systems, for example, have become the topic of heated discussion again. These systems have been shown to work much better for white people than for people of color and other minorities. In response, Amazon and Microsoft have chosen to stop selling the technology to police.

Why ethics is inseparable from AI systems

As AI systems become more powerful, companies are looking to apply them to an increasing number of industries and problems.

AI is not just being used to detect broken cups anymore, but also to identify promising job candidates, malignant tumors, and repeat offenders.

The problem is: the data used to train these systems is often rife with societal bias, which is repeated in the output. As the scholar Ivana Bartoletti states in her new book An Artificial Revolution: On Power, Politics and AI: “Data reflects the past. Now we are using it to shape the future. That’s a big problem.”

Money — still a man’s world?

A telling example is last year’s scandal involving a partnership between the bank Goldman Sachs and Apple. Together they launched a new credit card that used AI algorithms to calculate each customer’s spending limit.

As soon as the card was launched, male customers started complaining that their wives received much lower credit limits, despite having better credit scores. David Heinemeier Hansson of Basecamp tweeted that he had received a spending limit twenty times as high as his wife’s.

The problem turned out to be a correlation between gender and assigned credit limits in the training data, as men have historically been given much higher limits than women. Even after gender was excluded as a variable from the data set, the model kept producing biased results.
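To make that mechanism concrete, here is a minimal, self-contained simulation. This is a sketch built on made-up numbers and features, not the actual Apple Card model (which has never been made public): a seemingly neutral feature that happens to correlate with gender smuggles the historical bias straight back into the predictions, even though gender itself is never shown to the model.

```python
# Sketch: bias surviving the removal of the sensitive variable.
# All features, coefficients, and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)        # 0 = male, 1 = female (toy coding)
income = rng.normal(60, 15, n)        # same income distribution for both groups

# A seemingly innocent feature (say, a spending-category mix)
# that happens to correlate with gender.
proxy = gender + rng.normal(0, 0.5, n)

# Historical limits were set lower for women at equal income: the bias.
past_limit = 2 * income - 20 * gender + rng.normal(0, 5, n)

# Train WITHOUT gender, using only income and the innocuous-looking proxy.
X = np.column_stack([income, proxy])
model = LinearRegression().fit(X, past_limit)
pred = model.predict(X)

print("mean predicted limit, men:  ", round(pred[gender == 0].mean(), 1))
print("mean predicted limit, women:", round(pred[gender == 1].mean(), 1))
# The gap persists: the model recovers gender through the proxy feature.
```

Dropping the sensitive column is not enough. As long as some other feature encodes it, the model will find the pattern.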

Data judging based on the color of your skin

Using artificial intelligence in law enforcement has led to much controversy as well and for good reason.

“Predictive policing” programs, used in the US to predict recidivism, have been shown to judge black defendants as being at a much higher risk of re-offending, while stating the reverse for white defendants.

In May 2016, the journalism organization ProPublica reported this to be true for COMPAS, a recidivism-prediction algorithm widely used in the United States. And the AI Now Institute, affiliated with New York University, showed that many AI systems used in US law enforcement are trained on “dirty” data sets.
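At its core, ProPublica’s analysis compared error rates between groups: among defendants who did not go on to re-offend, black defendants were far more often labeled high risk. Here is a hedged sketch of that kind of audit; the column names and toy rows below are illustrative stand-ins, not ProPublica’s actual data set.

```python
# Sketch of a per-group error-rate audit (ProPublica-style).
# Column names and data are hypothetical.
import pandas as pd

def false_positive_rates(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Among people who did NOT re-offend, how often were they
    labeled high risk? Computed separately per group."""
    did_not_reoffend = df[df["reoffended"] == 0]
    return did_not_reoffend.groupby(group_col)["high_risk"].mean()

# Toy data for illustration only.
df = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"],
    "high_risk":  [1, 1, 0, 0, 0, 1],
    "reoffended": [0, 1, 0, 0, 0, 1],
})
print(false_positive_rates(df, "race"))
# A large gap between groups is exactly the red flag ProPublica reported.
```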

The risk of hidden correlations in health care predictions

Use of artificial intelligence in health care is on the rise too. And while it has great potential to improve the detection and prevention of diseases, bias in the data carries the risk of underserving minority groups.

An example is a health care risk-prediction algorithm used in US hospitals to identify chronically ill patients who would benefit from more intensive care. The goal was to prevent further complications from arising and to reduce costs in the long run.

But because it used health care costs as a proxy for the amount of care needed, it identified black and minority patients as needing less care than white patients with similar needs.

These disparate results were due to black and minority patients being more likely to be poor and therefore not spending as much on health care as white patients. Experiencing racism has also kept many black patients from seeking the care they need.
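A small simulation makes the proxy problem visible. The numbers below are illustrative assumptions, not the study’s data: when one group spends less on care at the same level of need, ranking patients by cost systematically overlooks that group’s sickest members.

```python
# Sketch: using cost as a proxy for care needed under-serves the group
# that spends less. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)        # 1 = group with reduced access to care
need = rng.normal(50, 10, n)         # true care needed, same in both groups

# Spending understates need for group 1 (less access, less spending).
cost = need * np.where(group == 1, 0.6, 1.0) + rng.normal(0, 2, n)

# The program admits the top 10% by cost (the proxy score).
admitted = cost >= np.quantile(cost, 0.9)

# Among patients who actually need the most care, who gets admitted?
high_need = need >= np.quantile(need, 0.9)
for g in (0, 1):
    mask = high_need & (group == g)
    print(f"group {g}: {admitted[mask].mean():.0%} of high-need patients admitted")
```

The root cause sits in the label, not the model: predicting costs instead of health needs bakes the access gap directly into the score.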

How to deal with these issues?

These examples show how biased data can seep into the outcomes generated by artificial intelligence, and the dire consequences it can have.

Given the huge potential of artificial intelligence to help your business, how can you as a company take steps to address these issues before they arise?

1. Reintroduce the characteristics that are likely to lead to bias

One way to minimize the risk of unfair outcomes is to explicitly account for characteristics in your data set that are likely to lead to bias.

If you are going to use artificial intelligence in law enforcement, human resources, or any other field where bias is likely, there’s a big chance you will have to work with biased data.

The gender-discriminating Goldman Sachs credit card showed that you cannot make an algorithm neutral simply by omitting “risky” variables. Even with gender removed, the bias shows up in the relationships between the remaining variables.

A 2018 collaboration between computer scientists and economists found that reintroducing variables like gender and ethnicity actually allows you to control for, and even reverse, this bias, making the outcomes fairer.

And in a 2019 study, Sean Higgins of Northwestern University showed that reintroducing gender into the Goldman Sachs credit card model would indeed give women much more accurate creditworthiness predictions.
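What “reintroducing the variable” can look like in practice: below is a hedged sketch set in an entirely made-up toy world, where the same feature (a career gap) signals financial risk for men but not for women. A gender-blind model averages the two groups and misjudges women; a gender-aware model scores them far more accurately.

```python
# Sketch: reintroducing the sensitive attribute can improve accuracy
# for the disadvantaged group. The setup is entirely hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
n = 20_000
gender = rng.integers(0, 2, n)                       # 1 = female (toy coding)
career_gap = rng.exponential(1.0, n) * (1 + gender)  # gaps more common for women

# In this toy world, career gaps only signal risk for men
# (for women they often reflect parental leave, not distress).
worthiness = 70 - 8 * career_gap * (1 - gender) + rng.normal(0, 3, n)

X_blind = career_gap.reshape(-1, 1)
X_aware = np.column_stack([career_gap, gender, career_gap * gender])

blind = LinearRegression().fit(X_blind, worthiness)
aware = LinearRegression().fit(X_aware, worthiness)

women = gender == 1
print("error for women, gender-blind:",
      round(mean_absolute_error(worthiness[women], blind.predict(X_blind)[women]), 2))
print("error for women, gender-aware:",
      round(mean_absolute_error(worthiness[women], aware.predict(X_aware)[women]), 2))
```

The interaction term (career_gap * gender) is what lets the model learn a separate relationship per group instead of one misleading average.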

2. Test your AI system with a diverse group of people

Another way to catch discriminatory outcomes in time is to test your AI system with a diverse group of people that accurately reflects your user base.

A larger issue in product development is not testing with a diverse enough sample, or not testing at all. There can be multiple reasons for this, from not having a good understanding of who your customers are to not making testing a priority.

However, investing in testing with the right target groups helps you create a better product that not only treats your users fairly but also gives them the best experience possible.
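One piece of this is easy to automate: checking whether the composition of your test group actually matches your user base. A minimal sketch, with hypothetical segment names and numbers:

```python
# Sketch: flag user segments that are under- or over-represented in the
# test group. Segment names and shares are hypothetical.
import pandas as pd

user_base  = pd.Series({"18-30": 0.35, "31-50": 0.40, "51+": 0.25})
test_group = pd.Series({"18-30": 0.70, "31-50": 0.25, "51+": 0.05})

gap = (test_group - user_base).abs()
print(gap[gap > 0.10])  # segments off by more than 10 percentage points
```

The same idea extends to outcomes: compute your quality metrics per segment rather than as a single average, so a group that is poorly served cannot hide in an overall score.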

3. Become more inclusive & diverse as a company

A long-term solution to make sure you create products that are fair to all is by making inclusion and diversity part of your company culture.

Especially when your customer base is large and diverse, this is incredibly valuable. Your team will be able to spot potential issues in time and design for diversity and fairness from the get-go. On top of that, diverse teams tend to be more innovative and come up with more creative solutions.

It might take longer to implement than the previous suggestions, but it’s worth it, and will make your company more future-proof!

4. Use outcomes from AI systems in collaboration with a professional

Currently, in many cases, it’s wise to use AI systems in combination with the judgment of a professional.

For example: when used to spot diseases, AI systems can surface risk factors that would otherwise go undetected and aid doctors in their understanding of the patient’s health. But the doctor is still in the best position to take all factors into account and give a final diagnosis.

By pairing artificial intelligence with the expertise of seasoned professionals, you combine the benefits of automation and pattern detection with the human ability to synthesize information and make ethical judgments.
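In code, such a human-in-the-loop setup can be as simple as a confidence threshold deciding what is handled automatically and what is deferred to a professional. A minimal sketch; the threshold and labels are illustrative assumptions:

```python
# Sketch: route low-confidence predictions to a human reviewer.
# Threshold and labels are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str          # e.g. "benign" / "malignant"
    confidence: float   # model's probability for that label
    needs_review: bool

def triage(label: str, confidence: float, threshold: float = 0.95) -> Decision:
    """Accept only high-confidence predictions; defer the rest to a doctor."""
    return Decision(label, confidence, needs_review=confidence < threshold)

for d in (triage("benign", 0.99), triage("malignant", 0.80)):
    route = "doctor review" if d.needs_review else "auto-report"
    print(f"{d.label} ({d.confidence:.0%}) -> {route}")
```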

AI—it might not be perfect, but it’s the future

As artificial intelligence becomes more powerful and is implemented in more and more fields, we will probably hit upon more ethical issues that we will have to address.

As you can see, not taking these issues into account can have serious consequences. By being aware of the possible ethical issues around creating AI systems, you can spot them in advance and take the right measures to ensure your products create fair outcomes for all your customers.

Want to know more about AI ethics and how you can go about making sure that you’re implementing AI in the right way? Check out the following resources: