Source: Deep Learning on Medium
Artificial intelligence is transforming life as we know it today
You’re munching on your breakfast cereal. ‘Yesterday, my friend Diana told me that soon we’re all going to be unemployed because of AI,’ you squawk. ‘Is that true?’
‘I don’t think so,’ I reply, and take a sip of my morning coffee. ‘But it might make us more creative.’
‘But Diana said that Artificial Intelligence will be able to make legal contracts or spot cancer,’ you counter. ‘So the people doing that will lose their jobs, right?’
‘They won’t have those specific tasks, that’s for sure,’ I return.
‘But think about how much time they’ll have for other things! A notary could spend more time with their clients instead of drawing up legal documents. A skin doctor could spend more time with their patients or on research instead of staring at pictures of moles.’
You loudly crunch on your cereal. ‘Yeah but won’t it be naive to think that everyone will find something else to do?’ you inquire with your mouth full.
‘It would be equally naive to think that all these people will be out of work,’ I retort. ‘Two hundred years ago, 80 percent of all people were farmers. Now it’s less than one percent. Does that mean that 79 percent of people are unemployed today?’
‘Of course not,’ you grumble.
‘Exactly,’ I continue and take another sip of coffee. ‘It’s just that repetitive tasks are being automated away. That frees our time for other activities. Activities that require creativity and that are fun to do.’
‘So do you think that with AI we’re heading towards a future that is all fun and games?’ you hit back.
‘That depends on whether we manage to make AI work for all of us,’ I respond. ‘A lot of algorithms are really biased these days. That needs to change.’
‘I’ve heard about that,’ you comment. ‘But I don’t really understand it. I mean, it’s a machine. How can a machine be racist, for example?’
‘There are a bunch of reasons why a machine could exhibit racist behavior,’ I respond. ‘The engineer who makes the algorithm could have biases themselves and implement them in the algorithm. Programmers might not even be aware of this — after all, most of them are still straight white males.’
‘Yeah but not all straight white males are bigots or racists,’ you point out.
‘Of course not,’ I nod. ‘But when the vast majority of your officemates are straight, white and male, you can feel really out of place if you don’t conform to that.’
You slurp on your almond milk, then lick your lips. ‘You mean, some developers might be building AI to preserve white heteropatriarchy?’
‘And they can do it because there are not enough queer, black, female or non-binary programmers to counteract it,’ I add.
‘That’s horrible!’ you blurt out. ‘But what if we had a really diverse field of AI developers — then we surely wouldn’t get racist results?’
‘It depends. Even with a diverse team of developers, you could get a racist AI. For example, if you feed garbage data to an algorithm, its output will also be garbage,’ I reply. ‘AI algorithms rely on huge amounts of data to learn from — and if that data is biased, you get biased outcomes.’
No algorithm can create good things from crappy data
‘Take for example Microsoft’s Twitter bot, Tay,’ I continue. ‘It was designed to speak like an 18- to 24-year-old and engage in innocent conversations. But less than 24 hours after launch, it was spouting sexist, racist and antisemitic nonsense!’
If your eyes looked a bit dopey earlier on, they’re wide open now. ‘What the heck,’ you interject.
‘Yeah. But OK, those are Twitter conversations, and even though they surely hurt, AI can cause more harm than that. It can have life-altering consequences,’ I continue.
‘For example, judges are using AI to help them with criminal sentencing. But in the United States, a black man is five times more likely to be imprisoned than a white man. If we feed historical sentencing data like that to an algorithm, it will learn to suggest more severe punishments for black men.’
‘So basically AI gets racist because humans have been racist for so long?’ you ask.
‘Precisely,’ I affirm. ‘This can happen because of biased data, or missing data. For example, when you search for “woman” on Google, you get a lot of white women and few others. Do it now!’
You tap around on your phone, then look back up to me. ‘You’re right!’ you exclaim.
‘Now imagine that you teach a computer what a woman looks like, based on this data,’ I prompt.
You give me a wavering look and shovel another spoon of cereal into your mouth. ‘It will learn that only light-skinned women are real women?’ you finally answer.
‘Bingo. And what happens now if you show a picture of a dark-skinned woman to your algorithm?’ I ask.
‘Erm. Crazy things?’ you counter.
‘Yep, crazy things,’ I reply. ‘The algorithm will sort the picture of your dark-skinned woman into a different category. This is why a Google algorithm mislabeled two black people as gorillas! The developers of the algorithm were probably not bad people, but their dataset was biased.’
Your spoon drops into your bowl with a loud clatter. ‘That’s crappy!’ you blurt out.
‘Yes,’ I admit. I take my last glug of coffee and put my mug into the sink. ‘But we have the chance to fix that now.’
‘We’re all doomed,’ you groan and slurp your glass empty.
‘It depends,’ I respond. ‘We could use this opportunity to learn from our past mistakes. There are lots of different things we could work on, but in my opinion it boils down to three key factors.’
‘Tell me,’ you prompt, putting your bowl into the sink.
‘Number one — we need to make sure that the algorithms are not racist themselves. For that, we need to invest in diversity in AI development. In all of tech, in fact. We need to crack down harder on racist attacks and sexual harassment in the workplace. We need to remove all the obstacles so that non-straight, non-white, non-male people can realize their full potential while working in tech.’
‘Number two?’ you prod.
‘We need to make sure we use unbiased datasets to feed our algorithms. First, we need to establish a gold standard of what an unbiased dataset is. Second, if we only have biased datasets — which is often the case — we need to develop tools to correct them. Third, we need to hold developers accountable if they use datasets that don’t meet our standards.’
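One concrete way to “correct” a biased dataset, as point two proposes, is reweighting: give each example a weight inversely proportional to its group’s frequency, so every group carries equal total weight during training. A minimal sketch, with purely hypothetical group labels and counts:

```python
# Toy sketch of dataset reweighting: over-represented groups get small
# weights, under-represented groups get large ones, so each group
# contributes the same total weight.
from collections import Counter

def group_weights(groups):
    """Weight each example by total / (n_groups * count of its group)."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

# A dataset where group "a" is heavily over-represented:
groups = ["a"] * 8 + ["b"] * 2
weights = group_weights(groups)

# Each "a" example gets weight 0.625, each "b" example 2.5;
# both groups now sum to 5.0, half of the total weight each.
print(weights)
```

Reweighting is only one tool among several (resampling and collecting more data from under-represented groups are others), and none of them removes the need for the accountability standards the dialogue calls for.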
You nod. ‘And number three?’ you push.
‘Lots of people work in monotonous jobs and have completely forgotten about their creativity. We need to invest in institutional creativity training for people who would lose their monotonous jobs to AI. And we need to build a system where human creativity is valued. When AI is everywhere, a creative job cannot be a risky career path any more — it needs to be a sustainable source of income. The government and its institutions need to ensure that we create a fair system for that.’
‘Amen,’ you smirk.
‘Now off you go,’ I beckon as I open the house door. ‘AI hasn’t automated your job yet!’