4 Reasons Why Good Technology Goes Bad

And why the Responsible Technology Assessment can help counteract bad technology

Driven by a vision to change the world for the better, most entrepreneurs set out with not just good intentions, but the very best of intentions, wanting only good outcomes for themselves as well as for others. But as the old saying goes: the road to hell is paved with good intentions.

The moral of this proverb isn’t that we are doomed to create chaos no matter how good our intentions are. The moral is that despite our best efforts to solve pressing social issues and environmental problems, we sometimes find ourselves doing more harm than good.

This is very much the case with modern technology and many of the digital applications we use today. Many technology solutions are built with the greater good in mind. Yet as businesses scale and grow, some solutions change and morph in ways that leave them capable of inflicting varying degrees of harm on people (often their own customers), eventually changing society in ways the founders never intended nor anticipated.

There are many reasons why this happens. Underlying most of them is the fact that some elements of technology are inherently problematic. Algorithms, for example, inherit whatever biases are present in the data fed into them, simply because their function is to make decisions and produce outputs based on that data. To address the possible risks and harms associated with technology, the following four areas of concern deserve consideration.
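
To make the point concrete, here is a minimal sketch (synthetic data and a hypothetical hiring scenario, purely illustrative): a model trained on historically skewed decisions reproduces the skew, even though the underlying qualification is identically distributed across groups.

```python
# Minimal sketch: synthetic, hypothetical hiring data in which group 1 was
# historically hired less often at equal skill. The model happily learns
# and reproduces that disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # demographic attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)    # identically distributed in both groups

# Historical labels carry the bias: an extra penalty applied to group 1.
hired = (skill + rng.normal(0.0, 1.0, n) - 0.8 * group) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

for g in (0, 1):
    X_g = np.column_stack([skill[group == g], group[group == g]])
    print(f"group {g}: predicted hire rate = {model.predict(X_g).mean():.2f}")
# The output shows a markedly lower predicted hire rate for group 1:
# the algorithm is "biased by default" because its training data is.
```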

1. Bias & Discrimination

In the U.S., courts in states like New York and California use risk assessment algorithms to predict the likelihood of a defendant re-offending once they are released from jail. This likelihood is also referred to as the defendant’s recidivism risk.

One such tool is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). Developed by a for-profit company, the software uses scores derived from 137 questions to classify criminal defendants as either low or medium/high risk. Courts use the algorithm’s output to inform decisions such as whether a defendant awaiting trial should be released on bail.

An independent investigation by ProPublica uncovered some worrying data. Trained on data drawn from a structurally racist system, the COMPAS algorithm produced starkly unequal errors: black defendants who did not go on to re-offend were falsely classified as medium/high risk at roughly twice the rate of white defendants, while white defendants who did re-offend were mislabelled as low risk about twice as often.

Paradoxically, within each risk category the proportion of defendants who actually re-offended was roughly the same across races: a black defendant labelled medium/high risk was about as likely to re-offend as a white defendant with the same label, and likewise for low risk. In other words, the score was calibrated across races, yet its errors fell far more heavily on black defendants, who were far more likely to be wrongly flagged as risky.
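
This tension between calibration and error rates is easier to see with numbers. The figures below are a toy illustration (invented numbers, not the real COMPAS data): if a score is equally well calibrated in two groups whose underlying re-offence rates differ, the false positive rates diverge.

```python
# Toy illustration (invented numbers, not the real COMPAS data): equal
# calibration within each group, different base rates, unequal error rates.

def false_positive_rate(n, reoffence_rate, flagged_share, precision):
    """precision = share of those flagged high risk who actually re-offend."""
    flagged = n * flagged_share
    false_positives = flagged * (1 - precision)      # flagged, never re-offend
    non_reoffenders = n * (1 - reoffence_rate)
    return false_positives / non_reoffenders

# In both groups, 60% of flagged defendants re-offend (the score is calibrated).
for name, base_rate, flagged_share in [("group A", 0.5, 0.55),
                                       ("group B", 0.3, 0.30)]:
    fpr = false_positive_rate(1000, base_rate, flagged_share, precision=0.6)
    print(f"{name}: false positive rate = {fpr:.0%}")
# group A: 44% of non-re-offenders wrongly flagged; group B: only 17%.
```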

Many other algorithms have since proven to be racially biased, like the image-labelling algorithm in Google Photos that tagged photos of people of colour as “gorillas”. Understanding how algorithms become biased is critical for operating a socially responsible technology business.

2. Invasion of Privacy

At this point, voice assistant devices have become commonplace in many households around the world. Sold with the promise of making life at home more efficient, convenient and comfortable, Amazon’s Alexa has become an integral part of tens of millions of homes, with well over 100 million Alexa-enabled devices sold. But what seemed like a promise of relief has quickly turned into a cause for alarm for some customers.

Some customers have described “Kafkaesque” situations in which their voice assistants began repeating commands they had given days earlier. Others found their devices could access supposedly confidential audio files recorded by someone else’s device. And virtually every user can recall a moment when their device activated without being prompted. Are these isolated incidents, or inherent flaws within the technology?

Moreover, whenever a voice assistant is asked to perform a task, the recording is processed by an AI system that not only listens but also transcribes it. Amazon has admitted to storing these transcripts on its own servers, which means the company holds what is effectively a written record of its users’ most private conversations and intimate moments at home.

Even when they are not sending data to the cloud, voice assistants are always listening: the device continuously processes audio locally so it can detect the wake word that activates it, and only then streams the recording to Amazon’s servers. Amazon claims these recordings are listened to by humans only in some instances, for the purpose of improving Alexa’s services. There have been incidents, however, where recordings have shown up as evidence in court cases without explanation. Some experts have gone so far as to equate home assistants with a state of “constant surveillance”, a situation not dissimilar to George Orwell’s novel “1984”.
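
The mechanics are worth spelling out. A rough sketch of the always-listening pattern (hypothetical code, not Amazon’s actual implementation) shows why the device must hear everything even if it uploads almost nothing:

```python
# Rough sketch of the always-listening pattern (hypothetical, not Amazon's
# actual implementation): audio is processed locally in a short rolling
# buffer; only a wake-word match triggers any upload at all.
from collections import deque

def run_assistant(microphone, detect_wake_word, stream_to_cloud,
                  buffer_chunks=20):
    """microphone yields short audio chunks; the two callbacks stand in for
    a local wake-word model and the network upload, respectively."""
    buffer = deque(maxlen=buffer_chunks)
    for chunk in microphone:
        buffer.append(chunk)               # every chunk is heard locally...
        if detect_wake_word(buffer):       # ...but only the wake word triggers
            stream_to_cloud(list(buffer))  # an upload (a real device would go
            buffer.clear()                 # on streaming the command after it)

# Demo with stand-ins ("audio" is just strings here):
mic = iter(["noise", "noise", "alexa", "turn on the lights"])
run_assistant(mic, lambda buf: "alexa" in buf, print)
```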

While Amazon surely (or rather, hopefully) didn’t intend its products to have these consequences, this is an excellent example of the inherent flaws within a technology’s design.

3. Declining Mental Health & Well-Being

Social media platforms like Facebook and Instagram (or MySpace, back in the day) were originally designed to give people an online space to be themselves and to connect with friends and family wherever in the world they may be. Facebook’s vision statement says the company wants to empower people to build community and bring the world closer together. While this is a noble pursuit, the platform has since morphed into something far bigger and more powerful than what was originally intended.

Let’s think of our own social media presence: we generally present a heavily embellished, edited version of ourselves online. That makes sense, as our online networks have expanded from just friends and family to acquaintances, cousins twice removed and, in the case of LinkedIn, even our professional contacts.

Given the breadth of our personal “social network”, we naturally want to show our most confident, intelligent, best selves. This doesn’t mean our online personas are “made up”. It just means we never give people an accurate or full account of our real, multifaceted selves.

These “best” versions of the other people in our network can have serious effects on our self-image and self-esteem. An unfortunate truth about social media is that it directly affects our mental health and has been linked to heightened levels of social anxiety, depression and even suicide.

While there are undoubtedly many benefits of using social media, we need to reflect on and critically think about how these platforms affect our mental health and social wellbeing. Some warning signs are looming on the horizon already, as research suggests that the recent increase in anxiety and depression in adolescents can be partially attributed to excessive social media usage.

4. Social & Democratic Risks

Social media is also a major cause for concern around social and democratic risks. Every day we willingly upload and share information about our daily lives, personal thoughts, political views and, of course, what we had for breakfast. By providing companies with this information on a silver platter — for free — we’re giving them insights into who we are, what we like and how we think.

Facebook doesn’t just store our information; it analyses our data and distils it into psychological profiles. At best, these profiles are used to surface the content we “like” the most, to keep us glued to our screens with endlessly refreshing news feeds, and to customise the ads we see, all with the aim of getting us to buy products we may or may not need. At worst, social media can be used to actively disenfranchise voters and undermine democracy.
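
The benign version of this is plain engagement-optimised ranking, along these lines (a deliberately simplified sketch with hypothetical scoring, not Facebook’s actual system):

```python
# Deliberately simplified sketch of engagement-optimised feed ranking
# (hypothetical scoring, not Facebook's actual system): posts are ordered
# by how strongly the user's profile predicts interaction with each topic.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str

def rank_feed(posts, engagement_profile):
    """engagement_profile maps topics to predicted engagement for one user."""
    return sorted(posts,
                  key=lambda p: engagement_profile.get(p.topic, 0.0),
                  reverse=True)

profile = {"outrage_politics": 0.9, "cute_animals": 0.6, "local_news": 0.2}
feed = rank_feed([Post("a", "local_news"),
                  Post("b", "outrage_politics"),
                  Post("c", "cute_animals")], profile)
print([p.topic for p in feed])
# Whatever the profile says we engage with most rises to the top,
# regardless of whether it is good for us.
```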

The Cambridge Analytica scandal is one of the starkest examples of how things can go very wrong when individuals and organisations with not-so-good intentions get their hands on our data. Using data scraped and bought from Facebook, the company allegedly manipulated people’s political state of mind by presenting highly tailored campaign material and other messages aimed at influencing how people voted in the 2016 U.S. presidential election. More recently, it has come to light that the Trump campaign used psychological profiles to actively discourage people from voting. Trump’s strategists acquired the data of over 200 million U.S. citizens and sorted them into audience categories.

Voters who were sympathetic to the Democratic Party but not necessarily core voters were dubbed “deterrents”. Trump’s chief data scientist later said that they “hope they don’t show up to vote”. And it worked. A report by Channel 4 revealed that in 16 key swing states, disproportionate numbers of “deterrents” who had voted in the previous election ended up not going to the polls in 2016. They had been subjected to highly targeted ads on Facebook and other platforms; in some instances the campaign released 6 million different versions of the same message.

This is perhaps one of the greatest social risks facing our world: the way social media can be used to diminish our trust in people, society and modern democracy.

These four, along with many more…

The four areas of concern mentioned above only touch on some of the problems we’re encountering today; there are many others, like the future of autonomous warfare and the possibility of artificial general intelligence (AGI): the point at which AI attains human-like mentality, including the ability to sense and experience the world as humans do.

As mentioned at the beginning, most, if not all, entrepreneurs start out with not just good but the very best of intentions, often aiming to change the world for the better. This, however, can only be achieved if you keep the inherent risks and problems of your technology in mind, and if you act proactively to mitigate and minimise those risks for your own product and business.

Take the Responsible Technology Assessment today!

The Responsible Technology Assessment (RTA) is an anonymous ethics assessment designed for tech founders, entrepreneurs and technologists in general.

The purpose of the RTA is to raise awareness of the major social risks associated with technology applications and to provide practical tips and advice for improving the ethics of your tech business.

The RTA focuses on the importance of business ethics and how ethics can assist managers and executives with running a socially responsible technology company. It also covers social and cultural understanding, technology design, and consequential risks (health, democratic and social risks).