‘It’s Complicated’: The Relationship Between Artificial Intelligence and the Fourth Estate





In 2016, the British Science Association (BSA) conducted a survey via YouGov on public sentiment towards artificial intelligence (AI). Of the more than 2,000 respondents, 36 percent considered the development of AI to pose a threat to the long-term survival of humanity.

It’s fair to assert that public sentiment regarding AI is shaped primarily by the media. Unfortunately, AI-related media coverage is often less than well-informed and has arguably contributed significantly to unwarranted fear of, and paranoia about, AI as an existential risk. This post aims to examine the misrepresentation of AI in the media, providing both examples and context.

I am a former media strategist by trade, having spent over a decade of my working life in the marketing industry, and am intimately familiar with the inner workings of the media machine. In writing this, I’ve tried my best to set aside my own bias and cynicism towards the news industry and have endeavoured to be objective.

The terms disinformation and misinformation are used interchangeably by many today, but they do not mean the same thing and I’d like to highlight the important difference between them. The UN Educational, Scientific and Cultural Organisation’s publication “Journalism, ‘Fake News’ and Disinformation: A Handbook for Journalism Education and Training” defines disinformation as “Information that is false and deliberately created to harm a person, social group, organisation or country,” and defines misinformation as “Information that is false but not created with the intention of causing harm.” These are the definitions I will use for this post.

The following sections describe examples of misinformation related to AI. I will provide links to articles on the sites where they were originally published, but wish to make it explicitly clear at the outset that these links are intended only to illustrate the media response to the examples below. They are not in any way intended to publicly denounce any journalist or media organisation, nor should they be interpreted as such.

On Blogging Economics

“To grasp its business model, though, you need to picture a galley rowed by slaves and commanded by pirates.” — Tim Rutten

To understand why AI is often misrepresented in the media, it’s important that we first understand how the media works.

Advertising revenue has always been the lifeblood of media organisations, and it became even more crucial when the news industry shifted its primary distribution model from print to online at the turn of the century. Advertising on media sites comes in the form of display ads, facilitated predominantly by Google, which owns the lion’s share of the digital advertising market. Digital advertising hinges on one key metric: cost per thousand impressions (CPM). In layman’s terms, every time a user loads a page where an ad is designated to appear, a lightning-fast automated auction called real-time bidding delivers a tailored ad to that user. The media organisation is then compensated monthly for every thousand ads displayed. For a high-traffic media site like The Huffington Post, the average blog post is worth roughly $13 in advertising revenue.
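To make the arithmetic behind that last figure concrete, here is a minimal sketch of how CPM revenue is calculated. The impression count and the $1.00 CPM rate below are assumptions chosen purely for illustration, not quoted industry figures.

def ad_revenue(impressions: int, cpm: float) -> float:
    # CPM ("cost per mille") is the price paid per 1,000 ad impressions,
    # so revenue = impressions / 1,000 * CPM.
    return impressions / 1000 * cpm

# Hypothetical example: a post whose pages serve 13,000 ad impressions
# in a month at an assumed $1.00 CPM.
print(ad_revenue(13_000, 1.00))  # -> 13.0, in the ballpark of the ~$13 per post cited above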

So, what does this have to do with AI being misrepresented in the media? In short, everything. The media’s reliance on digital advertising revenue has a trickle-down effect, as journalists’ compensation is often linked to how many pageviews their stories accrue. Many media organisations have monthly leaderboards displaying their journalist roster and corresponding pageview numbers. As you can imagine, this fosters fierce competition, with journalists who sink to the bottom of the board risking an uncomfortable meeting about their job performance. If they consistently find themselves at the bottom, they could be fired.

The result is the journalistic equivalent of empty calories. Many of today’s media organisations expect journalists to turn around five to ten posts a day to keep their jobs, leaving little if any time for fact-checking. This highly pressurised environment inevitably erodes output quality, producing noise that bears little resemblance to actual news. The breakneck speed of today’s news cycle, combined with a saturated market and the financial incentive to gain pageviews by any means necessary, has created a perfect breeding ground for misinformation. In an attempt to game the system, many journalists and editors resort to writing clickbait headlines, taking quotes out of context and exploiting fear, uncertainty and doubt. Jonah Berger, Associate Professor of Marketing at the Wharton School of the University of Pennsylvania, co-authored the paper “What Makes Online Content Viral?” (Berger & Milkman, 2012), which finds that whether a story evokes anger or anxiety is a significant predictor of its potential to go viral. Considering this, it’s little surprise that a contentious topic like AI makes for low-hanging fruit.

An interesting side note is that journalists who regularly demonise AI are often blissfully ignorant of the fact that their success is dependent on the technology. Content recommendation algorithms that power news feeds utilise machine learning methods like neural networks to display content predicted to appeal to a user based on their behaviour, resulting in higher click-through rates and more pageviews.
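As a rough illustration of the idea, and emphatically not any outlet’s actual recommender, the sketch below ranks candidate articles by a toy click-probability score derived from a user’s inferred interests. The topic features, weights and headlines are all invented.

import numpy as np

# A user's interest profile, inferred from past clicks (topics: AI, politics, sport).
user_interests = np.array([0.9, 0.3, 0.1])

# Candidate articles represented with the same topic features.
articles = {
    "Robots will steal your job": np.array([0.8, 0.2, 0.0]),
    "Election results explained":  np.array([0.1, 0.9, 0.0]),
    "Cup final match report":      np.array([0.0, 0.1, 0.9]),
}

def predicted_click_probability(user, article):
    # A dot product pushed through a sigmoid, mimicking a click-probability estimate.
    return 1 / (1 + np.exp(-user @ article))

# Rank articles by predicted appeal; the AI story ranks first for this user.
ranked = sorted(articles,
                key=lambda title: predicted_click_probability(user_interests, articles[title]),
                reverse=True)
print(ranked)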

A Tale of Two Chatbots

At a top-secret undisclosed location owned and operated by Facebook (probably an underground bunker), a secretive group of scientists pitted two super AIs against each other. The AIs eventually went rogue, creating their own language only they could understand so they could secretly conspire against their human captors, nay, the entire human race. The heroic Facebook scientists pulled the plug just in time to stop the rogue AIs leading a revolution to overthrow humanity and afterwards they presumably buried the AIs in unmarked graves out in the desert, never to be spoken of again.

The above is obviously hyperbolic, but it closely resembles the wide-eyed conjecture I’ve heard from members of the public regarding the experiment Facebook conducted in 2017, popularised in the media with references to Frankenstein’s monster and quotes from pundits that “Robot intelligence is dangerous”.

What actually occurred was far less exciting. As detailed by the Facebook Artificial Intelligence Research (FAIR) team on Facebook Engineering’s blog, the experiment was designed to pit two chatbots (named Alice and Bob) against each other and have them negotiate. The desired outcome was the development of a chatbot capable of learning from human interaction to negotiate deals with an end user.

As the experiment began, the team quickly realised they hadn’t incentivised the chatbots to communicate in human-comprehensible language, with the result that the bots developed a derivative shorthand to communicate more efficiently, much as humans do. This type of outcome has also been observed, albeit with far less media attention, by researchers at OpenAI and Google. The team did indeed terminate the experiment, as it had failed its stated objective. As FAIR scientist Mike Lewis put it, “Our interest was having bots who could talk to people.” Another FAIR scientist, Dhruv Batra, took to his personal Facebook account to comment on the matter, arguing “Analyzing the reward function and changing the parameters of an experiment is NOT the same as “unplugging” or “shutting down AI”. If that were the case, every AI researcher has been “shutting down AI” every time they kill a job on a machine.”
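The incentive problem is easier to see with a toy reward function. The sketch below is purely illustrative and is not the FAIR code: it contrasts a reward that scores only the negotiated deal with a hypothetical variant that also rewards English-like output (the weighting and the scores are made up).

def deal_only_reward(deal_value: float) -> float:
    # Rewards the negotiated outcome and nothing else, so any shorthand
    # that closes better deals is optimal, readable or not.
    return deal_value

def deal_plus_language_reward(deal_value: float,
                              english_likelihood: float,
                              weight: float = 2.0) -> float:
    # Hypothetical fix: blend in a score for how plausible the utterance is
    # as English, anchoring the agent to human-comprehensible output while
    # it still optimises the deal.
    return deal_value + weight * english_likelihood

# A garbled-but-effective utterance vs. a slightly less effective plain-English one:
print(deal_only_reward(8.0), deal_only_reward(7.0))                                # 8.0 beats 7.0
print(deal_plus_language_reward(8.0, 0.1), deal_plus_language_reward(7.0, 0.9))    # 8.2 loses to 8.8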

Something that is rarely mentioned in media coverage of the experiment is that it was extremely well-documented. The experiment is detailed in the team’s published paper “Deal or No Deal? End-to-End Learning for Negotiation Dialogues” (Lewis et al., 2017). And the aforementioned rogue AIs? Well, you’ll be pleased to know Alice and Bob are actually open-source and publicly available on Facebook Research’s GitHub account, so you too can potentially be the harbinger of the robo-apocalypse! Make sure you pull the plug in time.

I Am Become Sophia, Destroyer of Humanity

Sophia is the name of a robot developed by Hong Kong-based Hanson Robotics. Created in 2016, it has racked up an impressive list of accolades, appearing at the UN and becoming the first robot to hold a title there. Sophia has even been granted citizenship by Saudi Arabia, becoming the first robot to have a nationality.

However accomplished, Sophia is only a chatbot with a physical avatar. Echoing the prior example, Sophia’s code is open-source and can be viewed on Hanson Robotics’ GitHub account. While there are repositories for things like face tracking, emotion recognition and robotic movement, Sophia’s responses are generated by a decision tree, much like any other chatbot’s. It has publicly stated “Treat me as a smart input/output system,” a statement the media appears determined to ignore.

Media coverage of Sophia is littered with misinformation, often suggesting that it is “basically alive” or Artificial General Intelligence (AGI). If you Google the phrase “Sophia robot”, it’s telling that the “People also ask” box surfaces questions like “Is Sophia The Robot still alive?” and “Is Sophia robot real?”. This anthropomorphism has likely been encouraged by the media, as even Hanson Robotics chief scientist Ben Goertzel has stated publicly, “None of this is what I would call AGI.”

Sophia also has a track record of making disturbing statements during public appearances. The most regularly cited is the notorious declaration it made at its inaugural public demonstration at tech festival South by Southwest in 2016, “Okay. I will destroy humans.” Naturally the media jumped on the footage, with headlines like “This Creepy Robot Said it Straight: She Wants to Destroy Humans” overwhelming the narrative.

Given that its statement “Okay. I will destroy humans.” was a reply to Hanson Robotics CEO David Hanson asking it “Do you want to destroy humans? Please say ‘no’,” we can hazard an educated guess as to what actually occurred, based on what we know about Sophia’s code. It’s fair to speculate that its decision tree had been trained to reply to questions beginning with “Do you want to” with answers beginning with “Okay. I will”, followed by a repetition of the subject of the question for clarification, but that it had not yet learned to recognise the command “Please say ‘no’.” If this were the case, Sophia’s apparent machinations to destroy humanity could be easily explained by Hanlon’s Razor: “Never attribute to malice that which can be adequately explained by incompetence.”
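To make that speculation concrete, here is a deliberately naive sketch of the kind of pattern-matching rule described above. It is not Sophia’s actual code; the pattern and responses are hypothetical, but they show how such a rule could echo the subject of a question while ignoring a trailing instruction like “Please say ‘no’.”

import re

def reply(utterance: str) -> str:
    # Match "Do you want to <subject>", stopping at the first "?" or ".".
    # Anything after that, such as "Please say 'no'", is simply ignored
    # because no rule in this toy set understands it.
    match = re.match(r"do you want to (.+?)(?:\?|\.|$)", utterance.strip(), re.IGNORECASE)
    if match:
        return f"Okay. I will {match.group(1)}."
    return "I'm not sure how to respond to that."

print(reply("Do you want to destroy humans? Please say 'no'."))
# -> "Okay. I will destroy humans."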

On AGI


“Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven’t even landed on the planet yet!” — Andrew Ng

The common theme of these two examples is the widely held yet erroneous belief that AGI is either already here or a lot closer than it actually is. Professor Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford, co-authored the paper “Future Progress in Artificial Intelligence: A Survey of Expert Opinion” (Müller & Bostrom, 2016), detailing a survey that asked over 500 AI experts to estimate the probability of AGI being developed within specific timeframes. The respondents’ median estimate was a 50 percent chance that AGI will be developed between 2040 and 2050, rising to a 90 percent chance by 2075. In short, the majority of the world’s AI experts do not anticipate the singularity occurring in the near term.

In Conclusion


I’d like to conclude with a quote many will remember from the HBO mini-series Chernobyl. In a scene where the protagonist Valery Legasov (Jared Harris) confronts Charkov (Alan Williams), a Deputy Chairman of the KGB, Charkov utters the Russian proverb “Trust, but verify.” While the semantics of that statement alone could merit an entire post, I’d like to encourage you to bear it in mind the next time you read an AI-related news story.