Market concentration, democracy and artificial intelligence…a call to policymakers

Original article was published by Joanne E. Gray on Artificial Intelligence on Medium



The power of big tech

The dominance of the digital environment by a handful of powerful multinational corporations is a familiar issue for most policymakers.

In Australia, the ACCC has investigated the impact of digital platforms on competition in media and advertising; the European Commission has fined Google EUR 4.34 billion for breaching the region’s antitrust rules; and in the United States, the leaders of ‘big tech’ were called before Congress to discuss their predatory behaviour.

The technology companies that dominate in the digital age are powerful actors in contemporary society and we are right to question their reach and influence. They create, own and manage global technological systems; they produce and regulate private marketplaces; and they influence cultural and political conditions when they moderate the flow of information in society.

In our digitally networked society, immense social, economic and political power is concentrated in a small number of private technology companies.

Maintaining democratic social and political systems requires confronting this concentration of power and, increasingly, the AI-enabled technologies through which it is exercised.

Democracy and automation

Automated systems that use machine learning models, for example deep neural networks, are increasingly deployed to regulate the digital environment. They are used to personalise products and services, to recommend content, to target advertising, and to enforce laws.

Automated systems are particularly beneficial to internet-based firms because of the scale at which they tend to operate. Machines can generate insights and make decisions far more quickly, and at far greater scale, than humans can.

Automated decision-making, however, is also notoriously difficult to interrogate and hold to account. With machine-learning-based regulatory systems, we cannot simply read the code to understand how a decision was made: the machine identifies patterns and reaches conclusions inside a ‘black box’.
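To make the ‘black box’ point concrete, here is a minimal, hypothetical sketch (in Python with scikit-learn, neither of which the article mentions). A small neural network classifier can make decisions at scale, but its ‘reasoning’ is stored only as matrices of learned weights, not as rules a regulator could read.

    # Minimal, hypothetical illustration only: a small neural network's
    # decision-making lives in weight matrices, not human-readable rules.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Toy data standing in for, say, features of user-posted content.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # Train a small neural network to label items (e.g. "allow" vs "remove").
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    model.fit(X, y)

    # The model happily makes decisions at scale...
    print(model.predict(X[:5]))

    # ...but "reading the code" of those decisions means reading these
    # weight matrices: thousands of numbers with no legible meaning.
    for i, weights in enumerate(model.coefs_):
        print(f"layer {i}: weight matrix of shape {weights.shape}")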

When big tech rules

When policymakers turn to private companies to regulate the digital environment, we further entrench their power.

For example, when we ask Google or Facebook to stop the spread of harmful content online, we ask these platforms to be gatekeepers of information, to protect us from harm and to adjudicate critical social and cultural disputes.

In effect, we charge private companies with the task of regulating vast public spaces and immense pools of public information.

And because of the size of that task, platforms inevitably seek to automate.

Of course, the size of the task is partly why we outsource internet governance to platforms in the first place — democratic legal and political institutions find it hard to make decisions at the speed and scale demanded by the internet.

So what we have is a situation where private companies regulate digital spaces, typically by deploying algorithmic systems and employing large moderation workforces, often located in developing nations.

We outsource and, as much as they can, they automate.

This is a problem for democratic governance because, in effect, we are creating a situation in which powerful private companies use substantially opaque and unaccountable systems to regulate the digital environment.

Moreover, when private companies undertake the difficult task of regulating the internet, they will always do so with their own commercial interests in mind.

Platforms like Facebook and Google operate for a profit and are accountable to shareholders. Commercial imperatives can motivate companies to create systems that are efficient, but they may also lead to systems that prioritise efficiency over accuracy and privilege commercial interests over the public interest.

In short, as AI technologies advance in the context of concentrated private power in the digital environment, we are at risk of creating a future where private companies use partially unknowable technologies to make decisions that impact all areas of our lives: in our homes, at work, in healthcare, education, finance, transport and so on.

For this reason, policymakers should view AI not simply as a technical or industrial issue but also as an institutional one.

The task at hand: A short list of policy objectives for protecting democratic societies in an increasingly automated world

  1. We need to mandate that the public interest and other ethical principles are encoded into socio-technical systems alongside or in place of private commercial interests. While key components of machine learning occur in a black box, AI is not neutral or apolitical. Real actors design and deploy automated systems, using real data about real people and events. Currently, private companies are largely left to determine how these systems are designed and in whose interest they operate. This needs to change.
  2. We need to create public institutions that can hold powerful private decision makers to account. At the same time, we must ensure that we do not outsource too much regulation to powerful private companies. Democratic public institutions must find a way to stay in control in the face of dynamic and rapidly changing technological conditions.
  3. Where private actors undertake public regulatory responsibilities, we need to regulate the use of automated technologies to ensure transparent, accountable and fair rule-making and enforcement.
  4. We need to diminish the concentrated private power enjoyed by Big Tech. This will likely require new data policy interventions because a monopolisation of data underpins the current distribution of power in the digital environment. We need to implement effective policies for addressing Big Tech’s data advantage.

Easy, right?