Original article was published on Artificial Intelligence on Medium
The missing piece for Artificial Human Intelligence [2/6]
Current State of AI Regulations
In the last episode, I wrote about why AI regulation was such a hot topic pre-corona and why I think we should reboot the debate now.
In this part, I'd like to go deeper into the current state of AI regulation and present a framework that illustrates how the different factors shaping AI influence and depend on each other, and that serves as the basis for finding interdependencies and solutions.
Regulating AI for safety reasons is nothing new, but the importance of such regulation has recently attracted political attention. The EU, in particular, is very active and is currently working on EU guidelines for AI ethics, although corona has slowed these efforts down. This is why the General Data Protection Regulation (GDPR) is still the only binding piece of legislation that touches AI directly in the EU (there are other privacy and security frameworks such as the CCPA, NIST, and ISO/IEC 27701).
That's why I'll use the GDPR in the following to check the feasibility of AI regulation.
The GDPR has unified the process of data protection throughout the EU and mainly addresses rules for the processing of personal data by private companies and public bodies — in other words, what applies to Germany now also applies to France, Italy, Spain, etc.
On the one hand, this is intended to ensure the protection of personal data within the EU; on the other, it aims to ease the free movement of data within the European internal market. One may forget that data protection is itself a human right (as noted in Article 8 of the Charter of Fundamental Rights of the EU) and a protection against all forms of arbitrariness. Yet being a human right does not make it universally applicable: it cannot restrict other fundamental rights. The principles written down in the GDPR primarily concern the processing of personal data and its enforcement.
The connection between AI and the GDPR is rather complicated. First, the use of AI-based applications must comply with the general principles of lawful data processing set out in Article 5(1) of the GDPR. These principles range from lawful processing and data minimization to the possible anonymization of personal data, storage limitation, and the protection of integrity and confidentiality. But AI techniques like deep learning and machine learning are generally based on the analysis of massive amounts of data, so anonymization, storage limitation, and the like have a massive impact on how AI must be developed.
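To make the tension concrete, here is a minimal sketch of what data minimization and pseudonymization could look like before records ever reach a training pipeline. Everything here is invented for illustration: the field names, the allow-list of training features, and the salt are assumptions, not anything prescribed by the GDPR or used by any real system.

```python
import hashlib

# Hypothetical allow-list: only the fields the model actually needs
# (data minimization, in the spirit of Art. 5(1)(c)).
FIELDS_NEEDED_FOR_TRAINING = {"age_band", "purchase_category"}

def pseudonymize(user_id: str, salt: str = "rotate-me-regularly") -> str:
    """Replace a direct identifier with a salted hash. Note this is
    pseudonymization, not full anonymization: whoever holds the salt
    can still re-link the records."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def minimize(record: dict) -> dict:
    """Drop everything outside the allow-list and swap the direct
    identifier for a pseudonym."""
    out = {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_TRAINING}
    out["uid"] = pseudonymize(record["user_id"])
    return out

raw = {"user_id": "alice@example.com", "age_band": "30-39",
       "home_address": "Somestreet 1", "purchase_category": "books"}
clean = minimize(raw)
print(clean)  # no e-mail, no address; only a pseudonym and two features
```

The point of the sketch is the trade-off discussed above: every field the allow-list drops is a field the model can never learn from, which is exactly why these principles shape how AI systems must be built.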
Article 22 of the GDPR formulates the right "not to be subject to a decision based exclusively on automated processing, including profiling," which "has legal effect vis-à-vis the persons concerned or significantly impairs them in a similar way." In other words, everyone has the right not to be subjected to purely automated decisions. This could also feed the debate about a possible corona tracking app: people have the right to know how their data is used and what is tracked, at least when the data feeds decisions that affect human rights.
For example, an algorithm alone cannot decide whether a person gets a loan; the bank itself must be able to explain why credit was denied. In addition, the GDPR strengthens the data subjects' right to information. Recital 71 highlights transparency and accountability as follows: "[the data subject should have] the right … to obtain an explanation of the decision reached".
What does this mean? Data subjects not only have the right to know which of their personal data has been processed and how; they also have the right to receive meaningful information about the logic involved, as well as the scope and intended effects of such automated decision-making.
This raises an important question for AI companies: how can these requirements be fulfilled in a manner the data subject can understand, and how can the transparency requirements and information duties of the GDPR be satisfied without disclosing valuable proprietary information (such as algorithms)?
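One hedged answer to that question: explain the decision, not the algorithm. The toy sketch below scores a loan applicant with a simple linear model and reports each feature's contribution to the final score, which is roughly the kind of "meaningful information about the logic involved" Recital 71 points toward. The feature names, weights, and threshold are all invented for this sketch; no real bank's scoring logic is implied.

```python
# Hypothetical linear credit score whose per-feature contributions can
# be disclosed to the data subject without publishing the model itself.
WEIGHTS = {"income_k": 0.8, "existing_debt_k": -1.2, "years_employed": 0.5}
BIAS = -10.0
THRESHOLD = 0.0

def decide(applicant: dict):
    """Return (approved, score, per-feature contributions)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score >= THRESHOLD, score, contributions

approved, score, why = decide(
    {"income_k": 30, "existing_debt_k": 25, "years_employed": 4})
print("approved:", approved)
for feature, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {contribution:+.1f}")
```

A breakdown like this tells the applicant that, say, existing debt dominated the refusal, while the exact weights and the rest of the pipeline stay in-house. Whether that level of disclosure actually satisfies the GDPR is the open legal question the paragraph above raises.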
The GDPR also guards against discrimination by restricting the processing of special categories of personal data (Article 9). This means AI must be designed to exclude discrimination, which could be caused, for example, by the absence of representative data in image recognition. But how is that even possible while the black box problem remains unsolved?
At the same time, consider that humans are also black boxes to a certain extent: do you always know what is on the mind of every person you deal with? There are countless examples of discrimination based on gender, ethnicity, class, and religion that humans manifest, consciously or unconsciously.
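Even when a model's internals are opaque, one can probe it from the outside: nudge one input at a time and watch how the output moves. That is the intuition behind local explanation methods such as LIME-style probing. The sketch below uses a stand-in "black box" function invented for illustration; in practice the opaque model would be a trained network you can only call, not read.

```python
def black_box(x):
    # Stand-in for an opaque model: pretend we cannot read this code
    # and can only observe inputs and outputs.
    return 3.0 * x[0] - 0.5 * x[1] + 0.0 * x[2]

def local_sensitivity(f, x, eps=1e-3):
    """Finite-difference sensitivity of f around the point x:
    how much does the output change per unit change of each input?"""
    base = f(x)
    sens = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        sens.append((f(bumped) - base) / eps)
    return sens

point = [1.0, 2.0, 3.0]
print(local_sensitivity(black_box, point))  # roughly [3.0, -0.5, 0.0]
```

A probe like this only explains the model's behaviour near one specific input; it does not open the black box, which is why the regulatory question above remains hard.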
I'm curious to see how the upcoming European AI rulebook will complement the GDPR and address its shortcomings.
To understand the dynamics of the various factors as well as their influence on AI regulations in a better way, I created the SEPITEL (society, environment, politics, industry, technical, ethical, and legal) framework:
Figure 1: SEPITEL framework (light version) of the dynamics of AI
The framework is divided into macro- and micro-dynamics. The macro level shows society, politics, industry, and the environment. The black circular arrows represent the influence that the macro areas have upon each other.
The impact and dynamics of AI in different areas of society, politics, the industry, and the environment significantly influence the relevance of associated regulations. So, all forces have an influence on every other force. Let me elaborate on that.
AI has a passive impact through its influence on the citizens or the economy of a country, but it also directly shapes the associated laws and regulations. A frequently cited example of the social impact of AI is the replacement of jobs by machines and the associated employment loss. In many industrial sectors, machines already use AI to perform work faster, more reliably, and more cost-effectively than humans, a trend that will continue to shape the labour market for the foreseeable future. Time will tell whether large numbers of human workers will simply be replaced by AI machines or whether as many new jobs will be created as are lost. But it is clear that AI already has a considerable influence on the job market and will have an even bigger one in the future.
Another major influence on the politics governing the AI industry is the power shift in recent years, particularly the size of companies which both use and offer AI. Tech companies such as Google, Amazon, Facebook, and Apple (collectively referred to as GAFA) are not only the first to be valued at a trillion dollars, but they have also created internet platforms with more than a billion user accounts around the world. 
These companies wield enormous power in many ways. In corona times, the benefit of social networks, internet communication, and remote work has never been greater, helping us live in a new normal of social distancing. But during the 2016 US presidential election there was a big controversy about the harmful influence of some of the tech giants. In a US court, Google, Facebook, and Twitter were accused of not doing enough to protect their platforms from being exploited by Russian trolls who tried to manipulate the election with fear and hatred. In response, the defendants stated that they were indeed aware of the Russian troll-generated campaigns, but that such actions were difficult to link directly to the election, as the hate campaigns did not mention the candidates by name and only addressed issues such as refugees, weapons laws, and so on. But elections and referendums are not the only things AI algorithms can manipulate. In Myanmar, more than 30 million people use Facebook every day. Members of the Rohingya, a minority community in Myanmar, became the target of a rapidly growing number of hate comments on Facebook, leading to increased attacks on them in real life. One driver of this increase was Facebook's "filter bubble," which surfaces what it considers the most relevant information for each user and suppresses information that does not match the user's point of view or opinion. This filter bubble further concentrated the aversion against the Rohingya on social media.
GAFA, as representatives of AI development, not only have a massive direct impact on politics and society; they are also economically important in the fields of research and education. In 2016, tech giants like Baidu and Google spent between $20B and $30B on AI research and development and on investments in start-ups. In recent years, numerous promising European AI companies have been bought by bigger American and Chinese companies. One example is the British company DeepMind, which developed an AI that could beat the world's best player at the complex game of Go; Google bought DeepMind for around $500m. Beyond start-ups, even top-class researchers are being enticed away from Europe to companies like Google. This "brain drain" is creating research and teaching deficits in European institutes and companies, widening the AI research gap. On the investment side, the Japanese conglomerate SoftBank announced it would raise a second $100 billion fund to invest in AI.
And China? China not only wants to become the global leader in AI; it also plans to use AI to control its citizens through a so-called social credit system, to be fully implemented by the end of 2020. This social credit system is an online rating or scoring system that accesses various databases, such as creditworthiness, criminal records, and the social and political behaviour of individuals, companies, and other organizations such as NGOs, to determine how much they contribute to the Chinese nation. The official aim of the social credit system is to educate Chinese society to be more "sincere" in terms of social behaviour through comprehensive monitoring. What do you think about that? Does the Chinese government believe humans are fundamentally good, fundamentally bad, or something in between?
Of course, the examples above highlight the rather negative influence of AI. But AI is not only a bad influence on the world; it could be used to tackle the most difficult problems, like climate change, biodiversity and conservation, healthy oceans, water security, clean air, and weather and disaster resilience, or even accelerate the development of a vaccine against COVID-19.
To underline the SEPITEL influences once more, here are some statistics taken from the latest AI Index Report:
In the US, the share of jobs in AI-related topics increased from 0.26% of total jobs posted in 2010 to 1.32% in October 2019.
In a year and a half, the time required to train a large image classification system on the cloud infrastructure fell from about three hours in October 2017 to about 88 seconds in July 2019. During the same period, the cost to train such a system fell similarly.
At the graduate level, AI has rapidly become the most popular specialization among computer science PhD students in North America, with over twice as many students as the second most popular specialization (security/information assurance).
There is a significant increase in AI-related legislation in congressional records, committee reports, and legislative transcripts around the world.
Research and Development
China now publishes as many AI journal and conference papers per year as Europe.
Fairness, interpretability, and explainability are identified as the most frequently mentioned ethical challenges across 59 ethical AI principle documents.
Summing up: the industry uses and develops AI, which has an impact on society (usage, income, outcomes, etc.), on politics (which must regulate the industry and is in turn pushed by it), and on the environment (for example, the rising energy consumption of cloud computing, while the industry must also keep climate change in mind). Political decisions influence society, the industry, and the environment, for example through laws concerning AI. Society uses AI and shapes how it is adopted, either directly or through the political consequences of elections. The environment is influenced by all of these factors and, in turn, influences all other forces (think again of global warming).
One thing is certain: developing AI regulations for good is a complex matter and must be considered from different points of view. Therefore, I will go one level deeper in the next episode and analyze the micro-dynamics and factors of AI regulations in more detail. I hope you have enjoyed reading this episode. I'm always curious about your feedback.
 European Union. (2018, September 22). Retrieved from https://eur-lex.europa.eu/legal-content/DE/TXT/PDF/?uri=CELEX:32016R0679&from=DE
 Eu-datenschutz-grundverordnung.net. (2018, September 22). Retrieved from https://eu-datenschutz-grundverordnung.net/eu-dsgvo/
 PrivazyPlan. (2018, September 22). Retrieved from http://www.privacy-regulation.eu/en/recital-71-GDPR.htm
 Heß, M. (2018, September 22). Retrieved from https://www.fonial.de/blog/artikel/lesen/kuenstliche-intelligenz-und-dsgvo-394/
The black box problem: it is difficult or even impossible for humans to understand where the outputs/decisions of an AI come from, as the internal behaviour of the model is unknown.
 Mourdoukoutas, P. (2018, September 17). Retrieved from https://www.forbes.com/sites/panosmourdoukoutas/2018/09/16/apple-is-worth-one-trillion-dollars-amazon-isnt/#171aabd125f5
 Statista. (2018, September 19). Retrieved from https://de.statista.com/themen/1842/soziale-netzwerke/
"An individual who posts false accusations or inflammatory remarks on social media to promote a cause or to harass someone. The anonymity of such venues enables people to say things they would not say in person and they often like to ratchet up emotions to generate strong reactions." https://www.pcmag.com/encyclopedia/term/68609/internet-troll
 Bufithis, G. (2018, September 19). Retrieved from http://www.gregorybufithis.com/2017/10/31/the-unstoppable-power-of-gafa-google-apple-facebook-amazon-a-new-series/
 McKinsey. (2018, September 19). Retrieved from https://www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx
 Su, C. (2018, September 19). Retrieved from https://techcrunch.com/2014/01/26/google-deepmind/
 Shead, S. (2018, September 19). Retrieved from https://www.forbes.com/sites/samshead/2018/08/06/softbank-billionaire-devoting-97-of-time-and-brain-to-ai/#799db6fa537d
 Kleinz, T. (2018, September 20). Retrieved from https://www.heise.de/newsticker/meldung/34C3-China-Die-maschinenlesbare-Bevoelkerung-3928422.html
 Creemers, R. (2018, September 20). Retrieved from https://chinacopyrightandmedia.wordpress.com/2016/05/30/state-council-guiding-opinions-concerning-establishing-and-perfecting-incentives-for-promise-keeping-and-joint-punishment-systems-for-trust-breaking-and-accelerating-the-construction-of-social-sincer