Digital Future for Europe response to the EU AI white paper
The EU’s AI white paper should be commended for its ambition, but it may severely limit innovation and investment in the D9 nations and Europe as a whole if poorly implemented
The EU’s AI white paper will have a significant impact on the use and growth of AI in the D9 countries and around Europe. Now that the white paper is open for consultation, we have some specific concerns that we want to share with you.
The D9 nations, Europe’s digital frontrunners, have a strong and welcoming business environment, with open trade-based economies. These nations are regularly ranked highest in Europe for the quality of their business environment, digitisation, and levels of open data.
These are essential elements for encouraging a healthy tech ecosystem, and also a healthy economy.
How the EU and national governments decide to regulate AI and the use of data will have huge knock-on effects throughout the traditional economy, not just the tech sector.
High standards of trust
We want to see AI develop in a way the public can trust. This means AI which complies with GDPR, respects our citizens’ concerns about privacy, and addresses worries about how AI will affect the job market.
We also see AI as an opportunity. It is encouraging to see the European Commission President describe herself as a ‘tech optimist’, and we hope this attitude is embraced fully by European legislators.
The white paper’s definition of high risk AI is too broad
The EU wants to take a strict approach to risk. The white paper proposes grading AI applications along a spectrum, from low-risk applications up to the highest-risk ones, which would be banned outright.
The white paper takes a sectoral approach to risk. This means that any AI application working in certain sectors deemed high risk would be required to meet mandatory assessments. The paper considers sectors like healthcare, transport, and legal matters potentially high risk, and it also hints that it may deem AI technologies working in recruitment high risk as well, due to the potential for discrimination.
A further concern is that high-risk AI applications are often used for activities which are already high-risk. Autonomous vehicles are a perfect example: they are high-risk and can cause accidents, but human driving is also quite “high-risk”. The important thing is to maximise safety overall, not to safeguard against AI-caused accidents specifically. Regulation of high-risk applications should therefore be reviewed on a case-by-case basis, comparing them to the risks that already exist.
An onerous set of commitments
For AI technologies deemed high risk, the Commission recommends a mandatory risk assessment process which ensures that all AI technology meets the following requirements:
- Training data
- Data and record-keeping
- Information to be provided
- Robustness and accuracy
- Human oversight
- Specific requirements for certain particular AI applications, such as those used for remote biometric identification
The demands for training data are an unnecessary burden for companies, which already want and need good data. Requiring them to hold the right sort of data misses the point. It is far better to open up public datasets, spearhead data sharing between nations and across Europe, and legislate for data interoperability than to demand certain standards of data from companies and scientists.
This risk assessment process would stifle many AI innovations before they have developed.
For example, the stipulations regarding human oversight and robustness attempt to regulate AI before it has even been used, placing a burden on AI innovators. The best way to ensure robustness and accuracy is to allow AI applications to operate in the real world, so problems can be identified and addressed.
The insistence on human oversight defeats the point of many types of artificial intelligence, which are about reducing the need for human oversight of many tasks. Embedding human oversight in the design phase of AI applications — as suggested — would be damaging to the development of innovative AI technologies.
Ex ante legislation will hinder innovation in Europe
By introducing ex ante regulations — regulations which try to prevent future problems rather than solve existing problems — the EU threatens to skew the whole development of European AI. We advocate taking a learning approach to AI, which recognises that developing reliable, safe, and trustworthy AI has to be a trial and error process.
What makes AI exciting is that most of the gains we will enjoy are not yet known. They will come about from innovation, and finding out how new technologies work in practice and how they can be adapted to new data.
In order to make this approach as successful as possible, access to data must be prioritised above all else. AI only knows the data it is given. Algorithmic bias is a problem that we all want to tackle. But this bias is not a fault of the technology or the people behind it, it is a result of poor data. With access to greater data, which represents the diversity of our society, it will be far easier to build AI applications which reflect and understand society, and steadily reduce bias in AI.
Make greater use of voluntary labelling for AI applications
The white paper suggests a voluntary system of labelling for lower-risk AI applications. This would be available to AI applications which are not high risk but wish to signal their trustworthiness to their customers and users.
This is similar to the Danish Seal for ethical AI, and it could be replicated across Europe if necessary. It would be an important signal to consumers that AI companies are willingly making progress to win public trust and to empower citizens and customers.
Ideally, this system would be embedded in an overarching learning approach to AI, where all AI applications are given the opportunity to develop using real world evidence and attain public trust transparently, without hard regulations.
How have other groups responded?
A number of groups in our network and other expert organisations have aired their concerns about the white paper and the EU’s approach to regulation.
Ex-ante conformity assessments will make it more expensive and time-consuming to bring new AI technologies to market. There is a worry that they will require companies to disclose a lot of intellectual property, a risk which adds to the disincentive to start an AI business in the EU or launch in the EU market. (Centre for Data Innovation)
The EU currently has no AI assessment centres and is facing a skills shortage in AI and IT, so it is unclear what the testing procedures and facilities will look like. The EU wants this testing to be done by member states, but it is unlikely that member states will acquire the necessary expertise either, meaning they will likely outsource testing to international companies. This increases the intellectual property risks mentioned above. (Project Disco)
As the Centre for Data Innovation says, a broad definition of high-risk and low-risk AI jeopardises the development of AI in these sectors: there are plenty of low-risk AI applications which will be involved in high-risk sectors, and it raises the example of chatbots being used in health.
AI is a trial and error process, it needs a learning approach
In December 2019, the Netherlands outlined its position on how best to regulate AI in Europe. The Dutch support a human-centric approach to AI, as does the EU, but advocate above all a ‘learning approach’ to AI regulation. This approach understands that AI is a trial and error process: it needs to be developed and honed in the real world, with real-world evidence. Above all, regulators should use soft laws rather than hard legislation to oversee this, recognising that AI moves quicker than regulators, and should be governed by industry standards rather than statutory instruments.
Benedikt Blomeyer from Allied for Startups makes the case that if data sharing is compulsory, then gathering data becomes a less attractive prospect. Data is valuable for companies; it is the lifeblood of AI. We support open data initiatives for publicly-owned datasets, but companies developing their own AI and their own datasets must be allowed control over their data for a given time, similar to intellectual property rights in pharmaceuticals.
The EU’s proposals on data sovereignty have concerned many. German MEP Damian Boeselager, an entrepreneur himself, asked the EU Commission why it has chosen “sovereignty over competition”, stating that data sovereignty will limit innovation and benefit lawyers instead of startups.
Prioritising EU-only datasets which comply with European rules poses the following challenges to startups:
- It may limit the data available to EU startups
- It is a barrier for companies seeking to launch in the EU market, if they have to retrain models and adapt to new data rules
- Datasets will be smaller and less diverse, since not only race but also habits, institutions, culture, and means of collection shape the data
- It penalises existing global dataset and cloud providers like Google, Amazon and Microsoft, all of whom already store their data in accordance with GDPR and EU rules.
Good tech policy is good growth policy
AI is the next era-defining, pervasive technology. No part of the economy will be untouched by its developments. We must understand that our future growth and prosperity depends on harnessing the benefits of AI properly.
For a set of clear, coherent, and practical policy recommendations for supporting Denmark and Europe’s AI sector, please look at the Digital Future for Europe manifesto, available here.
This has been developed by a coalition of startups and associations across Europe, and highlights what Europe’s innovators truly need to succeed in the age of AI.
Supporting AI should be at the heart of Europe’s digital future
As more European countries set up their own AI strategies, and lay out their visions for AI investment, we want the digital frontrunners to lead Europe with an innovative, evidence-based, and ambitious approach.
- Encourage responsible adoption first, and regulation second. AI is an opportunity first and foremost: it unlocks huge opportunities to transform our society and economy for the good. Europe must not rush to regulation; it should embrace what works, work to demystify and explain AI to the public, and trust its entrepreneurs and startups.
- Make sure regulations are simple, flexible, and based on sound principles. AI technology is developing faster than any law can predict. Regulators should take a soft approach to AI regulation, prioritising adoption over precaution. Laws should be flexible rather than prescriptive, and principles-based rather than strictly precautionary. It is only by allowing innovation and experimentation that we will build the real-world evidence needed to develop the most effective regulations.
- Address ethical concerns with more, better data. AI only knows the data it is given. Algorithmic bias is a problem that we all want to tackle. But this bias is not a fault of the technology or the people behind it, it is a result of poor and unrepresentative data. With access to greater data, which better represents the diversity of our society, it will be far easier to build AI applications which reflect and understand society, and steadily reduce bias in AI.
The digital frontrunner nations are uniquely placed to lead the way, showing Europe how a ‘fast lane’ for AI can work in practice, and how heavy-handed regulations would significantly set Europe back as we journey through the age of AI.
- Open up public sector datasets in the D9 countries to automate government processes. Learn from each other’s work and aim to invest in automation in the public sector across a range of priority areas, such as transport, clean energy, education and healthcare.
- Introduce the free movement of data and data interoperability between D9 states. The D9 can and should be a trial area for new innovations with data. D9 nations Estonia and Finland are leading the way with the unification of their government data, and the group should follow. This will allow AI companies to expand across Europe, and pull down electronic barriers within the Single Market.
- Lobby the EU to conduct an AI ‘refit’ of existing and future legislation. The EU has a ‘refit programme’ to improve its legislation, and make it more efficient and effective, particularly for small businesses. An AI refit would amend laws like GDPR, and help put the needs of SMEs and startups at the heart of EU policy.