Shaping EU regulations on artificial intelligence: the five improvements – JD Supra



Dentons

In our previous “bite” (see here) we addressed some of the requirements for the future EU regulatory framework on artificial intelligence (AI), as laid out by the European Commission in its White Paper on Artificial Intelligence. These requirements include training data, data and record-keeping, information to be provided, robustness and accuracy, and human oversight.

In this bite we address the five main adjustments to the current EU regulatory framework that, according to the Commission, may be pursued in order to create a “trustworthy” European AI ecosystem capable of competing on a global basis.

The Commission highlighted that, in order to be effective, future regulations on AI should address all potential safety risks, including those that may not be adequately covered under the existing liability regimes.

Furthermore, all players within the AI value chain should be taken into account, from developers and deployers (i.e. the persons or companies using the AI product or service) to all the other players involved (e.g. importers, distributors, software and service providers, etc.). The regulations should be addressed to those who are best placed to manage any potential risks; for instance, privacy-by-design provisions should be addressed to developers, whilst certain usage limitations will be specific to deployers.

Considering the above, the Commission identified the following improvements to be considered for the AI regulatory framework:

  1. Clarifications on liability – Besides the Product Safety Directive, liability is currently governed by differing national legislations, causing uncertainty and market fragmentation that prevent companies from reaching the optimum scale for development. A common liability regime, beyond product safety, would provide more certainty as to the role of the players involved throughout the AI lifecycle and would also allow effective enforcement.
  2. Services safety – The EU product safety legislation could be extended beyond the placing of products on the market to cover a wider array of services, including services based on AI. This would also clarify liability for standalone software used in the creation of “AI-powered products”.
  3. AI evolution – AI systems are per se a “dynamic concept”, as such systems can learn and evolve through upgrades. Regulations should adequately address this evolution, including product and functionality changes introduced through software updates. This will increase security by also covering risks that were not present when the product was first placed on the market.
  4. Clearer roles – The role and responsibilities of each player involved in the AI supply chain should be clearly identified. This is particularly important given the wide array of players involved: the liability of some of them (e.g. producers) is currently addressed by EU legislation, while that of others is left to national regulations, creating uncertainty and fragmentation.
  5. Broader safety – The EU definition of “safety” should also be reconsidered to cover “new” risks, including, for instance, cybersecurity and loss of data or connectivity, as well as other more “futuristic” risks, such as the mental safety of those who work with humanoids.

That said, as specifically stated by the Commission, any future regulatory framework should not be overly prescriptive, but should rather focus on the most “high-risk” AI applications. In certain instances, a prior conformity assessment will be required, with specific procedures for testing, inspection and, where applicable, certification (which could potentially also be carried out in non-EU jurisdictions, on a mutual-recognition basis). Given the dynamic and evolving nature of AI systems, repeated assessments may be required over the AI system’s lifecycle, including retraining obligations, and adequate documentation should be kept to allow ex-post controls and continuous market surveillance. Low-risk AI systems may, in addition, benefit from prior assessment or certification on a voluntary basis.

The White Paper on Artificial Intelligence is still subject to consultation. We will keep you updated on any developments; in the meantime, let us know if you require any additional clarification, and don’t forget to sign up for our bites!