Recent draft ICO guide on Artificial Intelligence

The phenomenon of Artificial Intelligence is undoubtedly growing: its use “automates” a range of activities that were once the exclusive preserve of human hands and minds.

What deserves our attention is not the fear of the human being's imminent replacement by the thinking machine, but the conscious and responsible use of the data that power its operation and its interactions.

We have already examined interaction with Artificial Intelligence systems in job interviews, as regulated in the state of Illinois (you can read my in-depth analysis here), as well as an early regulatory and clarifying intervention by the Norwegian Data Protection Authority on the main points of tension with GDPR compliance (my other in-depth analysis here).

Clearly, no State can avoid analyzing and studying the phenomenon, which, together with other technological innovations, represents the future in every sector, from the social to the economic.

The most recent intervention comes from the UK Information Commissioner’s Office (“ICO”) working together with the Alan Turing Institute, a collaboration that has produced a three-part consultation on explaining decisions made by artificial intelligence, entitled “Project ExplAIn”. This initiative, launched in mid-2018 as part of the United Kingdom’s priority commitment to AI development, is not a corollary of the Data Protection Act but simply a guide to best practices for explaining the processing of data to those affected by AI-assisted decision-making systems.

The first part is aimed at personal data professionals (such as DPOs), i.e. those directly involved in developing this field who must be able to ensure proper privacy and compliance, with everything centered on the concept of explainability.

The second part focuses essentially on practical and empirical aspects: how to explain the workings of Artificial Intelligence technologies to individuals.

The third part aims to provide guidance on the roles, processes, and documentation that each company must put in place in order to give stakeholders comprehensive answers.

Both the GDPR and the Data Protection Act already contain specific provisions on profiling, automated processing, and large-scale automated decision-making involving personal data, thereby also regulating the use of AI so that it respects the GDPR principles of fairness, transparency, and accountability.

Article 13(2)(f) of the GDPR explicitly requires the data controller to proactively inform the data subject about: “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”.

Article 15(1) likewise states: “The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data and the following information”, with point (h) listing the same information as above.

In the document, the ICO identifies six types of explanation to be provided to individuals, set out below:

Rationale explanation: the reasons that led to a decision, explained in a simple and non-technical way, which also helps companies comply with the GDPR.

Responsibility explanation: who is involved in the development, management, and implementation of an artificial intelligence system, and whom to contact, if anyone, for a “human” review of the decision.

Data explanation: what data was used in a particular decision and how; what data was used to train and test the AI model and how.

Fairness explanation: the measures taken in the design and implementation of an AI system to ensure that its decisions are generally impartial and fair, and whether or not a person has been treated fairly.

Safety and performance explanation: the steps taken in the design and implementation of an artificial intelligence system to maximize the accuracy, reliability, safety, and robustness of its decisions and behavior.

Impact explanation: the impact that the use of an AI system and its decisions has or can have on an individual and society in general.

The ICO does not stop there and also sets out four guiding principles for companies:

I. Be transparent: make the technical logic underlying the model’s output comprehensible, and provide simple reasons that the individuals concerned can easily evaluate;

II. Be responsible: consider responsibility at every stage of the design and implementation of the AI system, and whether the design and implementation processes have been made traceable and verifiable across the whole project;

III. Consider the context in which the company operates;

IV. Reflect on the impact of the company’s AI on the individuals concerned and on society in general, for example, whether the model has been designed, verified, and validated to ensure its safety, accuracy, reliability, and robustness.

In light of these “recommendations”, data controllers bear the burden of ensuring that any Artificial Intelligence system whose development has been entrusted to external bodies is properly explainable. Where “opaque” algorithmic techniques, inaccessible to human understanding, are used, companies will be obliged to put additional risk-prevention measures in place, so that any kind of “anomaly” can be predicted, analyzed, evaluated, and mitigated (perhaps through techniques such as decision trees or rule lists, linear regression, case-based reasoning, or logistic regression).

Here the document goes into more purely technical use cases, such as black-box AI systems, including neural networks and random forests, which should only be used if their potential impacts and risks have been carefully considered. It also distinguishes between explaining the individual results of an AI model (so-called “local explanation”) and explaining how the model works across all of its results (so-called “global explanation”), and between intrinsic and extrinsic explanations, depending on whether the decision-making process can be decomposed into intelligible steps or whether its internal logic is completely inaccessible due to its complexity, as in black-box systems.
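By way of illustration only, and not as anything the ICO prescribes, here is a minimal Python sketch, assuming scikit-learn and a stand-in dataset, of how an intrinsically interpretable model such as logistic regression can support both kinds of explanation: its coefficients give a global explanation of the model’s overall logic, while coefficient-times-feature contributions give a rough local explanation of one individual decision.

```python
# Illustrative sketch only: the ICO draft does not prescribe any library or code.
# Assumes scikit-learn; the built-in dataset is used purely as stand-in data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

# An intrinsically interpretable model: logistic regression on scaled features.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Global explanation: the coefficients describe the model's overall logic.
coefs = model.named_steps["logisticregression"].coef_[0]
for i in np.argsort(np.abs(coefs))[::-1][:3]:
    print(f"Globally, '{feature_names[i]}' carries weight {coefs[i]:+.2f}")

# Local explanation: per-feature contributions to one individual's decision
# (coefficient x scaled feature value; the intercept is omitted here).
x_scaled = model.named_steps["standardscaler"].transform(X[:1])[0]
contributions = coefs * x_scaled
print(f"Decision for this individual: class {model.predict(X[:1])[0]}")
for i in np.argsort(np.abs(contributions))[::-1][:3]:
    print(f"Locally, '{feature_names[i]}' contributed {contributions[i]:+.2f}")
```

A black-box model such as a random forest would not yield explanations this directly, which is why the draft suggests weighing such a model’s impacts and risks before choosing it.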

Finally, the ICO warns that companies that fail to explain AI-assisted decisions may face regulatory action, reputational damage, and public disengagement.

In the Appendix to the draft Guide, to which responses are due shortly, no later than 24 January 2020, the ICO sets out some key steps for technical staff to implement an automated system properly. These can be summarized in two types of conduct: explanations of AI systems that demonstrate that best governance practices were followed during design and use; and explanations that clarify the results in plain language, as well as the reasons why a human judgment “augmented” by the output of an Artificial Intelligence system is useful.

All Rights Reserved

Raffaella Aghemo, Lawyer