Originally published by Raffaella Aghemo in Artificial Intelligence on Medium
Artificial Intelligence and FINRA Report on securities market (PART THREE)
We now come to the Third Section of the Summary Report, “Artificial Intelligence in the Securities Industry”, which aims to bring clarity to the securities market, and greater regulatory control, to those responsible for using and supervising this new technology.
In recent years, numerous incidents involving fraudulent, harmful, discriminatory, or unfair Artificial Intelligence applications have been reported, highlighting the issue and the need to establish initiatives and develop principles that promote the ethical use of AI.
There are a number of factors to consider in this sensitive area, ranging from model risk management, data governance, customer privacy, and supervisory control systems to information security, outsourcing and vendor management, books and records, and workforce structure, all of which require careful regulatory analysis.
Let’s go through the various aspects listed above, one by one:
Model Risk Management.
There are many areas to consider:
– updating the model validation process, for example for Machine Learning models, to take their complexity into account. This includes reviewing input data (e.g., checking for potential bias/distortions), algorithms (e.g., checking for errors), any parameters (e.g., checking risk thresholds), and output (e.g., determining the explainability of the output);
– the need and opportunity for initial and ongoing testing, including tests covering unusual scenarios, such as unprecedented market conditions or new datasets;
– the development of models in parallel, with new models replacing previous ones only once they have passed all the required tests;
– the maintenance of an inventory of all Artificial Intelligence models, in order to evaluate their efficiency and risk impact;
– the development of benchmarking and monitoring of the models in use, both to avoid false negatives and to ensure proper functioning, even for models that train themselves and evolve over time;
– the provision of human control and supervision of ML models, as well as verification of their compatibility and compliance with regulatory and legal requirements.
There are three FINRA rules that guide companies and their associated persons in this area: among these, Rule 3110, Supervision, which states that “Each member shall establish and maintain a system to supervise the activities of each associated person that is reasonably designed to achieve compliance with applicable securities laws and regulations, and with applicable FINRA rules”.
With reference to the concept of explainability highlighted above, the intent is to incorporate it as a key consideration in the model risk management process for AI-based applications. This may require application developers and users to provide a written summary of the key input factors and the rationale attributed to the outputs. Models can then be independently tested by model validation teams or external parties. In addition, there is an opportunity to include human review to safeguard “autonomous” models, together with guardrails or controlled risk thresholds, such as minimum trading amounts, in trade orders.
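The idea of human review thresholds and guardrails on trade orders can be sketched in a few lines of code. The following is a hypothetical illustration only (the thresholds, field names, and routing logic are assumptions, not taken from the Report): an order proposed by an AI model executes autonomously only while it stays within controlled risk limits; otherwise it is routed to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    notional: float  # order value in USD

# Hypothetical risk thresholds above which a human must review the order
MAX_AUTO_NOTIONAL = 10_000.0
MAX_AUTO_QUANTITY = 500

def route_order(order: Order) -> str:
    """Return 'auto' if the order may execute autonomously,
    'human_review' if it breaches a controlled risk threshold."""
    if order.notional > MAX_AUTO_NOTIONAL or order.quantity > MAX_AUTO_QUANTITY:
        return "human_review"
    return "auto"
```

In practice such guardrails would sit alongside, not replace, the independent model validation described above.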
The second factor, Data Governance, opens up another set of intervention needs in different areas:
– the examination of the dataset underlying a potential AI application for possible embedded bias, e.g. during the testing process. “A more diverse AI community would be better equipped to anticipate, review and identify bias and involve affected communities”;
– regular review of the legitimacy and authority of data sources, particularly where data is derived from an external source;
– the assessment of data integration in all business systems: although data may traditionally reside in silos in different parts of the organisation, companies are now creating central data lakes in order to ensure consistency in data use, to maintain adequate levels of access rights, and to create synergies in data use;
– the development of security systems to safeguard data, including customer data;
– the implementation and development of benchmarks relating to the effectiveness of data governance programmes.
Industry participants noted that one of the most critical steps in building an Artificial Intelligence application is obtaining and building an underlying dataset that is sufficiently large, valid, and current. On the other hand, incorporating data from many different sources can introduce greater risks if the data is not tested and validated.
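The data governance checks described above could be sketched as a pre-training validation step. This is purely a hypothetical illustration (the grouping field, threshold, and message formats are assumptions): it flags records missing a required field and datasets dominated by a single subgroup, a crude proxy for embedded bias.

```python
from collections import Counter

def validate_dataset(records, group_key="region", max_share=0.6):
    """Flag datasets where records lack the grouping field or where any
    one group exceeds `max_share` of the total (possible embedded bias)."""
    issues = []
    groups = Counter(r.get(group_key, "<missing>") for r in records)
    total = sum(groups.values())
    if groups["<missing>"]:
        issues.append(f"{groups['<missing>']} records missing '{group_key}'")
    for group, count in groups.items():
        if group != "<missing>" and count / total > max_share:
            issues.append(f"group '{group}' is {count / total:.0%} of data")
    return issues
```

A real governance programme would of course look at far more than group shares (data lineage, freshness, access rights), but the pattern of running automated checks before data enters a model is the same.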
The third factor to be analyzed is customer privacy, which involves proper compliance with a number of regulations relating to the protection of financial information and customers, including:
– SEC S-P regulation, Privacy of consumer financial information and protection of personal information,
– SEC S-ID, The Red Flags Rule, which requires broker-dealer companies to develop and implement identity theft prevention programs, and
– NASD Notice to Members 05–49, Safeguarding Client Confidential Information, which requires companies to maintain policies and procedures that address the protection of client information and records and ensure that their procedures adequately follow and reflect technological change,
as well as constantly updating written policies and procedures for the use of customer data, or other information collected for AI-based applications.
AI-based customer support tools may involve the collection and use of personally identifiable information (PII) and biometric data, or focus on information such as a customer's use of the website or application, geolocation, or social media activity; other tools may even include recording written, voice or video communications with customers. While AI tools based on this type of information may offer companies insight into customer behavior and preferences, they may also pose privacy concerns for customers if the information is not adequately safeguarded.
Supervisory Control Systems
Finally, it is necessary to evaluate the Supervisory Control Systems, in compliance with FINRA Rules 3110 and 3120, by creating:
– an interdisciplinary technology governance group to supervise the development, testing and implementation of AI-based applications;
– in-depth testing;
– a fallback or back-up plan; the definition of back-up plans in case of failure of an AI-based application (e.g. due to a technical failure or an unforeseen outage) can help to ensure that its function is carried out through an alternative process, in accordance with FINRA Rule 4370 on Business Continuity;
– a plan to verify the appropriate FINRA licences and registrations of personnel (e.g. FINRA Regulatory Notice 11–33 states that certain staff members performing back office functions must be qualified and registered as Operations Professionals).
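The fallback plan contemplated by FINRA Rule 4370 can be illustrated with a minimal sketch (all function names here are hypothetical): if an AI-based process fails, for instance due to a technical failure or outage, its function is still carried out through an alternative, conventional process, and the event is logged for supervisory review.

```python
def score_with_fallback(applicant, ai_model, rule_based_fallback):
    """Try the AI model first; on any failure, fall back to the
    alternative process and record which path was used."""
    try:
        return ai_model(applicant), "ai"
    except Exception as exc:  # technical failure, outage, model error
        # Log the event so supervisors can review the outage later
        print(f"AI model unavailable ({exc}); using fallback process")
        return rule_based_fallback(applicant), "fallback"
```

Returning which path produced the result makes it straightforward to audit how often the back-up process was actually invoked.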
Additional considerations concern:
– Cybersecurity, which should be an essential component of the assessment, development and testing process of any AI-based application;
– Outsourcing and vendor management: while companies seek the benefits offered by AI, many choose to outsource specific functions by purchasing turnkey applications from vendors, such as systems for financial crime monitoring and trade surveillance, while remembering that outsourcing an activity or function to a third party does not relieve a company of responsibility for compliance with all applicable securities laws and FINRA regulations;
– Books and Records, in compliance with FINRA Rule 4510 (Books and Records Requirements);
– finally, the impact on Workforce Structure resulting from the integration of AI into business processes.
The Report closes with a request for comments, to be submitted by 31 August 2020, on how FINRA can develop rules that support the adoption of AI applications in the securities sector without compromising investor protection and market integrity.
All Rights Reserved
Raffaella Aghemo, Lawyer