A is for Accountability — oversight in the age of artificial intelligence

Source: Artificial Intelligence on Medium


Artificial Intelligence (AI) is becoming an important phenomenon in our ever more digitalized lives. AI solutions are tools used for decision-making in areas ranging from financial services to criminal justice, from cybersecurity to investigations into irregularities. But underlying every AI solution is the human factor, not only technically but also in matters of integrity and bias; in short, ethics. Taka Ariga, Chief Data Scientist and Director of the Innovation Lab at the U.S. Government Accountability Office (GAO), and his colleague Stephen Sanford, Director of the GAO’s Center for Strategic Foresight, identify a pivotal role for their organization when it comes to reviewing the governance, oversight, and ethical standards used in AI solutions. Below they highlight some of their institution’s actions and considerations in this area.

By Taka Ariga and Stephen Sanford, U.S. Government Accountability Office*

AI — moving from technological to ethical questions

In our digitally connected world, data is often referred to as the new oil because of how critical it has become to powering our information economy. The scale at which we collect, process, and understand data today relies increasingly on complex systems of algorithms and models to help organizations make real-world decisions. These artificial intelligence (AI) solutions are growing in ways that represent a significant force behind what the World Economic Forum has dubbed the ‘Fourth Industrial Revolution.’

The development of AI solutions so far has focused primarily on testing and deploying algorithms and automation as quickly as possible. Data scientists have been eager to apply ever more exotic machine learning techniques and to leverage greater access to increasingly abundant data against complex challenges, quickly answering the question of ‘can we?’ The emphasis on speed and accuracy accelerated the growth of AI’s footprint in our lives. However, this sometimes meant that governance-related considerations like ‘should we?’ took a backseat.

We are now entering an important time when more observers and researchers are exploring the potential unintended consequences of AI algorithms. There is recognition that the capacity of AI to consume data and generate decision points not only carries many potential benefits but also could unintentionally create or magnify adverse societal consequences.

In her seminal book, Weapons of Math Destruction, Cathy O’Neil galvanized the data science community to pay more attention to important issues surrounding AI solutions such as biases, lack of transparency, and privacy. There are grassroots efforts among commercial providers, public sector organizations, and academic institutions looking at how best to establish and sustain robust AI governance frameworks. This marks an important pivot for the evolution of AI, where ethics is now established as one of the core responsibilities among data scientists and organizational stakeholders.

Reviewing AI on its governance

The trend of AI adoption in the foreseeable future is decidedly heading in only one direction: up. When the Comptroller General of the United States convened an expert forum as part of a 2018 technology assessment on AI, GAO explored the implications of AI’s use in high-consequence activities such as cybersecurity, financial services, automated vehicles, and criminal justice. (1) Our work, and the experts we spoke to, underscored the need for careful consideration of AI from a governance standpoint, including ethics, bias, explainability, and security. Without a governance structure, entities that develop, purchase, or deploy algorithmic systems face potential risks. This is where supreme audit institutions (SAIs) will play a pivotal role in the future.

For auditors, AI presents some important questions:

1. How can SAIs purposefully integrate AI capabilities across audit activities in ways that move the needle towards continuous auditing and 100% sampling?

2. How will SAIs, absent a uniform policy and regulatory framework, approach oversight of AI solutions when, not if, they are asked to provide assurance?

The direction towards creating robust AI governance of algorithmic development is certainly encouraging, but it is not sufficient to address accountability. ‘Trust, but verify,’ after all, is a guiding principle for auditors. SAIs must look ahead and be prepared to audit algorithms in ways that draw on both empirical and inferential evidence for assurance.

GAO’s Innovation Lab and Center for Strategic Foresight

GAO is addressing both challenges head-on. Our Innovation Lab, established in 2019 as part of GAO’s new Science, Technology Assessment and Analytics unit, is driving AI-based experimentation across audit use cases. (2) Our Center for Strategic Foresight, established in 2018, conducts research and identifies near-future challenges and emerging issues affecting government and society. (3) The Center is part of GAO’s Office of Strategic Planning and External Liaison, which directly supports the Comptroller General.

Our technology assessment on AI demonstrated the power of combining the tools of foresight and technology assessment to examine the potential future implications of AI developments. That study surfaced several important considerations with potential impacts on auditors. For example, access to data and data reliability take on a whole new meaning in the realm of machine learning, where data is used to train the systems that make algorithmic decisions. An auditor may be asked to examine actions and outcomes emanating from a system whose training data may be absent, outdated, or biased.

Whether a particular algorithm is explainable, and whether its decision-making process can be examined or repeated by an auditor, will also be a crucial question, with dependencies on both technology and intellectual property. Moreover, the use of data for training algorithms raises issues of data governance, data sharing, and data security. The governance challenge is further amplified when AI solutions are proprietary, depend on ‘black box’ techniques, or rely on commercially available models.
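To make the training-data concern concrete, consider a purely hypothetical sketch of one simple check an auditor might run: comparing how groups are represented in a training set against a reference distribution for the population a system serves. This is not a GAO method, and the data, attribute names, and threshold below are invented for illustration; real bias audits involve far more than representation rates.

```python
from collections import Counter

def representation_gaps(records, attribute, reference):
    """For each group in `reference`, return the gap between its share
    of the training records and its expected (reference) share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference.items()
    }

# Hypothetical training records an auditor might be handed:
# 80 urban cases, 20 rural cases.
training = [{"region": "urban"}] * 80 + [{"region": "rural"}] * 20

# Hypothetical reference distribution for the population served.
reference = {"urban": 0.60, "rural": 0.40}

gaps = representation_gaps(training, "region", reference)
# Flag any group whose share deviates from the reference by more
# than 10 percentage points (an arbitrary threshold for this sketch).
flagged = sorted(g for g, gap in gaps.items() if abs(gap) > 0.10)
```

Here rural cases are under-represented by 20 percentage points, so both groups' shares deviate beyond the threshold and are flagged. The point of the sketch is only that such checks presuppose access to the training data — precisely the access that may be missing when training data are absent, proprietary, or outdated.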

Developing an AI oversight framework

GAO is in the early stages of developing an AI oversight framework. We recognize the need to remain adaptive as AI capabilities evolve, along with the importance of collaborating across an ecosystem of stakeholders to formulate best practices. A contextual balance must be struck: conducting oversight without hindering continuous innovation. Table 1 below illustrates how we are thinking through different types of empirical and inferential questions for a risk-based assessment.

Table 1 — Governance related questions on AI

Shaping the future of accountability

Addressing oversight challenges posed by the breakneck pace of AI adoption means that SAIs need to build capacity now and start planning for any possible cultural disruptions. As a leader within the SAI community, GAO is embarking on a transformational journey through the Innovation Lab and the Center for Strategic Foresight to help shape the future of audit and accountability within our overall strategic plan. Considerations for achieving such a transformation could include the three pillars reflected in Table 2.

Table 2 — Identifying different impacts

Ushering in a new age of algorithmic accountability will require more from SAIs. They will need to reconsider their approaches to auditing data-driven automated systems from a culture, workforce, and infrastructure standpoint. There will likely be strong demand from legislatures and the public for accountability and assurance about governmental use of algorithms. SAIs need to be ready to answer this call. The field of AI is rapidly evolving, and its known and unknown effects will have a growing impact on government and society. SAIs need to start planning today for what tomorrow will bring.

*The views expressed in this article are solely those of the authors.

(1) Technology Assessment — Artificial Intelligence: Emerging Opportunities, Challenges, and Implications. GAO-18-142SP, 28 March 2018 (Washington, DC). https://www.gao.gov/products/GAO-18-142SP

(2) For more information and GAO’s science and technology work, see https://www.gao.gov/technology_and_science or https://blog.gao.gov/2019/10/29/our-innovation-lab-building-a-sandbox-for-audit-tech/

(3) For more information about the Center for Strategic Foresight, see https://www.gao.gov/about/what-gao-does/audit-role/csf/

This article was first published in the 1/2020 issue of the ECA Journal. The contents of the interviews and articles are the sole responsibility of the interviewees and authors and do not necessarily reflect the opinion of the European Court of Auditors.