Eyes on the prize of Responsible AI

Original article was published by Sean Singer on Artificial Intelligence on Medium


By Sean Singer and Steven Mills

The presentation of the Nobel Peace Prize to the World Food Programme (WFP) earlier today capped a week honoring those individuals and organizations who have, as Alfred Nobel phrased it in his will, conferred the “greatest benefit on humankind.”[1] Each awarded Nobel medal bears the visage of its namesake on the front side, with the reverse side image depending on the prize’s category. In 1996, however, the Academy commemorated Nobel’s life with a design that, on the reverse side of all medals, bore the image of a train tunnel. Why a tunnel? Because it had been blasted open by its namesake’s most famous and transformative invention: dynamite.

Dynamite changed the world by expediting the construction of railways, canals, and roads.[2] Writing in Scientific American in 1911, one observer noted: “The improvement in blasting explosives during the past two decades has been one of the principal factors in making possible the fast and comfortable trains now running on almost every railroad.”[3] Dynamite’s legacy, captured on the commemorative medal, reflected Nobel’s transformative impact.

That legacy of technological progress came with a human price that didn’t spare even the famed industrialist’s family. Nobel’s younger brother Emil died in an explosion at a family-owned factory in 1864.[4] By 1866, a string of deadly accidents had led to bans on the substance in countries around the world.[5] Dynamite’s victims were often among society’s most vulnerable populations. In the United States and Canada, for example, Chinese immigrants building the transcontinental railroads paid with their lives for the misuse of dynamite in countless blasting accidents.[6]

Just as dynamite opened up new avenues of transportation, communication, and commerce in the late 19th and early 20th centuries, Artificial Intelligence is transforming the provision of knowledge, goods, and services in the 21st century. It has the potential to shorten the journey to a vaccine, a medical diagnosis, a product, or a new employee. But like dynamite before it, that potential comes with the risk of causing tremendous harm — particularly to already marginalized communities. AI systems have denied credit, rejected job applicants, and infringed upon individual liberties as a result of embedded gender and racial biases. Unlike dynamite, however, AI is not a single, tangible product. It’s a collection of software code built by a diffuse global community, which makes managing its risks a tall order, particularly since the risks are not always obvious. The danger is easily recognized when the fuse is lit on a stick of dynamite. AI systems, by contrast, are opaque and often run quietly in the background, many times without the user’s full awareness. The risks are still present: the user just can’t see them.

As artificial intelligence assumes a more central role in every aspect of business and society, organizations have a responsibility to protect people from harm, whether physical, emotional, financial, or otherwise. While there are novel aspects of this technology, there are historical precedents, Alfred Nobel’s inventions among them, that can guide our thinking about Responsible AI. At the turn of the 20th century, Sweden created an Inspectorate of Explosives with the requisite technical expertise to assess risks and identify safety measures associated with the use of dynamite.[7] Those early attempts to assess the risk of explosives have evolved into exhaustive matrices and checklists that guide users to ensure the safety of workers and the general public when dynamite and other explosives are used.

A similar approach holds for AI. That’s why we created an assessment tool called RATE.ai to surface points of risk in use-case implementation. Our teams conduct a risk assessment for each use case based on the scale and severity of harm in the event of system failure. As part of that assessment, the team completes an in-depth questionnaire that forces team members to think through how the project intersects with our Responsible AI principles and policies. By identifying implementation risks early, our teams can monitor and mitigate them proactively.

At the most fundamental level, we’re changing the way we write code. We’ve custom designed modular, accessible functions to combat different types of biases in data. We’ve trained our data scientists to conduct unit tests and integration tests, utilize fairness metrics, test edge cases, and use specific packages that enhance auditability. Finally, we’re ensuring that our data scientists have the institutional support necessary to adhere to our Responsible AI principles. This support includes expanding the scope of our internal technical QC and audit team to guide our teams towards Responsible AI solutions.
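To make the testing practices above concrete, here is a minimal sketch of what a fairness unit test can look like. The function names and the 0.2 threshold are illustrative assumptions, not our actual internal tooling: the example measures the demographic parity gap between two groups’ selection rates and fails when the gap exceeds the chosen threshold.

```python
def selection_rate(predictions):
    """Share of positive (1) predictions within a group."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap between two groups' selection rates.

    A value near 0 means the model selects both groups at
    similar rates; larger values indicate disparate outcomes.
    """
    return abs(selection_rate(preds_group_a) - selection_rate(preds_group_b))

# Hypothetical model outputs for two demographic groups.
group_a = [1, 0, 1, 1, 0]   # selection rate 0.6
group_b = [1, 0, 0, 1, 0]   # selection rate 0.4

gap = demographic_parity_difference(group_a, group_b)
# In a real test suite this assertion would run on every build,
# flagging bias regressions before deployment.
assert gap <= 0.2, f"Fairness gap {gap:.2f} exceeds the 0.2 threshold"
```

In practice, a check like this would sit alongside conventional unit and integration tests, so that a change that widens the gap between groups fails the build just as a functional bug would.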

While the work we’ve undertaken to ensure Responsible AI may never merit a Nobel Prize, it is important nonetheless. Organizations implementing AI solutions have an obligation to their shareholders, their customers, and our society to do so in a responsible manner. The most exciting work is yet to come, as we begin working with these organizations to help them design and implement Responsible AI programs that address their unique values and needs. Gamma’s journey will continue as well, building connections and shortening the distance between us and our clients, and between our clients and their solutions. We believe this can only be done with Responsible AI.

[1] https://www.nobelprize.org/alfred-nobel/full-text-of-alfred-nobels-will-2/

[2] Kate Morgan, “How Dynamite Shaped the World,” Popular Mechanics (26 May 2020).

[3] Willard Young, “Dynamite as a railroad builder,” Scientific American (24 June 1911).

[4] https://www.nobelprize.org/alfred-nobel/alfred-nobel-life-and-philosophy/

[5] Josefin Sabo and Lena Andersson-Skog, “Dynamite Regulations. The Explosives Industry, Regulatory Capture and the Swedish Government 1858–1948,” International Advances in Economic Research (2017) 23: 191–209.

[6] See Gordon H. Chang, Ghosts of Gold Mountain: The Epic Story of the Chinese Who Built the Transcontinental Railroad (Houghton Mifflin Harcourt: Boston, 2019).

[7] Sabo and Andersson-Skog, “Dynamite Regulations,” 197.