Machine Learning and Ethical Risk Management: The Introduction of the 1+1=4 Ethical Methodology


By: Jody S. Johnson, J.D.

In the first chapter of Lewis Carroll’s 1865 classic, “Alice’s Adventures in Wonderland,” Alice follows the White Rabbit into his burrow, which transports her to the strange, surreal, and nonsensical world of Wonderland. When it comes to risk management and the assignment of liability, the rabbit hole can seem equally strange, surreal, and nonsensical. This is true whether one is trying to avoid liability or trying to figure out whom to sue. In this brief article, I will analyze a real-world case study as it pertains to ethical risk management and introduce an ethical methodology I have coined the 1+1=4 Approach. Although this ethical framework has multidisciplinary applications, which will be discussed in future publications, this article will focus primarily on its application within the field of machine learning and the race to find a vaccine for COVID-19.

In a recent article published by The Washington Post, the urgency and chaos of the race for a vaccine were analyzed and highlighted. The article reports that the urgency to find something — anything — for patients who have nothing other than supportive care has led researchers to pull everything off the shelf: a mix of existing drugs that show promise, stem cell treatments, and brand new compounds designed specifically against COVID-19. The current state of affairs presents a unique set of ethical dilemmas and forces us to consider whether we are placing a higher priority on finding a vaccine for COVID-19 than on the safety of the people who are counting on researchers to have their best interests at heart.

The following case study illustrates the emerging use of artificial intelligence and machine learning to address a global pandemic.

Case Study Overview

During a recent Bloomberg interview, Dave Turek, IBM Cognitive Systems Vice President of Technical Computing, was asked to provide insight on the role IBM is playing in the race to find a cure for COVID-19. (Link to interview) Turek referenced IBM’s construction of the largest supercomputer in the world, housed at the Oak Ridge National Laboratory. Turek explained how this computer has the capability of merging concepts of artificial intelligence with standard mathematical representations of problems, giving it the ability to examine the validity of compound combinations and predict which combinations warrant experimentation in the wet lab. Turek explained that the computer had run 8,000 possibilities and was able to narrow the viable combinations for testing to 77 within two days.

Many were amazed by the technological applications of the supercomputer. As Turek put it, the computer had done in days what would have taken years without the availability of modern technology. But what he said next made me reassess my short-lived amazement and consider the real-world implications of liability if the supercomputer had gotten it wrong.

Turek was asked about the organizational structure of the initiative, specifically whether it was an IBM initiative in concert with the government. He responded by stating that the construction of the supercomputer at the Oak Ridge facility was the result of an IBM partnership with the Department of Energy, and that IBM merely provides standard support for the operation of the computer. Turek further explained that the problem is being worked on by a scientist out of the University of Tennessee and the data researchers at the Oak Ridge facility. And with that statement, Turek had effectively absolved IBM of any liability associated with the outcomes of the experiments. So, who would be held liable if the computer got it wrong? What if the computer dismissed a combination that could potentially have provided a vaccine?

So who would be liable: the University of Tennessee? The data scientist? The data engineers? The Oak Ridge facility? Can we rule out liability attaching to the entities most likely to hold the intellectual property rights once a valid chemical combination is derived? Both IBM and the Oak Ridge research facility will potentially be absolved from liability because they operate as agents of the Department of Energy.

This leaves the scientist, the University of Tennessee, and the data engineers tasked with optimizing the predictive algorithms holding the proverbial short end of the stick if things go wrong. This is a reality most small research groups and start-ups fail to consider when contracting with larger entities that have substantial financial and authoritative resources. If things go right, the larger entities own the intellectual property rights; if things go wrong, everyone will know whom to blame. These are all considerations that warrant proper forethought during the risk management phase of your endeavors. Who is ethically responsible for adverse outcomes, and where does the ever-deepening rabbit hole of liability end?

1+1=4 Ethical Methodology

My 1+1=4 Methodology is an ethical approach to risk management that aligns the risks and benefits of the agency with the impact and potential outcomes relevant to marginalized populations. In other words, this ethical framework is a cost-benefit analysis that predicts and accounts for the potential liability related to the unintended consequences of optimization. The basic components of this concept are broken out below.

The input represents the available data or selected datasets, which establish your machine learning model and allow your predictive algorithm to be tested and validated. The output is what is necessary to develop and implement your machine learning and AI system, i.e., what is needed to create the optimal situation in which your predictive model can be deployed. This is the 1+1 component of ethical risk management: these are the tangible things that should be readily available for evaluation and should pose minimal difficulty to assess. The remaining components of the equation are more subjective and require a meta approach, since the results demand a degree of predictive analytics that is less accessible. These components are best articulated through application to the case study mentioned above.
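
To make that structure concrete, here is a minimal sketch in Python. The class and field names are my own hypothetical stand-ins, not part of the methodology as published; the sketch simply records the two tangible “1+1” pieces (inputs and outputs) alongside the assessment components (EC, AC, CO, and liability exposure) so that gaps between expected and actual outcomes can be surfaced for review.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EthicalRiskAssessment:
    """Hypothetical sketch of a 1+1=4 assessment record for one predictive model."""
    inputs: List[str]                 # the first "1": selected datasets that establish the model
    outputs: List[str]                # the second "1": what the deployed model produces
    expected_outcomes: List[str]      # EC: the results we hope the model delivers
    actual_outcomes: List[str]        # AC: the results the model in fact delivered
    optimization_costs: List[str] = field(default_factory=list)    # CO: consequential impacts of recalibration
    liability_exposures: List[str] = field(default_factory=list)   # liabilities tied to unintended outcomes

    def unexpected(self) -> List[str]:
        """Actual outcomes that fall outside the expected set; each one feeds the CO and liability review."""
        return [o for o in self.actual_outcomes if o not in self.expected_outcomes]
```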

Expected Outcomes (EC)

Assume you had a magic lamp with a genie inside to grant three wishes: what would you wish for? This phase is analogous, minus the lamp and genie of course. Expected outcomes stem from our attempts to produce the results we want from our predictive models, which makes this the riskiest component of your ethical risk management assessment. Expected outcomes rely heavily on the results of testing your machine model, which in turn relies heavily on the data sets selected as inputs for that model. That’s right: your expected outcomes are derived from the outcomes of your machine model, which are derived from the data sets you input into your model, inputs developed from your interpretation and manipulation of the available raw data. So why is this an inherently risky process, you may ask? Because it is innate in human nature to want to “get it right” and receive the fame and accolades associated with solving complex issues. So as a researcher, data engineer, or data analyst, from which do you draw the greater incentive: the fame and glory of producing a predictive model that generates the desired outcomes, or the publication of failed machine models that are ethically sound?

As applied to the case study, who has the most to gain from the selection and manipulation of data to produce viable inputs? The scientist? The Oak Ridge data engineers? IBM? The truth is that everyone associated with the project has an incentive to produce results and desirable outcomes. The harsh reality is that most actors participating in this project, excluding the scientist of course, benefit from a lack of regulatory oversight, which means all ethical decisions are assessed internally and weighed against the interests of the organization. So where does that leave us regarding liability? In the study, the expected outcome is the production of chemical and viral compounds that result in a vaccine for COVID-19. It can be inferred that most ethical decisions and risk management will be vetted by the Department of Energy, the governmental entity associated with the Oak Ridge facility. But do the interests of a governmental entity parallel those of the scientist generating the data set for input into the machine model? Does the Department of Energy have regulatory oversight and authority over the data sets developed and optimized for the predictive model? Does the ethical risk management assessment implemented by the University of Tennessee regarding the acts of its employee, the scientist, extend to the machine modeling and optimization deployed by the engineers at the Oak Ridge facility?

These complex questions require proper assessment within multilateral collaborations. Who is the authoritative body in charge of ensuring the ethical application and analysis of data? Our expectations, wants, and desires shouldn’t trump our ethics, but who should be tasked with making sure they don’t? Once we’ve waded through the muddy waters of ambition and desire, we move to the proper assessment of the actual outcomes of our predictive models.

Actual Outcomes (AC)

In an ideal world, all things would go as planned, and your actual outcomes would be as expected. In that world there is no need for further risk management, no need for recalibration; your job is done… If only it were that easy. Good, bad, or indifferent, we are tied to the outcomes of our decisions and must accept them as they are. This is the reality of any profession that uses predictive applications to affect future outcomes. This phase of your risk assessment will inevitably produce the most anxiety and chaos among your model developers. Things don’t always go as planned, and your perception of the outcomes is crucial to the proper recalibration of your predictive model.

The truth is that predictive models often yield unwanted outcomes. There’s a saying I always share with my daughter about this phenomenon: out of one hundred tries, you can and sometimes will get it wrong 99 times, and you may only get it right once. This is not only true within the machine modeling process, but is also a necessary factor to remember when applying ethical risk management. You will have outcomes that fall well outside your list of desirable or expected outcomes. These outcomes, regardless of how we feel about them, are actual and impactful. But once tested and verified, what purpose do unwanted outcomes serve? Applied to the case study, in particular the segment where the supercomputer was able to assess 8,000 possible combinations and narrow the viable possibilities to 77, we must ask ourselves what we can learn from the other 7,923 combinations. What purpose do these, albeit unwanted, actual outcomes serve? They provide data points of viable independent variables or options that can be used during optimization.
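
As a rough illustration of that point, here is a minimal sketch of how a large candidate pool might be narrowed while every rejected combination is retained as benchmark data. The `viability_score` function and the threshold are hypothetical stand-ins for whatever scoring logic the real model uses.

```python
# Minimal sketch: narrow a large pool of candidate compound combinations to the
# few worth wet-lab testing, while retaining every rejected combination (and its
# score) as benchmark data for later optimization. `viability_score` is a
# hypothetical stand-in for the model's actual scoring logic.

def triage_candidates(candidates, viability_score, threshold=0.9):
    accepted, rejected = [], []
    for candidate in candidates:
        score = viability_score(candidate)
        record = {"candidate": candidate, "score": score}
        if score >= threshold:
            accepted.append(record)
        else:
            rejected.append(record)   # unwanted, but still an actual outcome worth keeping
    return accepted, rejected         # e.g. 77 accepted, 7,923 rejected
```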

Although each rejected combination yielded a less than desirable outcome relative to the predictive model seeking a vaccine, each independent variable and tested combination provides benchmark data that could yield alternative outcomes during optimization. This brings us to another crucial point in your ethical risk management process, one that requires a hard-line inquiry into the causal connections to potential future liabilities. Is your predictive model capable of assessing these independent variables, and if not, who is or will be held liable for inadequate optimization?

Applied to the case study, who will be held liable if the computer omits a particular independent variable and that omission results in the development of a vaccine that brings about the death of a patient? Who is responsible for assessing the actual outcomes produced by the machine modeling and the resulting optimization? When pressing to produce efficient and effective optimization, we must consider the liabilities and costs of the inputs selected for use in optimization. This assessment must be conducted through the lens of the most vulnerable and most marginalized people who could be impacted. I will go into more detail regarding the management of risk during the optimization process in the next section. At this point, just be mindful that your 1+1 = EC + AC produces your need for optimization and widens your net of potential liability.

Cost of Optimization (CO)

Before we apply optimization to the case study, it is important to have a common understanding of what optimization is. For this purpose, optimization can be defined as the process of recalibrating your inputs and outputs to better achieve your expected outcomes. With this in mind, the optimization process of your ethical risk management assessment must include the potential consequential impact on vulnerable groups as part of your cost-benefit analysis. It is only through this lens that you can truly assess the cost of optimizing your predictive model and algorithms and determine the expense of recalibration. This is the complex analytical piece of your process that can make or break your assessment of future liability.

Now that we have a uniform starting point, we can apply the optimization component of our assessment. As we weigh the actual outcomes of our models against our expected outcomes, it is crucial to analyze the necessary adjustments through a lens of potential consequential impacts on the most marginalized and vulnerable populations that may be affected. This is the only way to accurately assess the true cost of recalibrating your predictive model to achieve your desired outcomes. Though this may seem like a fairly simple process, the reality is that this is where substantial liability is created, and your cost-benefit analysis must rely heavily on the ethical foundation of your agency. In a recent lecture at Stanford University, Thomas Dimson, former Director of Engineering at Instagram, shared his insight on the volatility of optimization, noting that optimizing for one outcome or input has unintended consequences for another. In other words, the more you recalibrate and shift to attain one outcome, the more you will inevitably impact another.
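
Dimson’s observation can be illustrated with a toy two-objective score. Nothing below comes from the case study; the objective names, candidates, and numbers are entirely hypothetical, and the sketch only shows how shifting the weight toward one outcome quietly changes what the model selects and degrades the other outcome.

```python
# Toy illustration: pushing a model harder toward one outcome shifts it away from
# another. Both "objectives" and both candidates are hypothetical.

def combined_score(efficacy, safety_margin, weight_on_efficacy):
    """Weighted trade-off between two competing outcomes (each scored 0..1)."""
    return weight_on_efficacy * efficacy + (1 - weight_on_efficacy) * safety_margin

candidate_a = {"efficacy": 0.90, "safety_margin": 0.40}
candidate_b = {"efficacy": 0.70, "safety_margin": 0.85}

for w in (0.3, 0.5, 0.9):
    best = max((candidate_a, candidate_b),
               key=lambda c: combined_score(c["efficacy"], c["safety_margin"], w))
    print(f"weight on efficacy = {w}: chosen candidate = {best}")

# As the weight shifts toward efficacy, the selection flips to candidate_a and the
# safety margin the model delivers quietly drops from 0.85 to 0.40: the unintended
# consequence of recalibrating for a single outcome.
```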

Applying this component to the case study may provide a better example of the process. Consider the researchers and engineers developing the predictive model from the available data, which initially contained 8,000 possibilities. These possibilities would have included viral combinations that addressed other diseases with pathogens similar to COVID-19. Each variable combination produced an outcome that prompted analysis and narrowing, which inevitably led to the 77 remaining combinations deemed viable. For every combination that promoted a particular biological response but not another, that combination was rejected and the model recalibrated to exclude that set of variables.

You may still be uncertain about the ethical implications of optimization and the societal costs associated with it, so let’s look at a possible scenario relative to the case study. (The following scenario is strictly fictional and intended solely for application to the case study.) Hypothetically speaking, say the recalibration and optimization yielded a particular biological compound capable of suppressing the virus, COVID-19, for 60% of the population of California, meaning the resulting vaccine could protect 60 of every 100 people who either contracted or were at risk of contracting the virus. This would be a significant achievement given the current state of the global pandemic. Now imagine, within the same scenario, that the production of the vaccine would require a chemical compound found in the insulin used to treat diabetes, and that without this compound it is impossible to produce insulin. Let’s assume that this information is known to the researcher and engineer at the time of analysis, prior to optimization. Let’s further imagine that this particular compound cannot be produced at a scale large enough for both the vaccine and insulin at the same time, meaning only the vaccine or insulin may be produced. Optimization yields a viable outcome that includes this particular chemical compound.
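
Under these strictly fictional assumptions (the compound name, flags, and values below are mine, not drawn from any real data), the CO component might be screened with something as simple as the following sketch. A non-empty flag list does not answer the ethical question; it forces that question to be asked before optimization locks the compound into the model’s outputs.

```python
# Rough screen of the fictional scenario above. All values mirror the stated
# assumptions: the candidate protects ~60% of those at risk, but producing it
# consumes a compound also required to make insulin, and supply cannot cover both.

candidate = {
    "name": "hypothetical_compound_x",
    "protection_rate": 0.60,                 # 60 of every 100 people at risk protected
    "depends_on_shared_compound": True,      # same compound needed to produce insulin
    "shared_compound_substitutable": False,  # cannot be scaled for both uses at once
}

def cost_of_optimization_flags(c):
    """Return the ethical red flags this candidate raises for the CO assessment."""
    flags = []
    if c["depends_on_shared_compound"] and not c["shared_compound_substitutable"]:
        flags.append("production displaces insulin supply for diabetic patients")
    return flags

print(cost_of_optimization_flags(candidate))
# -> ['production displaces insulin supply for diabetic patients']
```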

Equipped with this knowledge, what should the researcher and engineer do? What is the true cost of optimization? Are we to produce the vaccine at the expense of being able to provide insulin for diabetic patients? This is the complex lens that must be used when assessing the cost of optimization. Who are those most impacted by the unintended consequences of optimization, and what duty of care is owed to this marginalized population? With the CO component of our calculation complete, 1+1 = EC + AC + CO, we now turn to the final and most abstract phase of our ethical risk management assessment: determining the liability of our unintended outcomes.

Liability for Unintended Outcomes

Now that we’ve explored the ethical dilemmas of optimization, we move to the assessment of liability for our unintended consequences. During the current pandemic, as reported by The Washington Post, in a desperate bid to find treatments for people sickened by the coronavirus, doctors and drug companies have launched more than 100 human experiments in the United States investigating experimental drugs. So what if the findings of your predictive model are used to justify the administration of an experimental drug or treatment? What is your liability for any negative effects, and does the consent given by a patient in distress and despair absolve your liability for their harm? These are ethical calculations that must be assessed during your risk management process. Are you responsible for the decisions made by elected officials and medical experts, who enjoy privileged immunities due to their professions, based upon the data your machine model provides? In most cases, elected officials and medical professionals are exempt or shielded from liability because of these professional privileges, but do those privileges extend to protect you and your agency?

Applying this concept to the case study, would the data scientist, data analysts, and data engineers be shielded from liability if one of the 77 viable combinations yielded a possible vaccine that, given the relaxing of federal clinical testing regulations, was fast-tracked to human trials and had adverse effects? What if the vaccine was tested on 100 patients and 50% of them had an adverse reaction and died? Would those responsible for validating the combination be responsible for their deaths? Liability is assessed through the tangible connection between causation and harm. With that said, it is crucial that your ethical risk management assessment include an analysis of the potential unethical use of your predictive data by those who have alternative motivators and influences. When data analysts and data engineers lose control of their data and its application once it is handed to partnering agencies, this transference may not absolve them from liability to those harmed by the unethical application of their data and predictive models.

Fin

To conclude, the application of an ethical risk management framework is vital to accurately assessing the liability we expose ourselves and our agencies to when operating within collaborative efforts. The 1+1=4 Ethical Methodology not only allows for an assessment of risk, but places that assessment within a lens that examines the impact upon the most marginalized who could be affected. Given the current use and misuse of data to support decisions to combat the COVID-19 pandemic, it is essential to apply effective and ethical risk management tools to mitigate future risks of liability. As the number of agencies seeking the miracle vaccine widens, so does the net of liability for those developing the predictive models steering those efforts. The 1+1=4 Methodology provides an ethical starting point for assessing the liability for your model’s unintended consequences. The development of this framework and field of research is ongoing, and I invite critical feedback for future publications and applications.