Under the Gaze of Big Mother – Slate

Shasha Léonard

An expert on machine learning responds to Yudhanjaya Wijeratne’s “The State Machine.”

The world of software has a long-held, pernicious myth that a system built from digital logic cannot have biases. A piece of code functions as an object of pure reason, devoid of emotion and all the messiness that entails. From this thesis flows an idea that has gained increasing traction in the worlds of both technology and science fiction: a perfectly rational system of governance built upon artificial intelligence. If software can’t lie, and data can’t inherently be wrong, then what could be more equitable and efficient than the rule of a machine-driven system?

In “The State Machine,” Yudhanjaya Wijeratne explores a possible future where this concept has become reality. He takes the idea of A.I. government a step further by making it highly dynamic, with regular changes to the constitution and legal framework. Given how much of our lives are now in the hands of massive software applications—communications, banking, health care—I can see large swaths of humanity choosing to live under an A.I.-based government, rather than under human politicians, in hopes of more equitable treatment under the law and less overall corruption. It could happen incrementally, as it does in this story, so we go along with it, until one day a sizable portion of the world’s population finds itself living this way. You have only to look at Facebook, which now has 2.7 billion monthly active users (more than one-third of the world!), for a very real example.

Wijeratne’s fictional State Machine seems like a fairly benevolent overlord—more Big Mother than Big Brother. It’s attuned to the emotional and physical well-being of its citizens through a distributed network of real-world extensions (the emoji robots). Some of us might call them spies, but they’re only there for our best interests. The State Machine hands out flowers at first, and stricter measures when necessary, but it tries to maintain a “delicate symbiosis between machine input and well-intentioned social campaigns, setting forth in hard code a law that people who suffer must be taken care of.” It expresses neither fascist paternalism nor the blanket kindergarten rules of a nanny state. It’s maternal—nurturing but firm, like the Giving Tree, but one that tries to maintain healthy boundaries. Doesn’t sound so bad, right?

This dream crashes into reality via a basic fact: Biases and emotions are built into software systems by the fallible and illogical people who design them. We see this on a regular basis in the world of machine intelligence today. Whether the system uses supervised learning (where humans label examples of right and wrong for the machine) or unsupervised learning (where the machine finds patterns on its own, with humans judging when the results are good enough), the information provided to the machine must pass through a living gatekeeper. Sometimes that means the data fed to the machine is incomplete, as with recent problems in facial recognition. Sometimes it’s erroneous or lacks context, like the fun examples by Janelle Shane. Sometimes it’s a product of what that human expects or desires to see as a conclusion, as with racial biases reinforced in policing and lending. Artificial intelligence (much like fiction) reveals to us the inherent truths of our lives, not some greater glimpse of reality.
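To make the gatekeeper problem concrete, here is a minimal toy sketch (my own illustration, not from the story or from any real lending system; the group names, scores, and decisions are hypothetical). A “model” learns approval rules from historical lending decisions by memorizing the majority outcome for each group—and because the history itself is biased, the bias comes out the other side as cold, confident output.

```python
# Toy sketch: a "model" trained on biased historical lending decisions.
# All data here is invented for illustration.
from collections import defaultdict

# Historical decisions encode a human bias: applicants with identical
# credit scores received different outcomes depending on their group.
history = [
    ("group_a", 700, "approve"), ("group_a", 650, "approve"),
    ("group_b", 700, "deny"),    ("group_b", 650, "deny"),
    ("group_a", 600, "deny"),    ("group_b", 750, "approve"),
]

def train(rows):
    """'Learn' by taking the majority outcome per (group, score >= 650)."""
    votes = defaultdict(list)
    for group, score, outcome in rows:
        votes[(group, score >= 650)].append(outcome)
    return {key: max(set(v), key=v.count) for key, v in votes.items()}

model = train(history)

# Two applicants with the same qualifying score get different answers,
# because the bias in the training data is now baked into the model.
print(model[("group_a", True)])  # approve
print(model[("group_b", True)])  # deny
```

Nothing in the code lies, and no single line is “wrong”—the inequity lives entirely in the data the human gatekeeper supplied.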

So what of the later stages of the State Machine as posited in this story? What happens if the software reaches a state of self-sustaining complexity such that humans only think they’re in the loop? Will that solve the problem of systematic bias? I would argue not. People, being the perverse creatures of willfulness that we are, would find a way to game the system. The historian protagonist of this story isn’t that type of person, but I’m sure the society he inhabits will have some. Those are the ones who will frustrate “Big Mother” and drive it to apply suboptimal or even harmful amendments to its laws. We’d end up facing the same problems we have with democracy today: insufficient protection for minority rights, susceptibility to demagogues, and lack of long-term thinking, to name a few.

One of the greatest challenges of raising a child is maintaining a balance between authority and compassion. Every parent exists somewhere on this spectrum, but I’m not sure there’s an optimal location—not for one parent and one child, not even for one given moment. Being human, we’re always searching for a perfect and guaranteed solution to life, but as inhabitants of a massively chaotic world, we’re never going to find it, and we don’t like to think of ourselves as predictable anyway. I think the State Machine acknowledges this when it references its sensitivity to initial conditions. A sufficiently self-aware entity, whether human or artificial, is going to recognize its limitations. In this story, the State Machine realizes that it isn’t omniscient, that it won’t always make the best choices—which raises the question: Why is it a preferable way to govern ourselves?

An artificial intelligence that can truly understand our behavior will be no better than us at dealing with humanity’s challenges. It’s not God in the machine. It’s just another flawed entity, doing its best with a given set of goals and circumstances. Right now we treat A.I.s like children, teaching them right from wrong. It could be that one day they’ll leapfrog us, and the children will become the parents. Most likely, our relationship with them will be as fraught as any intergenerational one. But what happens if parents never age, never grow senile, and never make room for new life? No matter how benevolent the caretaker, won’t that create a stagnant society?

Growing up means taking responsibility for ourselves, including ownership of our mistakes. Big Mother will have to allow us to manage our own lives at some point—otherwise we risk smothering ourselves into a state of perpetual immaturity. We might need a machine intelligence to help us attain the adulthood of our civilization, but as with Wittgenstein’s proverbial ladder, we’ll need to throw it away in order to embark on the next stage of our growth.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.