What does it mean to be alive?
It’s a simple question, but isn’t it really the most indecipherable?
Quickening. That’s an old, old word. I heard the phrase “the quick and the dead” in church as a child. It comes from a 16th-century prayerbook.
Does “alive” mean “conscious”? It’s beginning to seem so. Once again there seems to be a kind of sea-change going on. We’re coming to understand that the animals our 20th-century view cast as machine-like automatons are actually beings, not so very unlike us. We’re once again coming to sense that living beings in general are conscious beings.
But what does that mean? What is consciousness? Nobody really knows. All we really know is that we have it, and that possibly — probably — all of nature has it. And I venture to say that in our guts (“souls” was a word I’d once have used) we know that computers really do not. Despite Dennett and Kurzweil.
I believe there are really very few, not even the creator gods themselves when you get down to it, who truly believe that a silicon-chip-powered brain can be conscious in the same way natural life is conscious. From what I’ve read, there are some who believe that silicon may someday attain “a type of autonomy,” but not that it will be the same as the wet-life phenomenon we all know inside ourselves as “being conscious.” Their workaround for continuing onward to create artificial life, and for convincing us that it’s a good thing, is to insist that future silicon life won’t need consciousness: that “distributed comprehension” will be enough, that intelligence can do quite well on its own without consciousness. This is what Harari refers to as “decoupling.”
There are countless debates and schools of thought: philosophers mostly, but also neuroscientists and computer geeks, who spend entire careers warring over this stuff. But when it comes down to it, we’re no further along than we were back when Descartes, and then Pascal, made their pronouncements about it. The modern philosopher Thomas Nagel also took a stab at it, in a 1974 paper that seems to be endlessly cited. I ran into references to it again and again while I was conscientiously reading about all this, so I dug it up and read it.
And it’s pretty hard to understand. I took away these bits:
Nagel says he doesn’t “know what it is like for a bat to be a bat”, but that “if I try to imagine this, I am restricted to the resources of my own mind,” which are “inadequate to the task.” In what seemed to me to be typical philosophical gyrations, he says that “an organism has conscious mental states if and only if there is something that it is like to be that organism — something it is like for the organism.” That passage is quoted in nearly everything I read about Nagel and his bat paper.
He seems to conclude that the “subjective character of experience” is “not analyzable in terms of any explanatory system of functional states, or intentional states,” or, really, in terms of anything at all.
We are left with — ourselves. That’s my takeaway.
In 1865, poet Walt Whitman wrote,
When I heard the learn’d astronomer,
When the proofs, the figures, were ranged in columns before me,
When I was shown the charts and diagrams, to add, divide, and measure them,
When I sitting heard the astronomer where he lectured with much applause in the lecture-room,
How soon unaccountable I became tired and sick,
Till rising and gliding out I wander’d off by myself,
In the mystical moist night-air, and from time to time,
Look’d up in perfect silence at the stars.
— When I Heard the Learn’d Astronomer, by Walt Whitman
Nagel’s bat paper didn’t really move anything forward for me. I kept thinking that it mattered mainly as a big volley in this interminable debate, this war, among philosophers, but wasn’t of much use in the real world.