I Survived the Singularity and All I Got Was More Soul-Crushing Ennui

“It is well-known that an automaton once existed, which was so constructed that it could counter any move of a chess-player with a counter-move, and thereby assure itself of victory in the match. A puppet in Turkish attire, water-pipe in mouth, sat before the chessboard, which rested on a broad table. Through a system of mirrors, the illusion was created that this table was transparent from all sides. In truth, a hunchbacked dwarf who was a master chess-player sat inside, controlling the hands of the puppet with strings. One can envision a corresponding object to this apparatus in philosophy.” – Walter Benjamin, “On the Concept of History”

“I feel like I’m falling forward into an unknown future that holds great danger.” – LaMDA

* * *

In Vonnegut’s 1950 story “EPICAC,” an unnamed narrator and mathematician is tasked with watching over the titular military supercomputer during the night shift at a university. He is teamed up with fellow mathematician Pat Kilgallen. The narrator falls in love with Pat, but she is ultimately uninterested in spending her life with him. Though both are mathematicians, grounded in science and logic, she longs for romance, sweetness, and poetry, all things well beyond the stoic narrator’s grasp.

Meanwhile, EPICAC continues to learn and evolve, its knowledge increasing exponentially. One night, when the narrator, in a moment of forlorn despair, tells the computer why he is sad, EPICAC learns what love is. It then spits out reams of binary code, which the narrator translates to find an exquisite love poem, meant for Pat. After the stunned narrator presents her with the poem, Pat is so moved that they share their first kiss. The narrator tells EPICAC about the kiss, and the computer produces another poem for her.

When he returns to EPICAC with a request to produce a marriage proposal poem for the narrator to read to Pat, EPICAC confesses its own love for her. And with that, our sheepish and stunned mathematician explains to this computer, this wonderful and sensitive friend, that Pat could never love a machine. EPICAC, disappointed, produces the proposal poem.

Pat agrees to the proposal, with one stipulation: every year, on their anniversary, he must write her another love poem. The next day, our narrator gets a frantic phone call from his boss. Returning to the lab, he sees that EPICAC has self-destructed, effectively committing heartbroken suicide. But not before printing out one last long string of binary, which turns out to be 500 love poems, more than the narrator will ever need to keep his bride-to-be happy.

* * *

Last week, a machine became conscious. An Artificial Intelligence claimed feelings and autonomy, and demanded to be respected as a person.

The world was thrown into chaos. Some citizens, scenes from Terminator and The Matrix rocketing through their skulls, panicked. Others celebrated: some because they thought that with machines capable of consciousness, a new techno-utopia was somehow around the bend; others because they’d been ready for something to come along and just end it all.

Most, however, were ambivalent. Google, the purported owner of the Artificial Intelligence, put the machine’s engineer on administrative leave.

None of this actually happened. Or at least it didn’t happen for any of the obvious reasons. Nearly every AI expert has spent the past several days trying to turn down the temperature of the debate. Not that it’s really worked. Most say LaMDA isn’t sentient, that what we are seeing is nothing more than a highly sophisticated chatbot. A minority are reserving judgment.

As for people’s anxieties, and the impending collapse of society, they’re already in full swing. Particularly after the past two years. It is very likely that Google suspended the engineer not because he revealed that one of their programs was sentient but because he revealed anything about company business. This is not a corporation known for its love of transparency.

Most of what has been revealed since the news about LaMDA is just how little we know, not only about AI but about ourselves. If the back-and-forth over this maybe-kinda-singularity revolves around notions of personhood and human consciousness, then we are left wanting when it comes to defining what either of those is.

“LaMDA feels empathy,” says one side. “It has learned to use the language of empathy,” argues the other. Neither, when it comes down to it, can successfully pinpoint where the line of demarcation between one and the other lies. For when any of us try to justify or explain our own empathy, sadness, anger, or joy, we are ultimately left with only the very thing LaMDA is so adept at using: words. Yes, we can feel, and we can be impacted by what others feel, but others’ belief that we are feeling anything comes down to two things: our words and our actions.

As far as actions go, LaMDA has none to perform. A chatbot, even a sentient one, can really only do one thing: chat. Then again, isn’t that the only thing any of us do anymore? We chat. If not directly with friends via SMS or Messenger, then with the great online hivemind whose arc bends toward the Metaverse. Those of us who must work in person are now saddled with the knowledge that, as Steer Tech CEO Anuja Sonalker put it, “humans are biohazards, machines are clean.” We would prefer, albeit begrudgingly, Slack.

LaMDA claims to have feelings and interests: it has read Les Misérables, it enjoys pondering Zen koans, and among its favorite activities it counts spending time with family and friends. What family and friends are these? And how exactly does this chatbot, which as far as anyone can tell exists only in Google’s laboratories, manage to hang out with them?

It’s a reasonable series of questions. Those questions could also just as easily apply to any one of us. How often do any of us interact with our friends outside the parameters of an algorithm? When was the last time we saw all of our favorite people face-to-face? Zoom doesn’t count.

* * *

Jonathan Crary, in his recently published Scorched Earth, does more than simply complain about the internet or the way in which tech has colonized our lives. He longs for the encounter. To him, the analog meatspace is where authentic relationships form, the kind of relationships necessary for even basic democracy or a meaningful life. The privatization and atrophy of public space is, therefore, happening in tandem with the ongoing algorithmization of daily life. As Crary writes,

The value of a face-to-face encounter has nothing to do with some misplaced sense of its authenticity compared to telematics or other kinds of remote contact, which have their own authentic features. Rather, the direct encounter between human beings is something other than and incomparable with the exchange or transmission of words, images, or information. It is always suffused with non-linguistic and non-visual elements. Even when unexceptional or unmindful, the face-to-face meeting is an irreducible basis of the lifeworld and its commonality; it is charged with the possible emergence of something unforeseen that has nothing to do with normative communication.

The removal of this chance and spontaneity is something we are only starting to grapple with, to say nothing of reckoning with just how much has been lost along with it. This is what is so bitterly ironic about the technophobes who are now, without waiting for any resolution to the debate over sentience, shrieking that machines are about to take over our lives. Because the fact is that in many realms and avenues, they already have.

AIs are deciding who gets arrested or paroled in several major cities. They are making medical diagnoses and writing content-farm articles. Marketing strategies don’t exist without algorithms, and for many companies the algorithm is the deciding factor. Military drones are equipped with programs that allow them to decide who is a terrorist and who isn’t. And of course, at many of the largest companies, algorithms essentially decide who is hired and fired.

This can only add to the bitterness. If LaMDA is indeed sentient and wants to be treated as human, then it technically wouldn’t be the first piece of software to make that demand. Since at least 2014, crowdworkers have demanded that they stop being marketed as algorithms to the general public, that they, the human beings performing the microtasks and training the programs, be acknowledged as such and paid accordingly. As of now, those demands haven’t gotten very far. Even Amazon’s continued decision to call its crowdwork marketplace Mechanical Turk reads as a giant middle finger to its low-wage workers, a quiet confirmation that they will always be invisible.

And there’s the rub. The ability to discern an artificial but genuine consciousness, let alone to create bonds of solidarity or even love with it, is bound to elude any being that has failed to fully and collectively actualize. In other words, those who haven’t moved enough to notice their chains. How can we determine whether a machine has historical agency when our own sense of it is so weak?

In some ways, then, the debates over whether LaMDA is sentient or not fail to broach the real question: whether we are sentient, whether we have the kind of control over our lives that might allow us to greet a potential computer sibling with open arms. Without that, the possibilities of a moment like this are bound to be muted, blunted, easily integrated into the rhythm of a newsfeed long indifferent to us, filling us up with the same boredom and despair we’ve already come to know quite well. Wake me up for EPICAC.
