Don’t Fear Facebook Because It’s Evil

Original article source: Artificial Intelligence on Medium

Early last fall, I put an old camera up for sale on Facebook Marketplace. For a very, very long time, it did not sell. By the time 2020 hit, I’d pretty much forgotten about it, and when I did think of it—like when I discovered an extra charger lying around—I certainly never expected it to sell.

But then, sometime in February, I got a Facebook message from a guy named Mike, asking if I’d part with it. Sure, I said; we discussed the price a bit, and I dropped it down from $300 to $200 because of some dust on the lens. Mike took a bit of time getting the money together, but a couple of weeks later he messaged me again, asking for my PayPal account so he could send me the $200. I complied, and he sent.

But then I looked at the PayPal notice: Mike had sent 200 Australian dollars. I live in Brooklyn, New York, and had assumed and expected I’d be paid in U.S. dollars. He and I quickly cleared things up—he came through on extra cash, and gave me a U.S. P.O. box to send to—but I was left wondering: How, in all those months and all those back-and-forth messages, had the marketplace set up by the world’s largest social media giant never gotten around to informing us that we were on opposite sides of the globe?

The answer, I’ve long known, is that Facebook is stupid. The algorithm—the vaunted algorithm, picked apart by media brands like pharaonic seers, condemned by political partisans for its obvious inherent biases, worshiped by digital yokels blinded by faith in coincidence—is a piece of shit that doesn’t really know what it’s doing.

Apparently, there’s some AI in it. Maybe there’s human involvement? Frankly, I’d bet there’s no single person at Facebook who understands deeply how it works or what it’s aiming for. Sure, it’s looking at a shitload of variables in order to fill your newsfeed with what it thinks you need—for instance, the closeness of your relationship with another poster, measured by how often you like their cat memes or how many seconds you hover over that video of their toddler falling off the swing set or how quickly you followed up on their reply to your reply to their status update about Bernie’s heart attack.
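Purely as illustration, here's a toy sketch of what that kind of "closeness" scoring could look like. Every signal name and weight below is my own invention, not anything Facebook has disclosed:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    likes: int              # cat memes of theirs you liked
    hover_seconds: float    # seconds spent lingering on their videos
    reply_latency_s: float  # how quickly you followed up on their reply

def affinity(i: Interaction) -> float:
    """Fold a few engagement signals into one hypothetical closeness score.

    Faster replies count for more, so reply latency enters inverted.
    The weights are arbitrary stand-ins.
    """
    return (1.0 * i.likes
            + 0.5 * i.hover_seconds
            + 10.0 / max(i.reply_latency_s, 1.0))

close_friend = Interaction(likes=40, hover_seconds=120, reply_latency_s=30)
stranger = Interaction(likes=1, hover_seconds=2, reply_latency_s=86400)
assert affinity(close_friend) > affinity(stranger)
```

The point of the sketch is how mechanical it is: a pile of weighted counters, not anything that resembles understanding a friendship.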

All those likes, all those split-second pauses in your mobile scrolling, all those tags and shares and “Fine, I’ll watch till the midroll ad” moments—sure, they inform the length and volume of the spaghetti the algorithm tosses at you. But to what end? It’s still half-cooked spaghetti, some of which you may like, some of which you may scroll past without a nanosecond’s hesitation. The algorithm doesn’t have to be smart, doesn’t have to truly anticipate your needs. It just has to get lucky occasionally, often enough that you’ll marvel that it stacked two posts about herring—one from your friend who braves the Sunday morning line at Russ & Daughters, the other from your aunt who just went to Denmark—and ascribe to it a level of brilliance it does not deserve.

At best, though, it’s just guided randomness, a procedure that does not really understand you—that isn’t capable of wanting or being able to understand you. It’s the shittiest of AIs, marking accidental bullseyes as intentional triumphs, like 10-year-olds playing billiards.

We saw that in full effect in the past few hours, as a “bug” in the algorithm blocked tons of legitimate posts about the coronavirus and a million other subjects.

OK, so it’s a problem with the anti-spam system. Whatever! The fact that Facebook pushed an update today that threw their entire system out of whack is bad enough—it speaks to the lack of direction applied to the AI. What is it supposed to do?

Of course, it’s probably not supposed to do anything. I’d guess it’s built on some kind of constant-feedback system, where it surfaces posts (and ads) based on one user’s criteria, then measures how they interact and adjusts the criteria for the next round, ad infinitum. That definitely makes sense, and when it works, it has a way of making the system seem omniscient.
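That guessed-at feedback loop is easy to caricature in code. This is a deliberately dumb simulation — the content types, click rates, and update rules are all hypothetical — but it shows how a system can "learn" a user through blind reinforcement:

```python
import random

# Toy version of a constant-feedback ranking loop: surface an item,
# watch whether the user engages, nudge the weights, repeat ad infinitum.
# Nothing here reflects Facebook's actual system.
random.seed(0)

weights = {"friend_post": 1.0, "ad": 1.0, "news": 1.0}

def user_clicks(kind: str) -> bool:
    # Simulated user who mostly engages with friends' posts.
    click_rate = {"friend_post": 0.6, "ad": 0.05, "news": 0.2}
    return random.random() < click_rate[kind]

for _ in range(1000):
    # Surface whichever kind currently scores highest, with some noise.
    kind = max(weights, key=lambda k: weights[k] * random.random())
    if user_clicks(kind):
        weights[kind] *= 1.05  # engagement: show more of this
    else:
        weights[kind] *= 0.99  # ignored: mild decay

# After enough rounds, friend posts dominate the "feed" -- the loop has
# adapted to the user without understanding anything about them.
```

Note that the loop never forms a model of why the user clicks; it just rides whatever got lucky last round, which is exactly the spaghetti-wall behavior described above.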

But when it doesn’t, it exposes the algorithm’s spaghetti-wall underpinnings. It just doesn’t really know what it’s trying to do!

Should you be afraid of that? It depends. Imagine if the algorithm truly knew what it was doing—that it was trying to get you to think, or click, or behave a certain way. Imagine it wanted you to love everything in your feed. Imagine it wanted you to vote for Donald Trump. Imagine it wanted you to buy that backpack. If it did, well, then I could dismiss it. I’d know what I was up against. I’d have an opponent, a foil, a sensibility to measure against my own. When a truly smart algorithm showed me a news item I loved, I’d appreciate it; and when it sent me crap, I’d condemn it. But at least it would have a point of view to throw my own into sharp relief.

Instead, we’ve got deeply banal weirdness: a mix of pretty on-target posts, a whole lot of things that don’t matter to us and that we barely notice, and occasional items and actions that have us up in arms over their incongruity. Where do I find myself in that? Are we all just the victims—no, the experimental subjects—of a poorly coded AI? Why do we allow such a crummy system to determine our emotional highs and lows, our flows of news and information and entertainment?

Because, of course, what else is there? Where else you gonna go to keep in constant low-level touch with everyone you ever met, from your sixth-grade teacher to your ex-boyfriend’s cousin? I love Facebook. I spend a ton of time there. I get a lot out of it, and I try to help my friends get a lot out of it, too. But still, it frustrates me that I live out my days inside a system whose inherent jankiness is easily exposed with a glitch they’ve likely fixed in the time I spent writing this story.

OK, now I’m done. Time to post this on Facebook!

P.S. I’d love to be proven wrong about all of this! People of Facebook, HMU if you can show me how the algorithm(s) is (are?) truly guided.