Delete your account

It’s a “great shame,” says Silicon Valley insider Jaron Lanier, that so much of big tech’s AI has been aimed at manipulating you.

Critiquing social media these days is a bit like shooting fish in a barrel. The almost mindless narcissism is an easy target, as is users’ susceptibility to fake news, whether it’s coming out of Russia or Fox News. And after Facebook’s Cambridge Analytica scandal, even the most quotidian social media users—the ones who gleefully post photo updates, funny memes, or mild political rants—are starting to rethink their relationship with these platforms, with many (1 in 10 Americans, according to a Techpinions survey) deleting their accounts in protest.

But precious few have considered their relationship with social media—or sought account deletion—with the seriousness that Jaron Lanier has. The virtual reality pioneer, musician, and author has been around Silicon Valley for much of his 58 years, and has consulted for a number of its giants (he is currently a researcher affiliated with Microsoft). He’s also spent much of that time worrying, loudly and eloquently, about the risks of the things those companies make.

His new book, Ten Arguments for Deleting Your Social Media Accounts Right Now, examines how a technology designed to bring people together (remember Mark Zuckerberg’s ongoing dream of “connecting” the world) has instead helped tear apart humanity’s delicate social fabric. People, he argues, are becoming angrier, less empathetic, more isolated yet tribal, and sadder, crazier even. With every post and scroll, users feed a system built to influence behavior, in a sort of reward feedback loop. And as the 2016 elections demonstrated, the same system that’s used to sell you deodorant online can also be hijacked to wreak havoc on your political system. Lanier, who hasn’t been on social media for years, now likes to refer to Facebook and Google as “behavior modification empires.”

“How did we get here, and how did we end up creating this mass surveillance system and applying principles of psychology to manipulating people all the time, when what we set out to do was create a more open society for the benefit of everybody?” Lanier says in a phone call, sounding slightly deflated. “How did this thing go so wrong?”

Lanier says he has long been interested in how the internet could be used to control people. Back in 1995, he published an article titled “Agents of Alienation,” arguing that “agents”—which is what AI bots were being called at the time—would get to know people by hanging out with them, so to speak, figuring everything out and delivering custom content to the user.

“If info-consumers see the world through agent’s eyes, then advertising will transform into the art of controlling agents, through bribing, hacking, whatever,” Lanier wrote, presciently. “You can imagine an arms race between armor-plated agents and hacker-laden ad agencies. Lovely. . . . An agent’s model of what you are interested in will be a cartoon model, and you will see a cartoon version of the world through the agent’s eyes.”

HOW IT ALL WENT WRONG
Lanier began noticing Silicon Valley’s dabbling in behavior modification around 1992. In a sense, it was a problem with the web from very early on.

As Lanier recalls, when Tim Berners-Lee proposed the World Wide Web—HTML pages sitting atop the raw internet—the thing that set it apart from similar systems then under consideration was that it had no universal back-links (two-way links). If someone created a link on an HTML page, the linked-to page had no way of knowing it was being linked to at all.

“Therefore, the whole design kind of shifted away from selling stuff online, because there wasn’t any way to know who to pay, and instead suggested this alternate ad model,” says Lanier. “From the very beginning there was a lot of talk about maybe having these programs then that would direct people to things. In those days, instead of bots they were called agents, which would create content for people and direct people here and there.”

In retrospect, he wonders if the internet’s design was made too minimalist—leaving too much for entrepreneurs to fill in—when it should have provided a way to represent individual people, to support transactions, to offer users some basic data storage, and to provide history and authentication functions.

“Maybe some of that stuff actually should have been in there from the start rather than leaving it to commerce,” says Lanier, as it created “monopolies that have to figure out a business plan, and this business plan has been this very awkward and destructive one.”

Years later, for instance, one Stanford-born company would capitalize on the web’s minimalist architecture—and specifically an inability to tell what was linked to what—to build a powerful search engine, and an empire. “You can say, ‘Oh, I’m on this page, what pages point to me?'” Scott Hassan, the little-known third cofounder of Google, tells Adam Fisher in his new oral history of Silicon Valley, Valley of Genius. “Right? So Larry [Page, another cofounder] wanted a way to then go backward to see who was linking to whom. He wants to surf the whole net backward. . . . So what Larry did was he started writing a web crawler.”
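The asymmetry Lanier describes, and the crawler Hassan recalls, can be sketched in a few lines. This toy Python example (the page graph is invented, standing in for real crawled HTML) shows why “who links to me?” can’t be read off any single page: the web stores only forward links, so a crawler has to fetch pages and invert the link graph itself.

```python
# The web only records forward links. To answer "what pages point to me?",
# a crawler must visit pages and build the reverse index itself.

# Invented stand-in for the link structure a crawler would discover:
forward_links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}

def invert(graph):
    """Build a back-link index: for each page, the pages that point to it."""
    backlinks = {}
    for page, outgoing in graph.items():
        for target in outgoing:
            backlinks.setdefault(target, []).append(page)
    return backlinks

print(invert(forward_links))
# c.com turns out to be linked from both a.com and b.com — information
# that no single HTML page carries on its own.
```

Counting those incoming links per page is the raw material for ranking schemes like the one Page and Hassan went on to build.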

The business plan that now dominates much of the web is, as Lanier sees it, a conflict between two different ideologies that Silicon Valley entrepreneurs and developers love. On the one hand, it is an almost socialist sensibility that holds that everything should be totally open and shared, like Wikipedia and free software. On the other hand, there’s an admiration for beloved tech entrepreneurs and walled-garden builders like Steve Jobs. The compromise between these two ideologies is the ad-based model. The Google founders, for instance, were opposed to this model when they built their first search engine, but they later relented on the basis, they claimed, that ads could be relevant and helpful to people.

Lanier thinks this arrangement gave us the best and worst of both worlds, where on the surface it seems like everyone is being open and sharing, but in the background a “hidden machine” is running that makes money off of manipulating and playing with people who are being open. He cites internet gambling sites as pioneers in behavior modification, like real-life slot machine makers before them. That said, other companies, including video game developers and pornography sites, have also dabbled in behavior modification to make addicts of people—a power that has since been refined by social media platforms and their machine learning algorithms.

“As that thing evolved, and everything about it got more sophisticated, it turned into this weird era of surveillance and manipulation where we sort of don’t trust anything anymore, and everything is crazy and cranky,” Lanier observes. “I think that’s how we ended up here.”

Still, the system is far from perfect. As Lanier points out, the effects Google, Facebook, and others achieve are still small—barely better than random. Facebook can glean from posts and likes, for instance, that a user leans Democrat or Republican, and cares about particular issues like the environment or gun rights, but many of the portraits that our data dossiers paint are cartoonish at best.

“[The results] are cumulative, so they have been able to destroy our trust in elections and that sort of stuff,” says Lanier. “The damage is very real, but I’d say it’s premature to say that anyone has mastered [the social media manipulation machine]. It’s still kind of crude.”

That crudeness is ironic, given Google’s accomplishments in artificial intelligence. DeepMind, the company’s London-based AI subsidiary, has designed algorithms famous for mastering the game of Go, among other intricate challenges. Of course, those lofty achievements sit alongside more humdrum use cases, like the recommendation engines that queue up the next funny cat video.

“One of the things that saddens me is that probably the majority of applications of AI and machine learning is the manipulation of people for this kind of stuff,” Lanier laments. “And that’s a great shame.”

Still, with ever-increasing hordes of data, recommendation engines and other psychologically tuned algorithms have advanced in dramatic and sometimes strange ways.

“If you look at some of the most heavily viewed YouTube videos, a lot of them were designed for very little kids, and they will watch them over and over again,” says Lanier. “And the feed will often drag the kids into endless variations of the videos, and each one will have 35 million views. The problem is the feeds will gradually pull in other things like weird stuff that will be more manipulative, perverse, or dark, and what does that all do to kids? Honestly, we don’t know.”

Lanier’s referring to reports last year about a horde of bizarre YouTube videos aimed at children, sometimes with simulated violence and off-color content. Some titles feature innocuous words and phrases like “learn colors” and “Halloween for kids”—keywords that make it more likely that the videos pop up in searches and the algorithmic Recommended feed.

While these videos illustrate behavior modification, Lanier is more interested in small, pervasive, controlled changes to a population. The manipulation of political beliefs on Facebook troubles him most.

“If you can make just a small percentage of people reliably cynical during an election to have a small change on the vote, and you can do that consistently, you can disrupt the society in a negative way,” Lanier says. Microtargeting users with misinformation may not necessarily influence an individual user’s behavior, but in the aggregate, over a large enough sampling, the right kind of message sent to the right kind of users—the kind prone to share those messages—could have subtle but lasting impacts, especially during a tight election.

“And that’s essentially what’s happening in the world. But that’s different from potentially worse things” that could happen, he adds.
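The arithmetic behind Lanier’s worry is simple. This back-of-the-envelope sketch uses invented numbers—a 50.5% lead, a 2% reliable shift, none of them from the article—to show how a tiny, consistent per-person effect decides a close race:

```python
# Back-of-the-envelope illustration (numbers invented): in a near 50/50
# electorate, reliably shifting even 2% of voters flips the outcome.
electorate = 1_000_000
base_share = 0.505        # candidate A starts with 50.5% support
shift = 0.02              # 2% of voters reliably nudged away from A

votes_a = electorate * (base_share - shift)
votes_b = electorate - votes_a
print(votes_a < votes_b)  # True: a marginal per-person nudge decides the race
```

No individual voter’s mind needs to be changed with certainty; a small, consistent bias across a large enough population is sufficient, which is what makes the aggregate effect so hard to see and so troubling to Lanier.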

Like many others, Lanier was appalled in 2014 when Facebook researchers described manipulating the news feeds of nearly 700,000 users in order to better understand their emotions. Conducted without informed consent, the experiment broke basic ethical rules regarding tests on human subjects, leading Facebook to formulate new controls for its research. But last year, the company came under fire again after an internal document suggested the platform could detect teenage users’ emotional states in order to better target ads at users who feel “insecure,” “anxious,” or “worthless.” (The company responded that it does not do this, and that the document was provisional.)

Even without microtargeting or psychographics, social media can have still more appalling effects. Facebook has been widely criticized for the way that fake news and hate speech on its platforms have led to violence, especially in places with few mainstream news sources to counter misinformation. In India, false rumors about child kidnappers spread on Facebook’s WhatsApp have been implicated in two dozen mob killings. In Myanmar and Sri Lanka, Facebook posts have been blamed by UN observers and police for dozens of fatal mob beatings and arsons.

Lanier doesn’t deny the positive effects of social media, like activism efforts (see the #Enough walkouts organized by high schoolers advocating for gun control) or using it to stay in touch with friends. But he also thinks there is ample evidence—to the point where it should be beyond debate, in his mind—that social media’s net effect is more negative than positive.

“For the individual person it might be more positive than negative,” he says. “There are some lucky people who get some more good stuff than bad stuff, and good for them.”

GETTING TO KNOW YOURSELF
Lanier won’t argue that...
