No One Wants to Talk About How Completely We Were Lied to

This is an op-ed and represents the opinion of the author.

Two days ago, the New York Times published a comprehensive story on the degree to which Facebook fought all attempts to investigate its role in spreading Russian hoaxes and lies, both as they pertained to the 2016 election and well beyond. The story opens with Sheryl Sandberg, Facebook’s COO, angrily accusing then-security chief Alex Stamos of throwing the company “under the bus.” His crime? Being honest with Facebook’s board about the fact that the company had failed to contain, or even understand, the deliberately inflammatory and false content shared on its service by agents of a foreign power in a calculated effort to divide the American people. This took place in September 2017, almost a year after the election and long after credible reports had surfaced about the role social media played in disseminating actual fake news. The story’s portrait of Sandberg and Mark Zuckerberg does not improve in later paragraphs.

The NYT story makes clear that multiple top Facebook executives fought to prevent disclosure of the company’s problems to the public. They fought to prevent internal investigation of events, fearing that documented knowledge of what had happened would expose them to legal liability. The company sponsored legislation favored by the GOP in hopes of winning approval on Capitol Hill, and it asked Senate Minority Leader Chuck Schumer (D-NY), whose daughter is a Facebook employee, to intervene on its behalf.

We know why it did. Facebook fought like hell to avoid being held accountable for its actions because the company has abjectly failed to provide anything like security or privacy to its hundreds of millions of users. It recklessly shared data with “trusted partners,” then did nothing to hold those partners accountable for how they used the information, because acknowledging the breach of consumer trust would have made Facebook look bad. Zuckerberg’s testimony to Congress and his pledges to improve are so trite at this point that entire articles have been devoted to how he’s been making literally the same promises for over a decade. As problems mounted, Sandberg went on the warpath, including contracting with a GOP opposition research firm that sought to link activists protesting Facebook’s behavior to George Soros, a despicable exploitation of antisemitic dog whistling. Zuckerberg’s defense? He didn’t know about it.

Facebook’s overriding concern, once it realized it had been used as an instrument of Russian propaganda, was to cover its own ass. That its cowardice is unsurprising is a testament to how low the bar has been set for acceptable corporate behavior. As despicable as the cowardice is, it’s the lying that sticks in my craw.

The Lies of Silicon Valley

There were a lot of lies, to be clear. Most of them have to do with algorithms and the promise algorithms held for creating not just growth but good growth, in every sense of the word. The trend predates Facebook, and the blame for it cannot be laid solely at Facebook’s door, but Facebook exemplifies its most noxious outcomes. In speech after speech, Mark Zuckerberg has declared that Facebook’s overriding goal is to foster community and closeness, to bring people together, to connect them. That sounds fundamentally good and wholesome when considered in a personal context.

But that’s not what Facebook is today. And when confronted with that reality over the past two years, Mark Zuckerberg and his fellow executives repeatedly refused to face it. As of August 2017, before Facebook moved to deemphasize news in its News Feed, 67 percent of Americans reported getting at least some news from social media, with 47 percent saying they “often” or “sometimes” got news this way. Yet Facebook had no policy on how to handle disinformation campaigns until 2017 at the earliest, according to the NYT, because no one at the company had bothered to consider the question. This is consistent with Zuckerberg’s repeated insistence that Facebook was a platform, not a publisher, even as its reach grew to dwarf the largest publishing companies and it absorbed much of the traffic once directed to their sites. The fish rots from the head down.

It should also be noted that Facebook has no problem claiming to be a publisher when that position suits its legal interests. It only abdicates its responsibilities when they threaten to require some degree of actual work.

The social media website most fundamentally devoted to sharing never bothered to consider the ethical and moral ramifications of what its users might share. It never stopped to ask whether the same mechanics demagogues and dictators have long exploited to gin up hatred and fear might be applied to its own platform. It was happy to chase user engagement metrics for Wall Street but had neither the time nor, apparently, a scrap of funding to devote to building a platform that amplified accurate information. And at the same time it was gleefully pitching its dedication to sharing and connectedness, positioning those ideals as fundamental to the improvement of humanity, it was handing over the personal and private data of its users, selling ads in flagrant violation of federal law, and serving as an extension of Vladimir Putin’s fucking foreign policy.

The rot and the lies were not unique to Facebook. There’s good evidence that YouTube’s algorithms actively encourage radicalized viewpoints by pushing increasingly extreme content at viewers, regardless of the topic in question. Sociologist Zeynep Tufekci tested this theory with a variety of subjects, including Donald Trump, Hillary Clinton, vegetarianism, and exercise. In every case, the same pattern emerged. Watching videos on a topic led to recommendations for ever more extreme videos on that topic:

Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons. It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

As Tufekci notes, Google is unlikely to be deliberately radicalizing YouTube viewers. I don’t believe Facebook was, either. The explanation is simpler: these companies want people to keep clicking, keep watching, keep spending time on the site. What’s the best way to do that? Serve them something more exciting or incendiary than what they started with. YouTube, Facebook, Twitter: these sites haven’t simply added to online discourse; they’ve fundamentally reshaped it, both in terms of where it happens and how it plays out. The most depressing thing about the 2016 election, to me personally, wasn’t the victor (or the defeated candidate). It was the number of people who thought a badly shot, unsourced YouTube video from whatever meatsack they personally favored constituted proof of evil activities carried out by either Donald Trump or Hillary Clinton.
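
The dynamic is easy to reproduce in miniature. Here is a deliberately crude sketch in Python; the engagement curve, the numbers, and both functions are invented for illustration and describe no real recommender. It shows that if viewers respond most strongly to content a notch more intense than whatever they just watched, a system that greedily maximizes expected watch time will escalate all on its own:

```python
# Toy model of engagement-driven escalation. Every number and function
# here is an illustrative assumption, not a description of any real
# recommender system.

def expected_watch_time(candidate: int, last: int) -> float:
    """Assumed engagement curve: content slightly more intense than the
    previous item (a step of 0 to 20 on a 0-100 scale) is the most
    engaging; big jumps or steps backward engage less."""
    step = candidate - last
    if 0 <= step <= 20:
        return 1.0 + step / 100
    return max(0.0, 1.0 - abs(step) / 100)

def recommend(catalog: list[int], last: int) -> int:
    """Greedy recommender: pick whichever item the engagement model
    scores highest, with no regard for anything else."""
    return max(catalog, key=lambda c: expected_watch_time(c, last))

catalog = list(range(101))  # content ranked from mild (0) to extreme (100)

intensity = 10  # the viewer starts with something mild
for view in range(8):
    intensity = recommend(catalog, intensity)
    print(f"video {view + 1}: intensity {intensity}")
# Intensity climbs 30, 50, 70, 90, 100 and then stays pinned at the most
# extreme content in the catalog, even though no part of the system was
# designed to radicalize anyone.
```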

We Already Knew Enough to Warrant Caution

There is an argument that lifts the burden of blame from Facebook’s shoulders: that these outcomes were unpredictable or unknowable. This is untrue. The study of propaganda campaigns and how they spread is decades old. In The Coming of the Third Reich, the first volume of his seminal trilogy, historian Richard J. Evans spends no small amount of time discussing how the Nazi Party largely created the modern propaganda machine. These techniques were further refined by the Soviets, the Chinese, and, yes, the United States. While the amount of research focused on the intersection of social media and propaganda has exploded since 2016, there were forerunners, like the Computational Propaganda Project, which set out in 2012 to analyze how algorithms, automation, and computational propaganda shape public life.

The internet was not the first revolution in human communication, nor the first technological invention to usher in widespread social change. The printing press, invented around 1439, is widely accepted as a key factor in the Protestant Reformation that began in 1517. Radio and television transformed public and civic life, and not always for the better, as the work of Evans and others shows. No, Facebook had no way of knowing the exact particulars of what it might unleash, but more than enough information existed to show that the company ought to behave cautiously. It did not. It was easier to push growth than to ask where that growth was coming from and what it might cost.

The Necessary Difficulty of Hard Decisions

Facebook’s decision to focus on growth rather than on difficult questions of content curation raises thorny First Amendment questions, but the company was never acting in a value-neutral manner. By choosing to take no action, either during its early boom or as it displaced much of the traditional news media, it chose to advance a set of values in which truthful, accurate reporting was easily replaced by flagrant displays of bullshit. The platform was candy to anyone seeking to earn a buck with no regard for the truth of the information they peddled.

Here is where many conservatives may object. The idea of a single company with as much power as Facebook deciding what content is true or false fills many people with existential dread. I share that concern. But Facebook’s refusal to be honest about the kinds of content it allowed to spread across its network, and its failure to be honest with either its users or itself about the ways its own platform had been exploited, prevented that discussion from ever taking place. There has never been an honest accounting of the many and various ways algorithms can be used to destroy people’s lives or warp their perceptions of the truth, in no small part because Silicon Valley corporations have fought like hell to prevent anyone from understanding just how badly they’ve collectively screwed things up. Now, instead of discussing how best to protect social media platforms from disinformation threats before they materialize, we’re collectively attempting to solve the problem after colossal damage has been done.

Growth was easier. I don’t doubt it. Life is always easier when you ignore your moral and social obligations.

I’m not going to pretend that Facebook wouldn’t have faced a lot of difficult, pointed questions about how to balance open expression against some requirement to represent reality. Answering those questions in ways that didn’t turn the platform into a premier venue for lies of every sort might have curtailed its growth in the early days. But this is not the first time a new and emerging platform has had to deal with the problem of fake news. The history of the press in the United States has a great deal to say about how to balance competing political viewpoints against the responsibility to be accurate. These are problems others have grappled with before. Facebook, in its cowardice and arrogance, refused to recognize the responsibility of the position it had seized in American life.

The Burden of Deceit Lies Upon the Liar

There is a third argument I want to briefly touch on — the idea that users “should have known better.” There was, in fact, some warning of how poor a steward Facebook would be. The company steadfastly refused to consider itself a publisher or to accept the rules that publishers are forced to adopt when deciding what content they will run. Its repeated privacy failures and rampant data collection were well known. The company has been embroiled in more controversies related to privacy than I have space to review.

But “should have known better” has its own legal and moral limits. Should users have known that Facebook’s general disregard for privacy extended to not bothering to enforce its own rules on how user data was shared? Should users have known that Facebook’s decision to regard itself as a social platform extended to having absolutely no plan for how foreign actors might exploit it to spread lies and hoaxes? The legal and moral burden of lying does not fall upon the victim. Facebook, an organization of tens of thousands of people commanding hundreds of billions of dollars in wealth, had every resource available to the modern world with which to understand the position it had created for itself. The company did not simply refuse to consider these questions; it chose, at every opportunity, to run from them as hard and fast as possible, dodging any question of social, moral, or civic responsibility in favor of profit.

We have been lied to, collectively, by Silicon Valley. Whether you personally believed the lies has no bearing on the moral responsibility of the people who told them. We were told that algorithms, stickiness, and connectedness were innate, intrinsic goods that would lead to better outcomes and improved understanding for everyone. We were told that the people in charge of these companies had our best interests at heart and that the platforms they created were social goods that could change the world for the better. In many cases, these claims were tied to the myths humans love to tell themselves about computers and AI: namely, that such systems can escape the biases and subjective opinions of the people who build them.

If that were the sole lesson here, Facebook would be a cautionary tale of hubris. But there is no way to read the NYT’s latest, combined with the company’s refusal to grapple with the impact of its own behavior in scandal after scandal, as anything but a fundamental breach of civic trust and basic moral judgment.
