YouTube, Like Facebook, Ignored Toxicity Warnings in Favor of ‘Engagement’


Image credit: Chris McGrath/Getty Images

Since the 2016 election, social media companies have faced the beginnings of a reckoning for their failure to protect the security and privacy of their users and to keep their platforms from being exploited by foreign actors, scam artists, and conspiracy theorists. Facebook has been hammered with scandal after scandal, but YouTube has hardly escaped unscathed. Investigations have found troubling links between the algorithms that determine what shows up next in your video feed and the rise of extremist content online. The company has been particularly criticized for its failure to properly monitor children’s programming.

A new investigation from Bloomberg shines a light on the situation and confirms that YouTube executives and employees have been aware of these problems for years. Every employee effort to actually fix them was shot down by executives. Why? Because addressing the issues could harm user engagement.

“Scores” of people within YouTube and Google attempted to raise the alarm about the huge amount of lies, deceit, and toxic content proliferating across the platform, the report said. A number of proposals were drawn up to address these issues. All were denied.

The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.

YouTube has only recently begun to crack down on individuals who spread false content. The company announced it would demonetize anti-vaccination videos as a way to limit the spread of disinformation about the effectiveness and safety of vaccines. According to Micah Schaffer, who began working at YouTube in 2006 and stayed for several years, the site now permits content it never would have allowed before. Regarding the spread of anti-vaccination content on the site, Schaffer said: “We would have severely restricted them or banned them entirely. YouTube should never have allowed dangerous conspiracy theories to become such a dominant part of the platform’s culture.” He continued: “We may have been hemorrhaging money, but at least dogs riding skateboards never killed anyone.”


But Google took tighter control of YouTube starting in 2009, and the company wanted users to spend more hours watching videos. Content creators soon realized that outrage could make that happen. Studies have shown that content that makes people angry is among the most powerful drivers of virality. YouTube employees were well aware that creators were deliberately manufacturing videos to game the system rather than to inform, but every attempt to build a more responsible algorithm was overruled by more powerful people in the company.

YouTube, meanwhile, continues to deny that the “rabbit hole” effect exists in the first place. In a recent interview with the New York Times, Neal Mohan, YouTube’s Chief Product Officer, claimed that YouTube has no interest in driving people towards more extreme content, and that extreme content does not drive higher engagement or lead to more time on site.

This version of events is impossible to square with Bloomberg’s reporting. While employees attempted to warn the company about what they termed “bad virality,” YouTube was laser-focused on creating a new creator-payment system that would have explicitly monetized engagement, turbocharging the problem. Videos that went viral based on outrageous content would have been favored for profit-sharing even more than they already are.

Bloomberg’s reporting is another example of how Silicon Valley has fundamentally lied about the capability, quality, and goals of the algorithms it deploys around us. Efforts to keep the YouTube Kids channel clean by hand-curating content were rejected in favor of algorithmically chosen videos. This led to disaster in 2017, when parents started waking up to what their kids were actually watching. More recently, the company had to crack down on comments after discovering a pedophile ring was abusing content featuring minors. The company has started to mend its ways, with various efforts to promote good news reporting over conspiracy theorizing, but these efforts come only after years of determinedly ignoring the problem.
