YouTube, Like Facebook, Ignored Toxicity Warnings in Favor of ‘Engagement’
Since the 2016 election, social media companies have faced the beginnings of a reckoning for their failure to protect the security and privacy of their users and to keep their platforms from being exploited by foreign actors, scam artists, and conspiracy theorists. Facebook has been hammered with scandal after scandal, but YouTube has hardly escaped unscathed. Investigations have found troubling links between the algorithms that determine what shows up next in your video feed and the rise of extremist content online. The company has been particularly criticized for its failure to properly monitor children's programming.
A new investigation from Bloomberg shines a light on the situation and confirms that YouTube executives and employees have been aware of these problems for years. Every attempt by employees to actually fix these issues was shot down by executives. Why? Because addressing them could harm user engagement.
“Scores” of people within YouTube and Google attempted to raise the alarm about the huge amount of lies, deceit, and toxic content proliferating across the platform, the report said. A number of proposals were drawn up to address these issues. All were denied.
The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.
YouTube has only recently begun to crack down on individuals who spread false content. It recently announced it would demonetize anti-vaccination videos as a way to limit the spread of disinformation about the effectiveness and safety of vaccines. According to Micah Schaffer, who began working at YouTube in 2006 and stayed for several years, the site now permits content it never would have allowed before. Regarding the spread of anti-vaccination content on the site, Schaffer said: "We would have severely restricted them or banned them entirely. YouTube should never have allowed dangerous conspiracy theories to become such a dominant part of the platform's culture. We may have been hemorrhaging money," Schaffer continued, "but at least dogs riding skateboards never killed anyone."
But Google took tighter control of YouTube starting in 2009, and the company wanted users to spend more hours watching videos. Content creators soon realized that outrage could make that happen; studies have shown that content that makes people angry is among the most reliably viral content there is. YouTube employees were well aware that creators were deliberately manufacturing videos to game the system rather than to inform, but every proposal for a more responsible recommendation algorithm was rejected by more powerful people in the company.
YouTube, meanwhile, continues to deny that the "rabbit hole" effect exists in the first place. In a recent interview with the New York Times, Neal Mohan, YouTube's Chief Product Officer, claimed that YouTube has no interest in driving people toward more extreme content, and that extreme content does not drive higher engagement or lead to more time on site.
This version of events is impossible to square with Bloomberg's reporting. While employees tried to warn the company about what they termed "bad virality," YouTube was laser-focused on creating a new creator-payment system that would have explicitly monetized engagement, turbocharging the problem: videos that went viral on the strength of outrageous content would have been favored for profit-sharing even more than they already were.
Bloomberg's reporting is another example of how Silicon Valley has fundamentally lied about the capability, quality, and goals of the algorithms it deploys around us. Efforts to keep YouTube Kids clean by hand-curating content were rejected in favor of algorithmically chosen videos, a decision that led to disaster in 2017, when parents started waking up to what their kids were actually watching. More recently, the company had to crack down on comments after discovering that a pedophile ring was abusing content featuring minors. YouTube has started to mend its ways, with various efforts to promote quality news reporting over conspiracy theorizing, but these efforts come only after years of determinedly ignoring the problem.