You don’t have to follow up-to-the-minute reports on Silicon Valley to be aware that 2018, thus far, has been anything but smooth for the tech industry as a whole. Tesla and Uber are grappling with fatal self-driving crashes, Facebook is finally facing a long-delayed comeuppance for its willingness to throw even the most basic user privacy under the bus in pursuit of revenue, and the rise of so-called “deepfakes” suggests video may not be trustworthy for much longer. And if we take a jaunt back into 2017, we’ve got arguably the worst security breach in history.
Into this proud moment in history comes Mozilla, with its latest report on the overall health of the internet. It’s a 53-page document that touches on a huge range of topics, from cybersecurity and privacy to the cost of online access and net neutrality. And — spoiler alert — most of these topics aren’t doing all that well.
Some of the topics Mozilla discusses, like the rise of so-called “fake news,” have become intensely polarized in the United States. I’m going to ask that people try to see past that. It may be tempting to retreat into partisan bickering, but it makes any genuine engagement with the underlying problem much more difficult.
Mozilla’s report is designed to focus on five issues: privacy and security, openness, digital inclusion, web literacy, and decentralization. In many cases, these segments show profound challenges and/or negative outcomes.
The challenges of privacy and security will come as no surprise to anyone who reads ET. Although Mozilla doesn’t mention it, recent attacks like Meltdown and Spectre compromised some of the fundamental methods we use to design CPUs today. The advent of the Internet of Things may have created new opportunities for smart homes (or smart toasters), but it also expanded the attack surface of the internet. The Mirai botnet is a potent example of how attacks have expanded, and the generally low state of security in IoT devices means this particular pain point is going to get worse before it gets better. Mozilla calls for automatic software updates to deliver security fixes for IoT products, but that will only take us so far when the updates themselves could be flawed. Ultimately, we have yet to see IoT manufacturers treat security with the seriousness it deserves, and until that happens, any solution is going to be a half-measure at best.
Mozilla spends a significant amount of time discussing the sophisticated operations Russia has created to sow dissent within a number of countries. This tends to be a hot topic in the US, but again, I’d advise people to set aside their feelings about the 2016 election and look at the larger picture. The problem of fake news has existed since the dawn of time, but the question of how to build social networks that don’t promote fake material over factually accurate information is a very real one — particularly in the wake of recent findings suggesting that fake news not only spreads faster and further than real news, it spreads that way because people enjoy spreading it.
A recent study of how news propagates on Twitter found fake news outperforming the real thing regardless of topic, trends, or distribution patterns. Lies reached 1,500 people six times faster than the truth did, the biggest stories to go viral were all blatantly untrue, and humans spread lies much more often than bots do. Combine the opportunity to make money via viral content with the unprecedented reach the modern internet affords and you’ve got a nasty problem no one has solved.
Some of these effects are magnified by the sheer size of the internet giants today. A handful of companies — Google, Facebook, Amazon, Tencent, Baidu, and Alibaba — control vast social networks that account for the overwhelming majority of time spent online by hundreds of millions of people. The ugly truth is this: If you reached this story by typing “www..com” into your web browser, you’re increasingly rare. Facebook has absorbed a huge chunk of the traffic that once flowed to sites organically, which means changes to Facebook’s algorithms determine who sees what content.
The tremendous concentration of online time in the hands of a handful of companies means that any change to how those companies operate ripples across tens of millions of people — if not more. This is one reason why the problem of how falsehoods spread online is so significant. If the same companies that control the largest social networks also employ internal strategies that amplify falsehood and funnel users toward increasingly extreme content in an attempt to boost time spent on site, then that amplification is going to have a much larger impact than it would if people were simultaneously spending time on, say, MySpace, Friendster, Facebook, Bookface, and a half-dozen other networks.
In other news, social networks tend to make people more miserable, everyone’s passwords still collectively suck, governments shut down internet access more every year, the privacy terms websites use are deliberately designed to be misunderstood and/or incomplete, open data sharing is pretty flat, social media applications (not just internet access) are now often shut down as a way to silence dissent in authoritarian nations, and Chrome dominates the browser market overall on both desktop and mobile (whether this last one is bad will depend on how you feel about Chrome).
Is There Any Good News?
Yes. HTTPS is now used by default in 81 of the Top 100 web destinations, more people get online every year, the global cost of getting online continues to fall, and worldwide support for net neutrality has continued to rise.
But speaking strictly for myself, it’s hard to feel too great about these findings.
I’ve been an IT journalist for 17 years. I entered the field because I wanted to play a part in disseminating factually accurate information in a way that helped people understand the topics at hand, whether that meant hardware, software, or the general impact technology could have on their own lives. If I’ve learned anything since 2001, it’s that simply writing good articles isn’t enough. Engaging with people in good faith often isn’t enough. But what is enough? I don’t know. It often feels as if bad faith and nihilism are winning this particular fight on more fronts than I can count.
How do you explain to people that a grainy YouTube video from a no-name streamer isn’t going to provide a breakthrough analysis of [insert conspiracy theory here] when they’re invested in the idea that it is? How can you even hope to do so, when research continually suggests that people, not bots, gleefully share false content? Over the past few years, it’s become extremely clear that many people either don’t know or don’t care whether the information they’re disseminating is accurate or not.
We see less of that in tech. I don’t know if that’s because IT tends to appeal to readers who are more interested in math and science, and therefore more comfortable with the idea of objective performance measurements (debates around benchmarking tend to revolve around good versus bad tests, rather than whether benchmarking is useful at all), or if there’s another explanation. Maybe most people simply don’t get as fired up about a CPU as they might about politics, religion, or other topics. But the firewall between the topics we cover at ET and the ones covered by the more mainstream press isn’t nearly thick enough for us to avoid these trends altogether.
I think the problems Mozilla highlights are very real. I wish I knew what the solutions were.
For more, watch PCMag’s Fast Forward for a new interview with Mozilla Foundation executive director Mark Surman: