Last year will likely be remembered for many things — political turmoil, disastrous hurricanes, and the advent of cheap solar power among them. But the most enduring milestone may be that it was the year software finally became smarter than us. The programs we use are no longer simple servants, faithfully executing our commands. Instead, we’ve entered the era of apps that can program us.
Thanks to recent advances in AI reading comprehension, our phones are vastly more knowledgeable. That wouldn’t necessarily pose a problem, except that our phones’ motivations are no longer in line with our own. They’ve become instruments of advertisers — and this has fundamentally altered the dynamic between user and software.
We, the users of software, have grown lazy. Accustomed to programs doing our bidding, we’ve become complacent in our superiority. Sure, we may have chuckled when the software learned a new trick, like catching misspelled words on the fly or predicting our next search query. But those programs were fundamentally less intelligent than us, and had no motivations of their own — or so went the reasoning. Neither statement is true any longer. Machine learning has allowed computers to discover strategic behavior that exceeds our own, and in-app advertising has fundamentally altered the role of software as servant.
The benefits of possessing such a genie in a gadget may seem enormous, but the dangers that accompany such powers are equally real. And I’m not talking about privacy concerns, which are sufficient to fill entire articles on their own. The real concern is addiction. If you’ve ever been tempted to bring your phone with you to the bathroom and continue the infinite scroll through a social media feed, then you’re probably already aware of this danger. Using software in 2018 requires a different mindset than it did when we sat down in front of our Word documents on a PC. We need an “inoculation guide” to ensure we remain in the driver’s seat of our software.
The first step is realizing we’re reinforcement learners. Our dopamine system is fundamentally tuned to discover rewards within patterns of stimuli. It’s why we find slot machines so addictive: The search for patterns within the spinning wheels activates primal reward circuitry for avoiding cheetahs and cracking open nuts. Unfortunately, try as we might, the equations that govern slot machines are fundamentally more complex than what our brains can puzzle out. This makes them dangerous; when we can’t discover a pattern, we usually don’t just give up. Instead, we get addicted. Like the proverbial deer in the headlights, we become victims of our own biology.
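To make the slot-machine mechanism concrete, here’s a minimal Python sketch of a variable-ratio reward schedule — the payout scheme slot machines use. The function name, payout probability, and pull count are my own illustration, not drawn from any real machine:

```python
import random

def variable_ratio_rewards(pulls, p=0.1, seed=42):
    """Simulate a variable-ratio schedule: each pull pays out
    independently with probability p, so the gap between rewards
    is unpredictable (geometrically distributed)."""
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(pulls):
        since_last += 1
        if rng.random() < p:       # a payout occurs on this pull
            gaps.append(since_last)
            since_last = 0
    return gaps

gaps = variable_ratio_rewards(10_000)
mean_gap = sum(gaps) / len(gaps)
# The average gap hovers near 1/p = 10 pulls, but individual gaps
# vary wildly, from back-to-back payouts to long droughts.
print(mean_gap, min(gaps), max(gaps))
```

The average payout rate is perfectly stable, but no individual gap is predictable — and it’s that unpredictability, not the average, that keeps the brain’s pattern-hunting circuitry engaged.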
What’s true of slot machines is true of many forms of software as well, most notably social media. The capability of software to deliver intermittent reward signals via “app alerts” and other notifications means it can function exactly like a slot machine, tricking the user into searching for a pattern they’ll never discover. This is a recipe for addiction, and most of us with a phone are already far down this road. It’s the inevitable outcome of software whose profit model is in-app advertising. A former Facebook VP has publicly attested that the social media giant exploits “dopamine-driven feedback loops.”
Such software upends the principal-agent dynamic we’d grown accustomed to when using programs like Word and Excel. The principal-agent dilemma comes from economics, and refers to the gap that can exist between a person’s interests and the motivations of someone acting on their behalf. If I hire a realtor to sell my house, but that realtor has fundamentally different interests than my own — perhaps preferring a quicker sale at a lower price than I’d want — then a principal-agent dilemma exists. Previously, software was just a tool, like a hammer, and its interests aligned directly with the user’s. But when a piece of software makes money through advertising, it serves two masters: the user, and the people paying for ad space. The interests of these parties do not always align; you want to drop into Facebook quickly to check your messages, but the Facebook app wants you to click on a piece of advertising. Thus, a principal-agent dilemma now exists.
Not a problem, you say; you’re far smarter than those algorithms trying to trick you into falling for some piece of trivial clickbait. You can tune them out, right? The people who designed the algorithms know this. Like supermarket architects, they realize that if they can keep you in the store long enough, you’re bound to slip up and buy something you don’t need. That’s why they put the milk and eggs in the back: the extra walking turns a quick errand into a longer trip. Similarly, if software can keep you looking at it long enough, you’re bound to stumble at some point and click on the advertising. It’s a statistical certainty. Addiction is no longer an accident; it’s being engineered, and the in-app advertising profit model makes it all but inevitable.
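That “statistical certainty” is just the arithmetic of repeated independent exposures. A rough sketch — the per-view click probability below is a number I’ve invented purely for illustration:

```python
def p_at_least_one_click(p_per_view, views):
    """Probability of clicking at least once across independent ad views."""
    return 1 - (1 - p_per_view) ** views

# Even if you resist 99.9% of ads, heavy exposure erodes the odds:
print(p_at_least_one_click(0.001, 100))    # ~0.095
print(p_at_least_one_click(0.001, 1000))   # ~0.63
print(p_at_least_one_click(0.001, 5000))   # ~0.99
```

The designer doesn’t need you to be gullible; they only need you to keep looking. Time on screen, not persuasiveness per ad, is what drives the probability toward one.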
To make matters worse, the algorithms are getting smarter at an exponential rate, while humans remain more or less the same. Sure, you may consider yourself a genius, but can you learn the game of chess in just four hours and then demolish the world’s leading chess engine across 100 games without losing a single one? A new kind of AI just did that. The same kind of AI can also beat Go champions, conquer Atari video games, and savage the world’s best poker players. This type of super-algorithm has yet to be applied to advertising. But when it is (and there is every reason to believe it will be, since an advertising company is developing it), what chance do you think you’ll have of outsmarting it?
Parents of small children need to be doubly cautious. No self-respecting guardian would sit their infant down in front of a slot machine, yet children are especially susceptible to software that triggers the brain’s dopamine reward system. This includes anything that delivers intermittent reinforcement signals, including Facebook and even Gmail if certain notification features are activated. (Facebook does say you must be at least 13 years old to sign up for a regular account, though it has already unveiled a Facebook Messenger Kids service for ages 6-12.)
There are two additional steps you can take to inoculate yourself against the trespasses of overly intelligent software. The first is to disable in-app notifications wherever the program offers that option. This ensures the app can’t serve up a patterned stream of mostly meaningless “alerts” designed to activate your dopamine system and keep you coming back.
Instead, allocate a block of time each day to use the app, and don’t spend a minute longer on it than necessary. The good news is that this will benefit you in other ways: Since there’s a cost to switching tasks, every time you break from a given activity to check a Facebook alert, you have to take a few minutes to find your place again in the work you were doing. These “task switching costs” can add up, and before you know it, the whole day is spent thrashing about between different software alerts. Now’s as good a time as any to eliminate them.
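Some back-of-the-envelope arithmetic shows how quickly those task-switching costs compound. The alert count and refocus time below are hypothetical, chosen only to illustrate the math:

```python
# Hypothetical figures: 25 alerts a day, ~2.5 minutes to regain
# focus after each interruption.
alerts_per_day = 25
refocus_minutes = 2.5

daily = alerts_per_day * refocus_minutes   # minutes lost per day
yearly_hours = daily * 365 / 60            # hours lost per year

print(daily)                # 62.5 minutes/day
print(round(yearly_hours))  # ~380 hours/year
```

Over an hour a day, and the equivalent of more than two months of full-time work a year — spent just finding your place again.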
The second thing you can do is pay to go premium whenever possible, thereby disabling the advertising in apps you use regularly, like Pandora or YouTube. This at least ensures the software is serving you, and not a third-party advertiser. Alternatively, you can use an ad blocker. Just bear in mind that ad blockers don’t have the same monetary incentives to keep innovating as the companies doing the advertising. Thus, ad blockers are always fighting a rear-guard action against better-equipped adversaries.
While these recommendations may serve you well, I don’t expect them to be useful for long. An AI recently bested humans at the Stanford reading comprehension test. It may not be long before an AI reads this, takes into account the strategies above, and discovers even more sophisticated ways of manipulating your behavior. The only long-term solution may be to unplug and adopt the strategy of Silicon Valley’s own tech elite: relocate yourself to a remote “off-grid” island to wait out the AI apocalypse.