Google’s Smart Compose feature, if you enable it, attempts to predict what you’re about to say next in an email based on what you’ve previously typed. One thing the feature doesn’t do, however, is suggest a gendered pronoun for the person you’re writing about.
According to a new Reuters story, “Google’s technology will not suggest gender-based pronouns because the risk is too high that its ‘Smart Compose’ technology might predict someone’s sex or gender identity incorrectly and offend users.” This is a smart move, and we applaud Google both for making it and for being willing to explain why. Simply put, the company has yet to find a way to build an AI that can accurately determine the gender of the person being discussed. Google discovered the problem in January, when Gmail product manager Paul Lambert typed “I am meeting an investor next week” and Smart Compose suggested “Do you want to meet him?” as a possible follow-up, even though the investor in question was a woman.
The article notes that Google tried several workarounds but has yet to find a machine learning solution that cleanly prevents the issue. The problem arises in part because the data sets used to train AI for natural language generation consist of billions of sentences, and those sentences encode the assumptions the AI goes on to make. If the overwhelming majority of doctors, businesspeople, and investors referenced in an NLG data set are male, the AI will learn that the pronoun to attach to those professions is “he.” The hard part, of course, is teaching the AI to recognize exceptions to the rule.
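To see why majority patterns in training data dominate, consider a deliberately crude toy sketch (this is not Google’s model, and the corpus and function names are invented for illustration): a predictor that picks a pronoun for a noun purely by counting which pronoun co-occurs with that noun most often in its training sentences.

```python
from collections import Counter

# Hypothetical toy corpus: "investor" is usually, but not always, paired with "he".
training_sentences = [
    "the investor said he would call",
    "the investor said he agreed",
    "the investor said he declined",
    "the investor said she would call",  # the minority case the model will ignore
    "the doctor said he was busy",
    "the doctor said he was late",
]

def pronoun_counts(sentences, noun):
    """Count pronouns appearing in sentences that mention `noun`."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if noun in words:
            for word in words:
                if word in ("he", "she", "they"):
                    counts[word] += 1
    return counts

def predict_pronoun(sentences, noun):
    """Pick the pronoun most often seen with `noun`; the majority always wins."""
    counts = pronoun_counts(sentences, noun)
    return counts.most_common(1)[0][0] if counts else "they"

print(predict_pronoun(training_sentences, "investor"))  # "he", despite the exception
```

Even with a counterexample sitting in the training data, the frequency-based predictor outputs “he” for every investor. Real NLG systems are vastly more sophisticated, but the underlying failure mode is the same: statistical regularities in the corpus become default assumptions about individuals.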
All of this is particularly important when you consider what Smart Compose is supposed to do: accelerate the process of responding to an email by suggesting accurate, helpful clauses or phrases. If the author has to spend time making sure they haven’t insulted someone by misgendering them, you’ve nuked the value of the tool. People have put up with computer-inserted word fails for years in the form of autocorrect, but the sheer incongruity of an autocorrect insertion often serves as an immediate signal that the phone “helped” in that specific instance. Referring to someone by the wrong gender in an email doesn’t read as an accidental, computer-inserted event. It just makes the author seem careless, at best.
Earlier this year, researchers ran a study on how people react to being forgotten by others, covering a range of lapses from a missed mutual appointment to forgetting someone’s name, shared experiences, or class year. From The Atlantic:
Ray and his team were surprised by how consistently damaging all this forgetting was. Statistical analyses of both the students’ reports and a follow-up, controlled study found that people who were forgotten felt less close to those who had forgotten them, regardless of whether the forgetter was a family member or someone they’d just met. Mercifully, the people who were forgotten were almost always eager to excuse the memory lapses: The university students, for instance, would explain away potential slights with comments like “she already met too many people in the last couple of days.” But such rationalizations only softened the blow in the end. “The good news is that this happens a lot, and people will try their best to be forgiving,” Ray says. “The bad news is that, on average, they can’t quite get there.”
These results, published in the Journal of Personality and Social Psychology, suggest that forgetting someone does indeed send the message everyone seems to fear it does: You simply weren’t interested or invested in that person enough to remember things about them. The impression might be inescapable. “It’s such a big deal to admit that you don’t remember a person,” says Laura King, a psychologist at the University of Missouri who has separately studied the social consequences of forgetting. “It’s an insult, even though it’s completely innocent and we have absolutely no desire to hurt the person’s feelings. You just told that person they’re a zero.”
Google’s decision to be honest about the shortcomings of its own algorithm, and to acknowledge that current algorithms can end up biased even when that is not the goal, is important in its own right. Silicon Valley corporations have often been all too willing to hide the limits of their algorithms and to pretend they lead solely to good outcomes for everyone involved. Nobody wants to insult a business contact because an AI tool inserted the wrong pronoun, and nobody wants to look bad because a tool meant to make writing email easier introduced a mistake someone else found offensive. By being honest about the issue and choosing not to ship the feature, Google has improved the chances that whatever solution it eventually finds (if it finds one) will be trusted.