Google Researchers Just Made Computers Sound Much More Like People

A team of researchers at Google has found a way to dramatically improve computer-generated speech, giving it far more natural cadence and intonation. It’s a step towards the kind of sophisticated speech synthesis that has, to date, existed entirely within the realm of science fiction.

Computers, even when they speak, do not sound human. Even in science fiction, where such constraints need not exist, computers, androids, and robots commonly use stilted grammar and inaccurate pronunciation, or speak in harsh, mechanical tones. In TV shows and movies where artificial life forms speak naturally (the advanced Cylon models in the 2004 Battlestar Galactica reboot, for example), that very capability is often used to underscore the threat the artificial life forms represent. The ability to speak naturally is often treated as a vital component of humanity: mechanical life forms in Star Trek: The Next Generation and its various spin-offs almost always speak with mannerisms intended to convey their artificiality, even when their intentions are perfectly benign.

In the real world, programs like Dr. Sbaitso were often the first introduction computer users had to text-to-speech technology. You can hear what Creative Labs’ text-to-speech technology sounded like below, circa 1990.

Modern technology has dramatically improved on this, but assistants like Alexa, Cortana, Google Assistant, and Siri would never be mistaken for a human save in very specific cases. A significant part of the reason we can tell when a computer is speaking rather than a person is its (mis)use of prosody. Prosody is the pattern of intonation, tone, rhythm, and stress within a language.

There’s an old joke about the importance of commas that compares two simple sentences to make its point: “It’s time to eat Grandma” conveys a rather different meaning than “It’s time to eat, Grandma.” In this case, the comma conveys information about how the sentence should be pronounced and interpreted. Not all prosodic information is encoded in punctuation or grammar, however, and teaching computers how to interpret and use this data has been a major stumbling block. Now, researchers across multiple Google teams have found a way to encode prosody information into the Tacotron text-to-speech (TTS) system.

We can’t embed Google’s speech samples directly, unfortunately, but it’s worth visiting the page to hear how the new information impacts pronunciation and diction. Here’s how Google describes this work:

We augment the Tacotron architecture with an additional prosody encoder that computes a low-dimensional embedding from a clip of human speech (the reference audio). This embedding captures characteristics of the audio that are independent of phonetic information and idiosyncratic speaker traits — these are attributes like stress, intonation, and timing. At inference time, we can use this embedding to perform prosody transfer, generating speech in the voice of a completely different speaker, but exhibiting the prosody of the reference. The embedding can also transfer fine time-aligned prosody from one phrase to a slightly different phrase, though this technique works best when the reference and target phrases are similar in length and structure.
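
To make that description a little more concrete, here is a minimal, illustrative sketch of the general idea in Python (PyTorch): a small reference encoder squeezes a mel-spectrogram clip into a fixed-size embedding, which is then broadcast across the text encoding and concatenated onto it before it reaches the decoder. The module names, dimensions, and the GRU-based encoder are assumptions for illustration only, not Google’s actual Tacotron architecture.

    import torch
    import torch.nn as nn

    class ProsodyEncoder(nn.Module):
        """Toy reference encoder: compress a mel-spectrogram clip into a
        fixed-size prosody embedding (dimensions are illustrative)."""

        def __init__(self, n_mels=80, embedding_dim=128):
            super().__init__()
            self.rnn = nn.GRU(input_size=n_mels, hidden_size=embedding_dim,
                              batch_first=True)

        def forward(self, reference_mel):
            # reference_mel: (batch, time, n_mels) taken from the reference audio
            _, hidden = self.rnn(reference_mel)
            return hidden.squeeze(0)  # (batch, embedding_dim) prosody embedding

    def condition_text_encoding(text_encoding, prosody_embedding):
        # Broadcast the prosody embedding across every text timestep and
        # concatenate it onto the text encoder output before decoding.
        expanded = prosody_embedding.unsqueeze(1).expand(-1, text_encoding.size(1), -1)
        return torch.cat([text_encoding, expanded], dim=-1)

    # Example shapes: a short reference clip and a short text encoding.
    reference_mel = torch.randn(1, 300, 80)
    text_encoding = torch.randn(1, 50, 256)
    embedding = ProsodyEncoder()(reference_mel)
    conditioned = condition_text_encoding(text_encoding, embedding)  # (1, 50, 384)

Because the embedding is computed from audio rather than text, it can in principle come from one speaker and be applied to synthesis in another speaker’s voice, which is the “prosody transfer” the researchers describe.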

There are samples and clips you can play to hear how Tacotron handles various tasks. The researchers note they can transfer prosody even when the reference audio uses an accent not in Tacotron’s training data. Even more importantly, they’ve found a way to model what they call latent “factors” of speech, allowing the prosody of a speech clip to be represented without requiring a reference audio clip. This expanded model can push Tacotron toward specific speaking styles, making statements sound happy, angry, or sad.
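
One common way to implement that kind of latent-factor model is with a small learned bank of “style” embeddings: a weighted combination of the entries stands in for a speaking style, so a style can either be inferred from a reference clip or dialed in directly with no reference audio at all. The sketch below is a toy illustration of that mechanism under those assumptions; the token count, dimensions, and method names are hypothetical, not Google’s implementation.

    import torch
    import torch.nn as nn

    class StyleTokenLayer(nn.Module):
        """Toy bank of learned 'style' embeddings; a weighted combination of
        the tokens stands in for a speaking style (sizes are illustrative)."""

        def __init__(self, num_tokens=10, token_dim=128):
            super().__init__()
            self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim))
            self.query_proj = nn.Linear(token_dim, token_dim)

        def forward(self, prosody_embedding):
            # Attend over the token bank using a reference-derived prosody embedding.
            query = self.query_proj(prosody_embedding)      # (batch, token_dim)
            weights = torch.softmax(query @ self.tokens.T, dim=-1)
            return weights @ self.tokens                    # (batch, token_dim)

        def style_from_weights(self, weights):
            # At inference, pick or blend styles directly; no reference clip needed.
            return torch.softmax(weights, dim=-1) @ self.tokens

    # Selecting a style by weighting one token heavily, with no reference audio.
    layer = StyleTokenLayer()
    manual_weights = torch.tensor([[0.0, 5.0] + [0.0] * 8])
    style_embedding = layer.style_from_weights(manual_weights)  # (1, 128)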

None of the clips sound completely human — there’s still a degree of artificiality to the underlying presentation — but they’re a substantial improvement on what’s come before. Maybe the next Elder Scrolls game won’t have to feature the same eight voice actors in approximately 40,000 different roles.
