Scientists Turn Brain Scans Into Intelligible Speech With Neural Network

Stephen Hawking was perhaps the most famous user of “vocoder” speech synthesis hardware, but he was not alone. People all over the world are unable to speak on their own, but science may be approaching a point where they can turn their inner thoughts into speech without tedious typing or clicking. A team from the Neural Acoustic Processing Lab at Columbia University has devised an AI model that can turn brain scans into intelligible speech.
The research combines several advances in machine learning to interpret patterns of brain activity and work out what someone wants to say, even if they’re physically unable to make noise. This isn’t a mind-reading machine: the signals come from the auditory cortex, the region where your brain processes speech. So it can only decode real speech, not the so-called “imagined speech” that could hold your deepest, darkest secrets.
The technology is still very much a work in progress, more a proof of concept than something you can hook up to your head. The study used neural signals recorded from the surface of the brain during epilepsy surgery, a process called invasive electrocorticography (ECoG). The researchers, led by Nima Mesgarani, worked with epilepsy patients because they often have to undergo brain surgery that involves this kind of neurological monitoring.

The researchers recorded brain activity while the subjects listened to people recite select words, like the numbers zero through nine. This step matters because everyone produces different brain wave patterns when processing speech, so Mesgarani and the team trained a separate neural network for each patient. They had only about 30 minutes of data per patient, which limits the models’ effectiveness. The results are still impressive, though. The team fed raw ECoG recordings into the network, which generated speech through a vocoder. You can listen to a sample of the models here. There are four models, the last of which should be the most realistic.
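At its core, the pipeline described above is a per-patient regression problem: map frames of ECoG features to the acoustic parameters a vocoder needs, then let the vocoder synthesize audio. Below is a minimal PyTorch sketch of that recipe. The electrode count, layer sizes, synthetic data, and spectrogram-style targets are all illustrative assumptions, not the authors’ actual architecture or vocoder parameters.

```python
# Minimal sketch: regress acoustic features from ECoG frames, one model per
# patient. All data here is synthetic; sizes are assumptions for illustration.
import torch
import torch.nn as nn

N_FRAMES, N_ELECTRODES, N_ACOUSTIC = 3000, 128, 80  # assumed dimensions

# Stand-ins for ~30 minutes of per-patient training data: ECoG feature frames
# and time-aligned acoustic targets (e.g., spectrogram or vocoder parameters).
ecog = torch.randn(N_FRAMES, N_ELECTRODES)
target = torch.randn(N_FRAMES, N_ACOUSTIC)

# A separate network is trained for each patient, since brain wave patterns
# differ from person to person.
model = nn.Sequential(
    nn.Linear(N_ELECTRODES, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_ACOUSTIC),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(ecog), target)
    loss.backward()
    optimizer.step()

# At inference time, predicted frames would be handed to a vocoder to produce
# audible speech; that synthesis step is omitted here.
with torch.no_grad():
    predicted_acoustics = model(ecog[:10])
```

In the study, the network’s output drives a vocoder that produces the audible speech; the sketch stops at the predicted acoustic frames, since the vocoder is a separate component.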
It’s all a bit robotic, and the first few numbers are tough to make out. However, the team says that about three-quarters of the people surveyed were able to understand the vocoder output. Better neural networks need more data, and collecting custom brain wave recordings from everyone via invasive electrocorticography isn’t exactly practical. One day, researchers might find enough commonality across brains to make brain wave translation as universal as speech recognition is today. For now, this is an impressive, if impractical, first step.