Scientists Turn Brain Scans Into Intelligible Speech With Neural Network

Stephen Hawking was perhaps the most famous user of “vocoder” speech synthesis hardware, but he was not alone. People all over the world are unable to speak on their own, but science may be approaching a point where they can turn their inner thoughts into speech without tedious typing or clicking. A team from the Neural Acoustic Processing Lab at Columbia University has devised an AI model that can turn brain scans into intelligible speech.

The research combines several advances in machine learning to interpret patterns of brain activity and determine what someone wants to say, even if they're physically unable to make a sound. This isn't a mind-reading machine: the signals come from the auditory cortex, where the brain processes speech it hears. So, it decodes responses to real speech, not so-called "imagined speech" that could hold your deepest, darkest secrets.

The technology is still very much a work in progress; more a proof of concept than something you can hook up to your head. The study used neural signals recorded from the surface of the brain during epilepsy surgery, a process called invasive electrocorticography (ECoG). The researchers, led by Nima Mesgarani, used epilepsy patients because they often have to undergo brain surgery that involves neurological testing.

The researchers recorded brain activity while the subjects listened to speakers reciting isolated words, such as the digits zero through nine. This matters because everyone's brain activity patterns differ when processing speech, so Mesgarani and the team trained a separate neural network for each patient. They had only about 30 minutes of data per patient, which limits the models' effectiveness. The results are still impressive, though. The team fed in the raw ECoG recordings, and the network generated speech with a vocoder. You can listen to a sample of the models here. There are four models, the last of which should be the most realistic.

It’s all a bit robotic, and the first few numbers are tough to make out. However, the team says that about three-quarters of people surveyed were able to understand the vocoder output. To make better neural networks, you need more data. Collecting custom brain wave data from everyone using invasive electrocorticography isn’t exactly practical. One day, we might find some commonality that makes brain wave translation universal like speech recognition. But for now, this is an impressive if impractical first step.
