New Device Translates Silent Thoughts Into Speech

A new headset could revolutionize communication for locked-in individuals, people with ALS, and anyone who has suffered an injury that makes communication more difficult. In fact, research in this area is proceeding along several different paths. It may not be much longer before real-time speech communication is possible again for people who are now either silent or confined to the use of painfully slow alternatives.

According to MIT Media Lab researcher Arnav Kapur, his new device, the AlterEgo, wraps around the neck and detects the micro-movements we make internally with the larynx and vocal cords when we think about speech. The device is not a true telepathic or thought-reading piece of equipment, but it can detect when you're thinking about speaking without actually doing so.

A video demonstration shows the AlterEgo hardware in action, including its use by an individual with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease.

There’s new technological work being done in this area on multiple fronts. The AlterEgo device is non-invasive, but may also be limited in what it can achieve for this reason; if individuals don’t have enough muscle control or enough muscles, the AlterEgo might not be able to work. Researchers in a different project published a paper in Nature last month, however, detailing a system that translates brain activity directly into speech. This represents a fundamentally different approach from the AlterEgo in a number of respects, not the least of which is using electrodes implanted into the brain as opposed to a wearable strapped under one’s chin.

Still, the collective work being done here is impressive. The team behind the Nature paper writes:

Here we designed a neural decoder that explicitly leverages kinematic and sound representations encoded in human cortical activity to synthesize audible speech. Recurrent neural networks first decoded directly recorded cortical activity into representations of articulatory movement and then transformed these representations into speech acoustics. In closed vocabulary tests, listeners could readily identify and transcribe speech synthesized from cortical activity.
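The pipeline the authors describe has two stages: a recurrent network first maps recorded cortical activity to a representation of articulator movement, and a second stage maps those movements to acoustic features. The toy sketch below illustrates that two-stage structure only; the network sizes, feature dimensions, and the simple Elman-style recurrence are all placeholders (the study used bidirectional LSTMs trained on real recordings), and the weights here are random and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration; the study's actual feature
# sizes differ.
N_NEURAL = 64      # channels of recorded cortical activity per time step
N_KINEMATIC = 12   # articulator-movement features (lips, tongue, jaw, larynx)
N_ACOUSTIC = 32    # speech-acoustic features per time step

class SimpleRNN:
    """Minimal Elman-style recurrent layer, a stand-in for the paper's
    bidirectional LSTM stacks."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))

    def __call__(self, sequence):
        h = np.zeros(self.W_rec.shape[0])
        outputs = []
        for x in sequence:                      # one time step at a time
            h = np.tanh(self.W_in @ x + self.W_rec @ h)
            outputs.append(self.W_out @ h)
        return np.array(outputs)

# Stage 1: cortical activity -> articulatory-movement representation
neural_to_kinematics = SimpleRNN(N_NEURAL, 48, N_KINEMATIC)
# Stage 2: articulatory movement -> speech acoustics
kinematics_to_acoustics = SimpleRNN(N_KINEMATIC, 48, N_ACOUSTIC)

def decode(neural_recording):
    """Run the two-stage pipeline on one recording (time x channels)."""
    kinematics = neural_to_kinematics(neural_recording)
    return kinematics_to_acoustics(kinematics)

# 100 time steps of random stand-in "cortical activity"
recording = rng.normal(size=(100, N_NEURAL))
acoustics = decode(recording)
print(acoustics.shape)  # one acoustic feature vector per time step
```

The key design point the quote emphasizes is the intermediate kinematic representation: rather than decoding sound directly from neural signals, the system first recovers how the vocal tract is moving, which the authors found carries much of the relevant structure.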

The technological research being done here is early, in all cases. But it certainly seems as though work of this type has the potential to let people who now rely on slow eye-tracking or muscle-movement-based communication devices "speak" again. The more viable approaches we can find to this problem, the more people we'll be able to help.
