New Device Translates Silent Thoughts Into Speech

A new headset could revolutionize communication for locked-in individuals, people with ALS, and anyone who has suffered an injury that makes communication more difficult. In fact, research in this area is proceeding along several different paths. It may not be much longer before real-time speech communication is possible again for people who are now either silent or confined to the use of painfully slow alternatives.

According to MIT Media Lab researcher Arnav Kapur, his new device, the AlterEgo, wraps around the jaw and neck and detects the micro-movements we make internally with the larynx and vocal cords when we think about speech. The device is not a true telepathic or thought-reading piece of equipment, but it does detect when you're thinking about speaking without actually doing so.

The video below has more information on the AlterEgo, as well as a demonstration of the hardware in action. It includes a demo of the hardware being used by an individual with amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease.

There’s new technological work being done in this area on multiple fronts. The AlterEgo device is non-invasive, but may also be limited in what it can achieve for this reason; if individuals don’t have enough muscle control or enough muscles, the AlterEgo might not be able to work. Researchers in a different project published a paper in Nature last month, however, detailing a system that translates brain activity directly into speech. This represents a fundamentally different approach from the AlterEgo in a number of respects, not the least of which is using electrodes implanted into the brain as opposed to a wearable strapped under one’s chin.

Still, the collective work being done here is impressive. The brain-decoding team writes in Nature:

Here we designed a neural decoder that explicitly leverages kinematic and sound representations encoded in human cortical activity to synthesize audible speech. Recurrent neural networks first decoded directly recorded cortical activity into representations of articulatory movement and then transformed these representations into speech acoustics. In closed vocabulary tests, listeners could readily identify and transcribe speech synthesized from cortical activity.
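The passage above describes a two-stage pipeline: a recurrent network first maps recorded cortical activity to articulatory kinematics (movements of the lips, tongue, and larynx), and a second stage maps those kinematics to acoustic features. The toy sketch below illustrates that data flow only; the dimensions, random stand-in weights, and function names are all illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper): cortical
# channels, articulatory features, acoustic features, recurrent state size.
N_NEURAL, N_ARTIC, N_ACOUSTIC, N_HIDDEN = 64, 12, 32, 16

# Stage 1: a minimal recurrent decoder from neural activity to
# articulatory kinematics. Random weights stand in for a trained model.
W_in = rng.normal(scale=0.1, size=(N_HIDDEN, N_NEURAL))
W_rec = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))
W_artic = rng.normal(scale=0.1, size=(N_ARTIC, N_HIDDEN))

# Stage 2: a second mapping from articulatory features to acoustics
# (a trained system would use another recurrent network here).
W_acoustic = rng.normal(scale=0.1, size=(N_ACOUSTIC, N_ARTIC))

def decode(neural_seq):
    """Run the two-stage pipeline over a (time, channels) recording."""
    h = np.zeros(N_HIDDEN)
    acoustics = []
    for x in neural_seq:
        h = np.tanh(W_in @ x + W_rec @ h)    # recurrent state update
        artic = W_artic @ h                  # stage 1: kinematics
        acoustics.append(W_acoustic @ artic) # stage 2: acoustic features
    return np.stack(acoustics)

T = 100  # timesteps of simulated cortical activity
out = decode(rng.normal(size=(T, N_NEURAL)))
print(out.shape)  # one acoustic feature vector per timestep: (100, 32)
```

The key design point the paper emphasizes is the intermediate articulatory representation: rather than mapping brain activity straight to sound, the decoder passes through the physical movements of speech, which are more directly encoded in the recorded cortical areas.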

In all cases, this research is still at an early stage. But work of this type clearly has the potential to let people who currently rely on slow eye-tracking or muscle-movement-based communication devices "speak" again. The more viable approaches we find to this problem, the more people we'll be able to help.
