New Device Translates Silent Thoughts Into Speech

A new headset could revolutionize communication for locked-in individuals, people with ALS, and anyone who has suffered an injury that makes communication more difficult. In fact, research in this area is proceeding along several different paths. It may not be much longer before real-time speech communication is possible again for people who are now either silent or confined to the use of painfully slow alternatives.
According to MIT Media Lab researcher Arnav Kapur, his new device, the AlterEgo (pictured above), wraps around the neck and detects the subtle internal movements of the larynx and vocal cords that occur when we think about speech. The device is not true telepathy or thought reading, but it can detect when you’re thinking about speaking without actually doing so.
The video below has more information on the AlterEgo, including a demonstration of the hardware being used by an individual with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease.
New technological work is being done in this area on multiple fronts. The AlterEgo is non-invasive, but it may be limited for that same reason: if individuals lack sufficient muscle control, the device may not work for them. Researchers on a different project, however, published a paper in Nature last month detailing a system that translates brain activity directly into speech. This represents a fundamentally different approach from the AlterEgo in a number of respects, not the least of which is its use of electrodes implanted in the brain rather than a wearable strapped under one’s chin.
Still, the collective work being done here is impressive. The brain-decoding team writes in Nature:
Here we designed a neural decoder that explicitly leverages kinematic and sound representations encoded in human cortical activity to synthesize audible speech. Recurrent neural networks first decoded directly recorded cortical activity into representations of articulatory movement and then transformed these representations into speech acoustics. In closed vocabulary tests, listeners could readily identify and transcribe speech synthesized from cortical activity.
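To make that two-stage pipeline concrete, here is a minimal, untrained sketch in Python/NumPy. The stage boundaries follow the paper's description (cortical activity → articulatory kinematics → speech acoustics), but everything else is an illustrative assumption: the channel and feature counts, the simple Elman-style recurrence, and the random weights are placeholders, not the authors' actual trained architecture.

```python
import numpy as np

class SimpleRNN:
    """Minimal Elman-style recurrent layer (illustrative, untrained)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_h = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))

    def __call__(self, x):
        # x: (time, n_in) -> returns (time, n_out)
        h = np.zeros(self.W_h.shape[0])
        outputs = []
        for x_t in x:
            h = np.tanh(self.W_in @ x_t + self.W_h @ h)  # recurrent state update
            outputs.append(self.W_out @ h)               # linear readout
        return np.array(outputs)

# Hypothetical dimensions: 64 recording channels, 12 articulator-kinematic
# features, 32 acoustic features per time step.
neural_to_kinematics = SimpleRNN(n_in=64, n_hidden=128, n_out=12)
kinematics_to_acoustics = SimpleRNN(n_in=12, n_hidden=128, n_out=32)

# Stand-in for directly recorded cortical activity: 100 time steps.
cortical = np.random.default_rng(1).normal(size=(100, 64))
kinematics = neural_to_kinematics(cortical)      # stage 1: neural -> movement
acoustics = kinematics_to_acoustics(kinematics)  # stage 2: movement -> sound
print(acoustics.shape)
```

The design point the quote emphasizes is the intermediate articulatory representation: rather than mapping neural signals straight to audio, the decoder first recovers the movements that would have produced the speech, which the paper reports makes the synthesis task more tractable.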
The research here is early in all cases. But work of this type clearly has the potential to let people who now rely on slow, eye-tracking or muscle-movement-based communication devices “speak” again. The more viable approaches we find to this problem, the more people we’ll be able to help.