New Device Translates Silent Thoughts Into Speech

A new headset could revolutionize communication for locked-in individuals, people with ALS, and anyone who has suffered an injury that makes communication more difficult. In fact, research in this area is proceeding along several different paths. It may not be much longer before real-time speech communication is possible again for people who are now either silent or confined to the use of painfully slow alternatives.
According to MIT Media Lab researcher Arnav Kapur, his new device, the AlterEgo, wraps around the neck and detects the micro-movements we make internally with the larynx and vocal cords when we think about speech. The AlterEgo is not a telepathic or thought-reading device, but it can detect when you're internally articulating words without actually speaking them.
A video demonstration shows the AlterEgo hardware in action, including its use by an individual with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease.
Work in this area is advancing on multiple fronts. The AlterEgo device is non-invasive, but it may be limited for that very reason: if an individual lacks sufficient muscle control, or the relevant muscles themselves, the AlterEgo may not work. A different research team, however, published a paper in Nature last month detailing a system that translates brain activity directly into speech. This is a fundamentally different approach from the AlterEgo in a number of respects, not least in using electrodes implanted in the brain rather than a wearable strapped under one's chin.
Still, the collective work being done here is impressive. The brain-decoding team writes in Nature:
Here we designed a neural decoder that explicitly leverages kinematic and sound representations encoded in human cortical activity to synthesize audible speech. Recurrent neural networks first decoded directly recorded cortical activity into representations of articulatory movement and then transformed these representations into speech acoustics. In closed vocabulary tests, listeners could readily identify and transcribe speech synthesized from cortical activity.
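The pipeline the authors describe has two stages: one recurrent network maps recorded cortical activity to articulatory-movement representations, and a second maps those representations to acoustic features. The following is a minimal illustrative sketch of that two-stage structure in NumPy; the dimensions, hidden sizes, and the simple Elman-style RNN are assumptions for illustration, not the architecture from the paper (which used trained bidirectional LSTMs on real ECoG data).

```python
import numpy as np

# Hypothetical dimensions -- illustrative only, not from the paper.
N_ELECTRODES = 256   # cortical (ECoG) channels
N_KINEMATIC = 33     # articulatory-movement features
N_ACOUSTIC = 32      # acoustic features per frame (e.g. mel bins)

rng = np.random.default_rng(0)

def make_rnn(n_in, n_hidden, n_out):
    """Return randomly initialized parameters for a minimal Elman RNN."""
    s = 1.0 / np.sqrt(n_hidden)
    return {
        "W_in": rng.normal(0, s, (n_hidden, n_in)),
        "W_h": rng.normal(0, s, (n_hidden, n_hidden)),
        "W_out": rng.normal(0, s, (n_out, n_hidden)),
    }

def run_rnn(params, xs):
    """Run the RNN over a sequence, producing one output vector per step."""
    h = np.zeros(params["W_h"].shape[0])
    ys = []
    for x in xs:
        h = np.tanh(params["W_in"] @ x + params["W_h"] @ h)
        ys.append(params["W_out"] @ h)
    return np.array(ys)

# Stage 1: cortical activity -> articulatory kinematics.
stage1 = make_rnn(N_ELECTRODES, 64, N_KINEMATIC)
# Stage 2: articulatory kinematics -> speech acoustics.
stage2 = make_rnn(N_KINEMATIC, 64, N_ACOUSTIC)

# Fake recording: 100 time steps of cortical activity.
cortical = rng.normal(size=(100, N_ELECTRODES))
kinematics = run_rnn(stage1, cortical)
acoustics = run_rnn(stage2, kinematics)
print(kinematics.shape, acoustics.shape)  # (100, 33) (100, 32)
```

The key design point the abstract highlights is the intermediate articulatory representation: rather than decoding sound directly from brain activity, the system first recovers how the vocal tract is moving, then synthesizes audio from those movements.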
The research here is early in all cases. But work of this type clearly has the potential to let people who struggle with slow, eye-tracking- or muscle-movement-based communication devices "speak" again. The more viable approaches we can find to this problem, the more of the people who suffer from it we'll be able to help.