When one considers the types of equipment shipped up to the International Space Station, there’s a certain list of default equipment that makes sense. Spare parts? Check. Food and medical supplies? Check. Some manner of entertainment options? Check. Eleven-pound robot head? Not so much. But IBM has visions of how such hardware might be useful, and it’s developed such a device — the Crew Interactive Mobile Companion (CIMON for short).
Meet Wilson, Watson, Cimon
Cimon is technically powered by what IBM calls “Watson” technology, but it has a unique mission on the ISS. It will work with German astronaut Alexander Gerst to run some crystal experiments, solve a Rubik’s Cube, and conduct a “complex medical experiment” using Cimon as a flying camera.
All of this sounds fairly rote, but some of the other functions are more interesting. Cimon is also intended to serve as a colleague to on-board astronauts, including working through prescribed checklists in coordination with its “teammates.” There’s also talk of Cimon serving as a safety improvement by warning of impending failures before astronauts would spot them on a control board.
Here’s how IBM describes Cimon’s ability to learn:
AI gives the space assistant text, speech and image processing capabilities, as well as the ability to retrieve specific information and findings. These skills, which can be trained individually and deepened in the context of a given assignment, are developed based on the principle of understanding – reasoning – learning.
Watson speech and vision technologies helped train CIMON to recognize Alexander Gerst, using voice samples and images of Gerst, as well as “non-Gerst” images. It also used the Watson Visual Recognition service to learn the construction plans of the Columbus module on the International Space Station to be able to easily move around. CIMON also learned all the procedures to help carry out the on-board experiments. Experiments sometimes consist of more than 100 different steps, and CIMON knows them all.
Cimon is a long way off from demonstrating what experts refer to as “strong” AI — progress in that field hasn’t really budged, despite the widespread adoption of AI as a marketing term — but we think there’s real potential for advancement here.
One area where AI advances could truly revolutionize human capabilities is in the exploration of the solar system. Human exploration of objects beyond the moon is a difficult problem for many reasons, but the need to protect and preserve human life across an interplanetary journey lasting months to years is one of the most significant issues. Most of NASA’s greatest exploration breakthroughs have been delivered by satellites and rovers controlled from Earth, but these systems have clear limits. Communications have to traverse at least three “hops”: data travels from the vehicle to Earth, a command goes from Earth back to the vehicle, and a confirmation then returns from the vehicle to Earth before NASA or one of the other space agencies knows that a specific action has completed successfully.
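To get a feel for why those hops matter, here's a rough back-of-the-envelope sketch of light-speed delay to Mars. The distances are approximate published figures for Mars at its closest and farthest approach (not from the article), and the three-hop multiplier simply follows the article's description:

```python
# Why command-and-confirm loops are slow beyond the Moon: even at light
# speed, each hop to Mars takes minutes.

SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cross the given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

# Mars ranges from roughly 54.6 million km (closest approach) to
# about 401 million km (farthest) from Earth.
for label, km in [("Mars (closest)", 54.6e6), ("Mars (farthest)", 401e6)]:
    one_way = one_way_delay_minutes(km)
    print(f"{label}: one-way ~{one_way:.1f} min, three hops ~{3 * one_way:.1f} min")
```

Even in the best case, a three-hop exchange burns the better part of ten minutes; near the worst case it's over an hour — which is exactly why on-board autonomy is attractive.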
The idea of a rover or satellite as intelligent as, say, R2-D2, is still entirely within the realm of science fiction. But it’s not crazy to imagine a successor to Curiosity that’s equipped with the sensors and tools it needs to reach its own conclusions about which rock faces might be optimal for drilling, or which patches of ground might yield revealing soil samples. Initially, of course, these conclusions would be checked and triple-checked by scientists on the ground. But over time, as the probe proved itself, it might be possible to support the mission with fewer personnel, allowing for other projects to be pursued.
Meanwhile, the inevitable adjustments made to each probe once it began its exploration could be folded back into improvements to future probes. While some improvements would obviously be mission-specific, changes to instrument data integration or core adjustments to boost reliability and stability over the long term could drive a virtuous cycle over time.
That’s an awful lot of weight to put on the initial deployment of a single robot head, but it’s not a crazy idea. Maybe in 15-20 years we’ll be fielding space probes with self-directed exploration capabilities, accompanied by modest human oversight.