You pick things up so often throughout the day that the act seems simple, but it’s actually the end result of a network of nerves, tendons, and muscles you’ve honed all your life. Building a robot that can pick things up with the same reliability has proven difficult, and even small changes can leave a carefully designed robot hand all thumbs. A company called OpenAI says it has developed a robot hand that grips objects in a more human-like way, and it didn’t have to be taught by humans — it learned all on its own.
For your entire life, your brain has been learning how to pick up different objects. On a conscious level, there’s no difference between picking up a wooden block or an apple. You just do it. Translating human movements directly into machine instructions would be unnecessarily complicated, so OpenAI decided to skip the human element altogether. It let a robot hand try and fail over and over in a simulation until it gradually learned how to pick up various objects.
The simulated robot hand didn’t have to operate in real time, so researchers were able to simulate roughly 100 years of trial and error in about 50 hours. It took some serious computing hardware to make that happen: 6,144 CPU cores and 8 GPUs powered the learning phase. OpenAI calls this system Dactyl, and it has moved beyond the simulation.
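The core idea of learning by trial and error can be illustrated with a toy sketch. Everything below is illustrative: the `grasp_score` function, the `TARGET_GRIP` vector, and the random-search hill climbing are stand-ins invented for this example, not OpenAI's actual setup, which used large-scale reinforcement learning across thousands of parallel simulations.

```python
import random

# Toy stand-in for a grasp simulator: a "policy" is a vector of joint
# angles, and a hidden target grip defines success. All names here are
# hypothetical; Dactyl's real training used reinforcement learning,
# not this simple search.
TARGET_GRIP = [0.3, -0.7, 0.5, 0.1]

def grasp_score(policy):
    """Higher is better; 0 is a perfect grasp in this toy model."""
    return -sum((p - t) ** 2 for p, t in zip(policy, TARGET_GRIP))

def trial_and_error(trials=5000, step=0.1, seed=0):
    """Random-search hill climbing: perturb the policy, keep what helps."""
    rng = random.Random(seed)
    policy = [0.0] * len(TARGET_GRIP)
    best = grasp_score(policy)
    for _ in range(trials):
        candidate = [p + rng.uniform(-step, step) for p in policy]
        score = grasp_score(candidate)
        if score > best:  # keep the change only if it improves the grasp
            policy, best = candidate, score
    return policy, best

policy, score = trial_and_error()
print(round(score, 4))  # approaches 0 after many trials
```

Because each simulated trial is cheap and independent, thousands of them can run in parallel, which is how decades of simulated practice fit into a couple of days of wall-clock time.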
With Dactyl turned loose on a physical robot hand, it’s capable of remarkably human-like movements. Something we take for granted, like spinning an object around to look at the other side, is tedious for most robots. Dactyl can do it with ease, but it has advanced hardware to help. The Shadow Dexterous Hand has 24 degrees of freedom compared with seven for most robot arms. The robot knows the position of each finger, and there’s a feed of three camera angles to help it orient the object.
Importantly, this system isn’t stuck with a single type of object. It can grip and manipulate anything that fits in its hand. This is called “generalization,” and it’s an essential aspect of robotics as we integrate machines into our lives. You don’t want to have to train a robot to do every single thing it might need to do in a day. Ideally, it should be able to figure something out if it’s similar to a task it’s already performed. For example, if your robot butler can pour your orange juice in the morning, it should be able to pour you a scotch in the evening without being taught each drink separately.
Dactyl isn’t going to pour you any drinks quite yet, but maybe someday.
Nvidia Created a Face Swapper for Pets That Learns From Just a Few Examples
Most AI that manipulates or morphs images requires a large amount of training data to serve as a foundation for its abilities. Nvidia found a way to train a model with only one input image of a dog and a single example of another animal.