Google is one of the leading technology companies in artificial intelligence, which landed it a juicy government contract last year to work on “Project Maven.” The goal of this project was to process and catalog drone imagery, and Google’s rank-and-file workers were none too pleased. After a series of protests, Google recently announced it would end work on Maven and release guidelines for the use of artificial intelligence. Now, that document is available. Google lists seven core values for its AI research and names several applications that are off-limits.
We are still in the very early days of useful artificial intelligence, so there aren’t a lot of specifics in Google’s new guidelines. Google’s general objectives for AI include being socially beneficial, avoiding creating or reinforcing unfair bias, being built and tested for safety, being accountable to people, incorporating privacy design principles, upholding high standards of scientific excellence, and being made available for uses that accord with these principles.
So, what does all that mean? It sounds rather like a fancy way of saying “don’t be evil.” Google seems to recognize the massive potential of AI technology, so it wants to start building systems with this framework in mind. Google pledges that it will only work toward socially beneficial AI that is as close as possible to bias-free. We all know what bias in a neural network can lead to: psychopaths. Google also wants to make sure the systems it builds include feedback from people and mechanisms by which people can adjust them over time. Finally, Google intends to give you control over your AI-processed data with appropriate privacy safeguards.
Google intends to make much of its AI research publicly available. Google plans to make technologies and services available in accordance with the above principles, but it can only evaluate the likely uses of its work. It’s possible someone could build on Google’s research to make something that violates one or more of these tenets.
Google says it will not pursue any AI research that involves weapons or any other system that exists primarily to harm people. If there’s a gray area, Google plans to develop a technology only where the possible benefits far outweigh the risks. So, Google won’t build smart weapons for the government. However, the company says it will continue working with the US government and military on other technologies, such as cybersecurity, healthcare, and search and rescue.
If Project Maven has any long-term benefit, it may be that it forced Google to go on the record about how it will use AI. It’s up to consumers (and Google employees) to make sure the company adheres to these promises.