Google’s AI Manifesto: Accountability, Privacy, and No Killer Robots

Google is one of the leading technology companies in artificial intelligence, which landed it a juicy government contract last year to work on “Project Maven.” The goal of this project was to process and catalog drone imagery, and Google’s rank-and-file workers were none too pleased. After a series of protests, Google recently announced it would end work on Maven and release guidelines for its use of artificial intelligence. Now, that document is available. Google lays out seven core values for its AI research and names several applications that are off-limits.

We are still in the very early days of useful artificial intelligence, so there aren’t a lot of specifics in Google’s new guidelines. Google’s general objectives for AI include being socially beneficial, avoiding creating or reinforcing unfair bias, being built and tested for safety, being accountable to people, incorporating privacy design principles, upholding high standards of scientific excellence, and being made available for uses that accord with these principles.

So, what does all that mean? It sounds rather like a fancy way to say “don’t be evil.” Google seems to recognize the massive potential of AI technology, so it wants to start building systems with this framework in mind. Google pledges that it will only work toward socially beneficial AI that is as close as possible to bias-free. We all know what bias can lead to in a neural network: psychopaths. Google also wants to make sure the systems it builds include feedback from people and mechanisms by which people can adjust them over time. Google also intends to give you control over your AI-processed data with appropriate privacy safeguards.

Google CEO Sundar Pichai demos the Duplex AI phone assistant.

Google intends to make much of its AI research publicly available. Google plans to make technologies and services available in accordance with the above principles, but it can only evaluate the likely uses of its work. It’s possible someone could build on Google’s research to make something that violates one or more of these tenets.

Google says it will not pursue any AI research that involves weapons or any other system that exists primarily to harm people. If there’s a gray area, Google plans to develop a technology only when the possible benefits far outweigh the risks. So, Google won’t build smart weapons for the government. However, the company says it will continue working with the US government and military on other technologies, such as cybersecurity, healthcare, and search and rescue.

If Project Maven has any long-term benefit, it may be that it forced Google to go on the record about how it will use AI. It’s up to consumers (and Google employees) to make sure the company adheres to these promises.
