Google’s AI Manifesto: Accountability, Privacy, and No Killer Robots
Google is one of the leading technology companies in artificial intelligence, which landed it a juicy government contract last year to work on “Project Maven.” The goal of the project was to process and catalog drone imagery, and Google’s rank-and-file workers were none too pleased. After a series of protests, Google recently announced it would end work on Maven and release guidelines for its use of artificial intelligence. Now, that document is available. It lays out seven core principles for Google’s AI work and names several applications that are off-limits.
We are still in the very early days of useful artificial intelligence, so Google’s new guidelines are light on specifics. The seven objectives hold that AI should be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles.
So, what does all that mean? It sounds rather like a fancy way to say “don’t be evil.” Google seems to recognize the massive potential of AI technology, so it wants to start building systems with this framework in mind. The company pledges to work only toward socially beneficial AI that is as close to bias-free as possible. We all know what bias can lead to in a neural network: psychopaths. Google also wants the systems it builds to incorporate human feedback and include mechanisms that let people adjust them over time, and it intends to give you control over your AI-processed data with appropriate privacy safeguards.
Google intends to make much of its AI research publicly available, and it plans to offer technologies and services in accordance with the above principles. The catch is that Google can only evaluate the likely uses of its work; someone could always build on it to create something that violates one or more of these tenets.
Google says it will not pursue any AI research that involves weapons or any other system that exists primarily to harm people. In gray areas, Google plans to develop a technology only where the possible benefits far outweigh the risks. So, Google won’t build smart weapons for the government, but the company says it will continue working with the US government and military in other areas, such as cybersecurity, healthcare, and search and rescue.
If Project Maven has any long-term benefit, it may be that it forced Google to go on the record about how it will use AI. It’s up to consumers (and Google employees) to make sure the company adheres to these promises.