Google’s AI Ethics Council Is Collapsing After a Week

Last week, Google announced the formation of an Advanced Technology External Advisory Council (ATEAC), intended to consider the complex challenges that can arise in developing AI and to provide diverse perspectives on those issues. Google isn't the first company to take this step; Microsoft has an AI advisory board of its own.

But all is not well with Google's AI advisory council. Less than a week after the announcement, it's falling apart. On Saturday, behavioral economist and privacy researcher Alessandro Acquisti tweeted:

I'd like to share that I've declined the invitation to the ATEAC council. While I'm devoted to research grappling with key ethical issues of fairness, rights & inclusion in AI, I don't believe this is the right forum for me to engage in this important work. I thank (1/2)

— alessandro acquisti (@ssnstudy) March 30, 2019

On Monday, Google employees began advocating for the removal of another member: Kay Cole James. As president of the Heritage Foundation, James has made a number of statements Google employees view as anti-trans, anti-LGBTQ, and anti-immigrant. Google has been hammered by employee protests over the past year, both over its secret effort to resume service in China and over its forced arbitration policies. Google has since addressed the second issue, at least: it no longer requires employees to submit to forced arbitration. The status of its effort to build a new, censorship-friendly search engine remains unclear.

According to council member Joanna J. Bryson, Google pushed to include the president of the Heritage Foundation in an attempt to represent a broad range of viewpoints on the development of AI; the company told her it needs that diversity to be convincing to society at large, including the GOP.

Yes, I'm worried about that too, but overall I think what I know is more useful than my level of fame is validating. I know that I have pushed them before on some of their associations, and they say they need diversity in order to be convincing to society broadly, e.g. the GOP.

— Joanna J Bryson (@j2bryson) March 26, 2019

Activists concerned about the ways invisible bias can shape the development of algorithms have a very real point. Algorithmic bias is a genuine problem that a number of companies are struggling with. Serious questions have been raised about the accuracy of software used to predict recidivism rates. Amazon had to scrap a recruiting tool because it was biased against women. Tests have shown that software packages used in self-driving vehicles correctly identify white pedestrians significantly more often than pedestrians with darker skin tones. Google had to remove gendered pronouns from Gmail's Smart Compose feature because the software could not reliably predict the gender of the person being addressed. Four years ago, Google had to scramble when Google Photos began labeling images of black people as gorillas. And while Amazon's facial recognition technology proved not to work well on non-white people, the company had no problem trying to sell it to ICE anyway.
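
As a rough illustration of how this kind of bias is typically surfaced, consider the pedestrian-detection example. The sketch below uses invented data and invented group labels, not any vendor's actual test set or pipeline; audits of this sort often start by simply breaking a model's miss rate out by demographic group:

```python
# Minimal bias-audit sketch: compare a detector's false-negative rate
# across demographic groups. The records below are invented for
# illustration; real audits use large labeled test sets.
from collections import defaultdict

# Each record: (group, detector_fired, pedestrian_actually_present)
records = [
    ("lighter_skin", True, True), ("lighter_skin", True, True),
    ("lighter_skin", True, True), ("lighter_skin", False, True),
    ("darker_skin", True, True), ("darker_skin", False, True),
    ("darker_skin", False, True), ("darker_skin", True, True),
]

missed = defaultdict(int)
total = defaultdict(int)
for group, fired, present in records:
    if present:                      # only score cases with a real pedestrian
        total[group] += 1
        if not fired:                # the detector failed to see them
            missed[group] += 1

for group in sorted(total):
    print(f"{group}: false-negative rate = {missed[group] / total[group]:.0%}")
```

A persistent gap between those per-group numbers is exactly the kind of result the pedestrian-detection studies reported.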

These are issues that should concern everyone. Accuracy does not have a political affiliation. Turning functions and evaluations over to an artificial intelligence is only worthwhile if the end result is better or fairer outcomes for everyone, including the people being evaluated. It is not clear how much power anyone on Google's AI board will have, or what weight will be attached to their findings. But people aren't crazy to be asking serious questions about what kind of testing is being done to ensure that algorithmic bias is dealt with.

The recent Lion Air Flight 610 and Ethiopian Airlines Flight 302 crashes can themselves be considered in terms of algorithmic bias. Last week, the New York Times reported that extensive crash simulations confirmed the Lion Air pilots had less than 40 seconds last November to avoid crashing. Avoiding disaster would only have been possible had they been trained on precisely which actions to take and followed those steps to the letter. Instead, the pilots repeatedly attempted to override the MCAS (Maneuvering Characteristics Augmentation System) in an incorrect fashion.

In the Lion Air crash, pilots used the thumb switch more than two dozen times to try to override the system. The system kept engaging nonetheless, most likely because of bad readings from a sensor, until the plane crashed into the Java Sea, killing all 189 people on board.

The algorithm, in this case, was biased toward its own sensor reading. The system was not designed to consider whether repeated pilot attempts to cancel MCAS might themselves be evidence of a failure or malfunction within MCAS.
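
To make that failure mode concrete, here is a deliberately simplified sketch. The logic is invented for illustration and bears no relation to Boeing's actual flight software; it only contrasts a controller that trusts a single sensor unconditionally with one that treats repeated pilot overrides as evidence the sensor may be wrong:

```python
# Hypothetical sketch of the design flaw described above. This is not
# Boeing's code; it illustrates treating repeated human overrides as a
# signal that the sensor, not the crew, may be at fault.

def naive_controller(sensor_reads_stall: bool) -> bool:
    """Engages nose-down trim whenever its single sensor reads a stall."""
    return sensor_reads_stall

def override_aware_controller(sensor_reads_stall: bool,
                              pilot_overrides: int,
                              override_limit: int = 3) -> bool:
    """Disengages for good once pilots have countermanded it repeatedly."""
    if pilot_overrides >= override_limit:
        return False  # distrust the sensor; yield control to the crew
    return sensor_reads_stall

# A faulty sensor reports a stall on every cycle while the pilots
# keep trying to override the system.
for n in range(5):
    print(f"override #{n}: naive={naive_controller(True)}, "
          f"override-aware={override_aware_controller(True, n)}")
```

Real avionics are vastly more complex, but the design question, namely when an automated system should begin to doubt its own inputs, is exactly the one raised above.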

When human beings and AI come into contact, there's potential for unexpected, undesired, and unintended outcomes. No one at Boeing set out to build a flight control system that could cause a plane to crash. No computer vision developer wants to build an identification system that fails to recognize pedestrians based on their skin color. But these problems are happening, and the only way to stop them is to acknowledge that they exist.

A quote in Wired sums the situation up extremely well. Yaël Eisenstat, who worked at Facebook as its head of Global Elections Integrity Ops, writes:

I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

Again: these are issues that should concern everyone. The push to embed algorithms and AI at every level of decision-making, from self-driving cars to loan applications, is already underway. We cannot shirk the responsibility of ensuring such systems are robust, accurate, and fair; that they measure what they claim to measure; and that they do not contain bugs or flaws that mistakenly harm some groups or people through no fault of their own, regardless of who those groups happen to be.
