New Research Warns of ‘Normal Accident’ From AI in Modern Warfare

Artificial intelligence provides vital solutions to some very complex problems, but it looms as a threat when we consider its applications on the battlefield. New research details the impending risks and pins the fate of humanity on choices we will face in the near future.

A new paper from private intelligence and defense company ASRC Federal and the University of Maryland takes a deep dive into what could happen if we, as a society, choose to employ artificial intelligence in modern warfare. While much of the paper focuses on bleak and sobering scenarios that end in the eradication of humanity, it concludes that this technology will exist regardless, and that we can survive alongside it if we make the right choices. The researchers also touch on two important inevitabilities: the development of weaponized artificial intelligence and an impending “normal accident” (more commonly known as a system accident) involving this technology.

Normal accidents, such as the Three Mile Island accident cited in the paper, arise from the implementation of complex technologies we can’t fully understand, at least not yet. Artificial intelligence may fit that definition better than anything people have ever created. We understand how AI works, but we struggle to explain how it arrives at its conclusions, for much the same reason we struggle to explain how people arrive at theirs. With so many variables in the system to monitor, let alone analyze, the researchers raise an important concern about adding further abstraction to our tools of war:

An extension of the human-on-the-loop approach is the human-initiated “fire and forget” approach to battlefield AI. Once the velocity and dimensionality of the battlespace increase beyond human comprehension, human involvement will be limited to choosing the temporal and physical bounds of behavior desired in an anticipated context. Depending on the immediacy or unpredictability of the threat, engaging the system manually at the onset of hostilities may be impossible.

While MIT has discovered a method for predicting some AI “errors” in advance and just initiated a 50-year AI accelerator program with the US Air Force, such preparation only mitigates the potential damage we can predict. Just as we failed to preempt the psychological toll of smartphones—a “normal accident” of a different variety—we will fail to predict future damage caused by artificial intelligence. That’s already happening.

Normal accidents do not have to originate from a failure of human ethics, though it’s often difficult to tell the difference. Regardless of their circumstances, these accidents are a harsh byproduct of technological growth. As the paper states, abstaining from the development of potentially dangerous technologies will not prevent their spread through other governments and private citizens, malicious or otherwise. While we cannot stop the growth of AI, and we cannot prevent the damage it will do, we can follow the paper’s proposal and do our best to mitigate the harm.

While the researchers suggest only prohibition and regulation of AI, an ethics-based legal foundation can at least provide a framework for managing problems as they arise. We can also offset the cost by investing our time and resources in machine learning projects that help save lives. Because the paper does not address other uses of artificial intelligence, it assesses the potential risks of weaponization in isolation. The human deficiencies that create today’s risks may not look the same tomorrow. With the concurrent growth of AI, bio-implant technologies, and genetic modification, the people of tomorrow may be better equipped to handle the threats we can imagine today.

We can’t safely bet on the unknowns, whether positive or negative, but we can prepare for the evidence-based scenarios we can predict. Although this new research paints an incomplete picture, it encourages us to use the tools currently at our disposal to mitigate the potential harms of weaponized AI. With thoughtful effort, and the willingness to try, we retain hope of weathering the future storm of inevitable change.

Top image credit: Storyblocks
