New Research Warns of ‘Normal Accident’ From AI in Modern Warfare

Artificial intelligence offers vital solutions to some very complex problems, but it looms as a threat once we consider its applications on the battlefield. New research details the risks ahead and argues that humanity's fate hinges on choices we will face in the near future.

A new paper from private intelligence and defense company ASRC Federal and the University of Maryland takes a deep dive into what could happen if we, as a society, choose to employ artificial intelligence in modern warfare. While much of the paper dwells on bleak and sobering scenarios that end in the eradication of humanity, it concludes that this technology will exist regardless, and that we can survive alongside it if we make the right choices. Along the way, the researchers touch on two important inevitabilities: the development of weaponized artificial intelligence and an impending "normal accident" (more commonly known as a system accident) related to this technology.

Normal accidents, such as the Three Mile Island accident cited in the paper, occur when we deploy complex technologies we can't fully understand (at least, not yet). Artificial intelligence may fit that definition better than anything people have ever created. While we understand how AI works, we struggle to explain how it arrives at its conclusions, much as we struggle to explain human reasoning. With so many variables in the system to monitor, let alone analyze, the researchers raise an important concern about creating further abstraction from our tools of war:

An extension of the human-on-the-loop approach is the human-initiated “fire and forget” approach to battlefield AI. Once the velocity and dimensionality of the battlespace increase beyond human comprehension, human involvement will be limited to choosing the temporal and physical bounds of behavior desired in an anticipated context. Depending on the immediacy or unpredictability of the threat, engaging the system manually at the onset of hostilities may be impossible.
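To make the quoted scenario concrete, here is a minimal sketch (in Python) of what "choosing the temporal and physical bounds of behavior" could look like: the human operator's entire contribution is an envelope set in advance, and the autonomous system simply checks itself against it. Every name and number below is hypothetical and purely illustrative; nothing here comes from the paper itself.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class EngagementBounds:
        """Temporal and physical limits a human operator sets in advance."""
        start: datetime  # earliest moment autonomous behavior is permitted
        end: datetime    # latest moment autonomous behavior is permitted
        area: tuple      # (center_lat, center_lon, radius_km) of the permitted zone

    def within_bounds(bounds: EngagementBounds, now: datetime,
                      lat: float, lon: float) -> bool:
        """Return True only while the system remains inside the human-chosen envelope."""
        if not (bounds.start <= now <= bounds.end):
            return False
        center_lat, center_lon, radius_km = bounds.area
        # Crude flat-earth distance approximation (~111 km per degree); a toy check.
        dist_km = ((lat - center_lat) ** 2 + (lon - center_lon) ** 2) ** 0.5 * 111.0
        return dist_km <= radius_km

    # Example: a six-hour window over a 25 km zone, chosen before hostilities begin.
    bounds = EngagementBounds(start=datetime(2019, 6, 1, 6, 0),
                              end=datetime(2019, 6, 1, 6, 0) + timedelta(hours=6),
                              area=(38.9, -77.0, 25.0))
    print(within_bounds(bounds, now=datetime(2019, 6, 1, 8, 30), lat=38.95, lon=-77.05))

The point of the sketch is how little remains for the human to decide: once the envelope is set, every subsequent judgment happens inside the machine, which is exactly the abstraction the researchers warn about.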

While MIT has discovered a method for predicting some AI "errors" in advance and just initiated a 50-year AI accelerator program with the US Air Force, such preparation only mitigates the damage we can predict. Just as we failed to preempt the psychological toll of smartphones (a "normal accident" of a different variety), we will fail to foresee some of the damage artificial intelligence will cause. That's already happening.

Normal accidents do not have to originate from a failure of human ethics, though it's often difficult to tell the difference. Regardless of their circumstances, these accidents arrive as a harsh byproduct of technological growth. As the paper states, abstaining from the development of potentially dangerous technologies will not prevent their spread through other governments and private citizens, malicious or otherwise. While we cannot stop the growth of AI or prevent all the damage it will do, we can follow the paper's proposal and do our best to mitigate the harm.

While the researchers suggest only the prohibition and regulation of AI, an ethics-based legal foundation can at least provide a framework for managing problems as they arise. We can also offset the cost by investing our time and resources in machine learning projects that help save lives. Because the paper does not address other uses of artificial intelligence, it assesses the potential risks of weaponization in isolation. The human limitations that create today's risks may not look the same tomorrow. With the concurrent market growth of AI, bio-implant technologies, and genetic modification, the people of tomorrow may be better equipped to handle the threats we can imagine today.

We can’t safely bet on the unknowns, whether positive or negative, but we can prepare for the evidence-based scenarios we can predict. Although this new research paints an incomplete picture, it encourages us to use the tools currently at our disposal to mitigate the potential harms of weaponized AI. With thoughtful effort, and the willingness to try, we retain hope of weathering the future storm of inevitable change.

Top image credit: Storyblocks
