New Research Warns of ‘Normal Accident’ From AI in Modern Warfare
Artificial intelligence provides vital solutions to some very complex problems but remains a looming threat as we consider its applications on the battlefield. New research details the impending risks and pins the fate of humanity on complex choices we will face in the near future.
A new paper from private intelligence and defense company ASRC Federal and the University of Maryland takes a deep dive into what could happen if we, as a society, choose to employ artificial intelligence in modern warfare. While much of the paper dwells on bleak and sobering scenarios that end in the eradication of humanity, it concludes that this technology will exist regardless, and that we can survive alongside it if we make the right choices. Nevertheless, the researchers touch on two important inevitabilities: the development of weaponized artificial intelligence and an impending “normal accident” (more commonly known as a system accident) related to this technology.
Normal accidents, such as the Three Mile Island accident cited in the paper, occur through the implementation of complex technologies we can’t fully understand, at least not yet. Artificial intelligence may fit that definition better than anything people have ever created. While we understand how AI works, we struggle to explain how it arrives at its conclusions, for much the same reason we struggle to explain how people arrive at theirs. With so many variables in the system to monitor, let alone analyze, the researchers raise an important concern about adding further layers of abstraction to our tools of war:
An extension of the human-on-the-loop approach is the human-initiated “fire and forget” approach to battlefield AI. Once the velocity and dimensionality of the battlespace increase beyond human comprehension, human involvement will be limited to choosing the temporal and physical bounds of behavior desired in an anticipated context. Depending on the immediacy or unpredictability of the threat, engaging the system manually at the onset of hostilities may be impossible.
While MIT has discovered a method for predicting some AI “errors” in advance and just initiated a 50-year AI accelerator program with the US Air Force, such preparation only mitigates the potential damage we can predict. Just as we failed to preempt the psychological toll of smartphones—a “normal accident” of a different variety—we will fail to predict future damage caused by artificial intelligence. That’s already happening.
Normal accidents do not have to originate from a weakness of human ethics, though it’s often difficult to tell the difference. Regardless of their circumstances, these accidents are a harsh byproduct of technological growth. As the paper states, abstaining from the development of potentially dangerous technologies will not prevent their spread through other governments and private citizens, malicious or otherwise. While we cannot stop the growth of AI and we cannot prevent the damage it will do, we can follow the paper’s proposal and do our best to mitigate the potential harm.
While the researchers suggest little beyond prohibition and regulation of AI, an ethics-based legal foundation can at least provide a framework for managing problems as they arise. We can also offset the cost by investing our time and resources in machine learning projects that help save lives. Because the paper does not address other uses of artificial intelligence, it assesses the potential risks of weaponization in isolation. The human deficiencies that create today’s risks may not look the same tomorrow. With the concurrent growth of AI, bio-implant technologies, and genetic modification, the people of tomorrow may be better equipped to handle the threats we can only imagine today.
We can’t safely bet on the unknowns, whether positive or negative, but we can prepare for the evidence-based scenarios we can predict. Although this new research paints an incomplete picture, it encourages us to use the tools currently at our disposal to mitigate the potential harms of weaponized AI. With thoughtful effort, and the willingness to try, we retain hope of weathering the future storm of inevitable change.