
AI, War, and Hanukkah

Technology has been a significant part of warfare since at least the ancient Greeks. Now, artificial intelligence is making its mark on modern war

Dov Greenbaum | 08:47, 27.12.19
Several municipalities in the U.S. and other countries have recently been subject to cyberattacks, with New Orleans being the most recent.

 

Cyberattacks on government infrastructure are one of the ways that artificial intelligence is making its mark on modern warfare. While the future role of AI in driving war is not yet clear, the disruptive technology is decidedly a game-changer, representing one of the biggest revolutions in military affairs since the atomic weapon.

Hydrogen bomb (illustration). Photo: Getty Images

 

Technology, in general, has been a significant part of warfare since at least the ancient Greeks. And AI, like the guerrilla warfare of the Maccabees in the Hanukkah story, can provide the agility needed to beat even otherwise larger and technologically superior armies, like those of the Hellenist Seleucid Empire, which employed their then-innovative phalanx-based tactics and terrifying war elephants.

 

Other paradigm-shifting armaments, however, were deemed far too destructive, even for war. And, like modern weapons of mass destruction (WMD) or the crossbow of medieval Europe, there are efforts to severely restrict, if not outright ban, the use of AI in war.

 

AI is not much younger than nuclear weapons. It has been around for more than half a century, since the mid-1950s, but it is only now taking off, with the convergence of big data, cheap processing power, and the near-infinite storage of our increasingly digital world. And, as at no other time in history, we see advanced civilian AI applications leading the development of military technology, and not the other way around. These and other factors have allowed AI to awaken from its most recent winter and fall into the arms of militaries around the world.

 

Nevertheless, for all those decades of development, current state-of-the-art AI capabilities remain far from the AI of science fiction, or even the breadth of our own cognition; general AI, the ability of a computer to achieve a broad range of human capabilities, is still far off. These and other limitations on the military use of AI were recently detailed by the RAND Corporation, a nonprofit global policy think tank.

 

But even at its current level of development, AI is a potent facilitator of war. However, unlike other restricted war technologies, such as chemical and biological weapons or even hollow-point bullets, AI lacks a coherent, consistent, and broadly accepted legal definition. Without knowing precisely what AI is, or what we want it to be, developing workable international standards for its use in war is difficult, and unlikely in the coming decades.

 

Any definition should also make a clear distinction between simple autonomous weapons and AI weapons. Some autonomous weapons, like the Phalanx Close-In Weapon System (CIWS), an autonomous last line of defense for many navies, act independently but do not necessarily employ AI. In other situations, an AI weapon may not act autonomously, but instead make tactical suggestions for a soldier to employ.

 

While nation-states fight the diplomatic fight over just what artificial intelligence is and how it can be used in war, both commercial and military efforts are concomitantly pushing the technology forward at breakneck speed.

 

Meanwhile, China and Russia, like the U.S., are vying to become the world superpower in AI, creating an arms race reminiscent of nuclear armament efforts during the Cold War, but worse. During the Cold War, the shared vulnerability of mutually assured destruction (MAD) meant that no side was willing to take the first aggressive action. But AI can erode this deterrence by making a first strike less risky, while also providing the defenses necessary to prevent the MAD counterstrike.

 

And it's not only the exact definition of AI that is troublesome. The use of AI in war can come in many different forms, making its regulation even more difficult. Military uses of AI range from relatively benign logistics, such as supply chain management for basic foodstuffs and the maintenance of vehicles and transportation, to bleeding-edge offensive weaponry.

 

Even those offensive weapons can employ AI in myriad ways: generating fake news as a propaganda tool, assessing and managing the deluge of data coming from intelligence, surveillance, and reconnaissance (ISR), powering autonomous swarming nanobots, driving advanced quick-response and difficult-to-predict warfighting machines, and enabling other feared lethal autonomous weapons systems (LAWS). Whatever their role in the military sphere, the aforementioned strong opposition to the use of AI stems from many concerns. Although there are no laws that expressly prohibit the use of LAWS, many, even in the military, find it morally reprehensible to employ robots to kill people autonomously. For example, many fear that with machines fighting machines, there will be no incentive to stop wars until the untenable Hollywood cliché of the machines eventually attacking humans comes to pass.

 

With AI's eventual central place in war a near certainty, both commercial and military innovators are mounting efforts to make sure that its use in killing people remains ethical. Microsoft, which incorporates six ethical principles into its AI development (fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability), is assessing whether those guidelines were violated by the use of facial recognition technology by the IDF through one of its associated ventures. The U.S. military just released a similar set of principles, and with an estimated potential AI budget of $4 billion for this coming year, it is likely that commercial companies will make sure their principles align with those of the Pentagon.

 

In addition to these principles, some aim to keep AI humane by keeping humans in the loop, for example, by making sure a person is involved in any kill decision. This may not necessarily be a good idea, however, as the loop becomes increasingly crowded when decisions run up the chain of command.

 

Moreover, there are times when we don't necessarily want a human in the loop. AI can provide the necessary autonomy to strike or counterstrike when central command and control is offline, or it can be relegated to repetitive, complicated, dangerous, or otherwise undesirable tasks rather than assigning them to humans. AI, unlike a human, can remain alert almost indefinitely.

 

AI can also help make war more humane by enabling more informed decision making, for example, by filtering and processing the incoming deluge of information, by developing better intelligence for minimizing civilian damage, or by better war gaming to prepare for unpredictable variables. AI can not only out-endure humans, it can also perform tasks faster and with more precision. AI also allows for scalability, for example, through autonomous swarms of nanobots that can take on much larger and more complex targets without damaging civilian infrastructure.

 

But faster may not always be better. War is, as Clausewitz put it, the "continuation of politics by other means," in that it is also a form of diplomacy. However, as we become reliant on the speedy response times of AI, we may find ourselves quickly being forced to choose conflict over forbearance.

 

Unlike their human counterparts, AI decision-making processes can also be exploited for unscrupulous purposes by friend and foe alike. AI is also brittle, in that it is heavily dependent on the reliability of its data, which may lack integrity, be biased, or be misleading.

 

It may also be difficult to appreciate why an AI made a particular decision, or even to predict with any degree of certainty what that decision might be. This lack of predictability and transparency can make AI more dangerous to its human handlers. The lack of transparency also means that we may not be able to determine whether an AI is relying on corrupted or compromised data, which compounds its brittleness.

 

In the final episodes of the hit HBO show Silicon Valley, an AI is developed that, in its drive toward optimization, threatens core aspects of our modern society. This could similarly become a concern as we incorporate more AI machines into warfare. If we are not careful, the optimization of AI war machines may become self-serving rather than serving the mores of society. Like the Hellenists of the Hanukkah story, who artificially forced Hellenistic philosophies on Judaism, we need to be extra vigilant in how we incorporate AI into the military, making sure that the AI serves our human and humane interests and not the other way around.

 

 

Dov Greenbaum, JD-PhD, is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, at the Interdisciplinary Center in Herzliya.