Georgia Tech researcher finds military can’t rely on AI for strategy or judgment

Newswise – Using artificial intelligence (AI) for warfare has been the promise of science fiction and politicians for years, but new research from the Georgia Institute of Technology argues that only some tasks can be automated and underscores the value of human judgment.

“All of the hard problems in AI really are judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war,” said Jon Lindsay, associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. “You need human sense-making and moral, ethical, and intellectual decision-making in an incredibly confusing, tense, and frightening situation.”

AI decision-making relies on four key elements: data about a situation, interpretation of that data (or prediction), determination of the best way to act in line with goals and values (or judgment), and action. Advances in machine learning have made predictions easier, which makes data and judgment even more valuable. Although AI can automate everything from commerce to transit, judgment is where humans must step in, Lindsay and Professor Avi Goldfarb of the University of Toronto write in the article “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” published in International Security.

Many policymakers assume that human soldiers could be replaced by automated systems, ideally making the military less dependent on human labor and more effective on the battlefield. This is called the substitution theory of AI, but Lindsay and Goldfarb argue that AI should not be seen as a substitute, but rather as a complement to existing human strategy.

“Machines are good for prediction, but they depend on data and judgment, and the hardest problems in war are information and strategy,” he said. “The conditions that make AI work in commerce are the most difficult conditions to meet in a military environment because of its unpredictability.”

One example highlighted by Lindsay and Goldfarb is the mining company Rio Tinto, which uses self-driving trucks to transport materials, reducing costs and risks to human drivers. The data there are abundant, predictable, and unbiased: traffic patterns and maps that require little human intervention unless there are road closures or obstacles.

Warfare, however, generally lacks abundant, unbiased data, and judgments about goals and values are inherently contentious. That doesn’t make military AI impossible, but the researchers say it would be best employed in bureaucratically stabilized environments on a task-by-task basis.

“All the excitement and the fear are about killer robots and lethal vehicles, but the worst case for military AI in practice is going to be the classically militaristic problems where you really depend on creativity and interpretation,” Lindsay said. “What we should be looking at instead are personnel systems, administration, logistics, and repairs.”

According to the researchers, the use of AI also has consequences for militaries and their adversaries. If humans are central to deciding when to use AI in warfare, military leadership structures and hierarchies could change depending on who is responsible for designing and maintaining data systems and making policy decisions. It also means adversaries will seek to compromise both data and judgment, since both would greatly affect the trajectory of a war. Competing against an AI may push opponents to manipulate or disrupt data to make sound judgment even harder. In effect, human intervention will become even more necessary.

Yet this is only the beginning of the argument, and of the innovation still to come.

“If AI is automating prediction, that’s making judgment and data really important,” Lindsay said. “We have already automated many military actions with mechanized forces and precision weapons, then we automated data collection with intelligence satellites and sensors, and now we’re automating prediction with AI. So when are we going to automate judgment, or are there components of judgment that cannot be automated?”

Until then, however, tactical and strategic decision-making by humans continues to be the most important aspect of warfare.

CITATION: Avi Goldfarb, Jon R. Lindsay; Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War. International Security 2022; 46 (3): 7–50. doi: https://doi.org/10.1162/isec_a_00425

About Georgia Institute of Technology

The Georgia Institute of Technology, or Georgia Tech, is a top 10 public research university developing leaders who advance technology and improve the human condition. The Institute offers degrees in business, computing, design, engineering, liberal arts, and sciences. Its nearly 40,000 students, representing 50 states and 149 countries, study at the main campus in Atlanta, at campuses in France and China, and through distance and online learning. As a leading technological university, Georgia Tech is an engine of economic development for Georgia, the Southeast, and the nation, conducting more than $1 billion in research annually for government, industry, and society.
