
Measuring destruction: Tracking war damage with AI

Researchers hope that combining machine learning with satellite imagery can help generate objective data on war damage in urban areas.

Empires have washed over the streets of Aleppo for three millennia, leaving behind the flotsam of Egyptian fortifications, Byzantine churches, Arabic markets and houses in the Ottoman style. Syria’s civil war, too, has scoured its own story on the city, leaving structures ancient and modern pockmarked and pummelled by bullets and barrel bombs.

Measuring this destruction reveals more than the history of a given conflict. By analysing high-resolution satellite imagery for signs of war damage, international aid agencies can identify which neighbourhoods are most affected by the conflict, and where best to deploy limited resources. Currently, however, this analysis requires manual tagging of photo sequences, a laborious and costly undertaking.

That may soon change. In a recent study, a combined team from Universitat Autònoma de Barcelona (UAB), the Institute of Economic Analysis at the Spanish National Research Council and Chapman University in California successfully automated this process for the analysis of heavy weaponry impacts – with profound implications for the surveillance of conflict zones for humanitarian ends.

Objectively measuring the destruction of war-torn cities such as Aleppo can help direct aid more effectively, researchers say. (Photo by Louai Beshara/AFP via Getty Images)

The team used a convolutional neural network (CNN) to automate the photo analysis, co-author André Groeger explains. Trained on sequences of satellite images from Aleppo and five other Syrian cities between 2011 and 2017, as well as human-annotated data on destruction acquired from the United Nations Satellite Centre (UNOSAT), the model successfully traced the progression of war damage over the course of the civil war with a level of precision closely rivalling that of manual approaches.
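In broad strokes, the approach amounts to a binary classifier over pairs of satellite image patches of the same location taken at different dates. The sketch below is purely illustrative – the architecture, patch size and channel layout are assumptions for demonstration, not the specifics of the researchers' model:

```python
# Illustrative sketch only: a small CNN that classifies stacked
# before/after satellite image patches as "intact" or "damaged".
# All hyperparameters here are assumptions, not those of the study.
import torch
import torch.nn as nn

class DamageCNN(nn.Module):
    def __init__(self, patch_size: int = 64):
        super().__init__()
        # 6 input channels: RGB of a pre-conflict patch stacked with
        # RGB of the same location at a later date.
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 64 * (patch_size // 4) ** 2  # two 2x pools shrink each side by 4
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 128), nn.ReLU(),
            nn.Linear(128, 2),  # logits: [intact, damaged]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A batch of 4 before/after patch pairs (random data stands in for imagery).
model = DamageCNN()
logits = model(torch.randn(4, 6, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

Run over a grid of patches covering a city, per-patch predictions like these can be aggregated into a damage map of the kind UNOSAT currently produces by hand.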

Deployed at scale, the model could “be quicker than human annotators, leading to cost savings and timing advantages,” explains Groeger, and provide a more objective analysis of destruction than human annotators and conflicting reports from the ground. “If you have an objective measure of a war’s intensity, one could eventually correlate this with media reporting,” he adds.

Monitoring war damage with AI

Dr Paige Arthur, deputy director of the Center on International Cooperation at NYU, welcomes the team’s innovation. Manual tagging remains highly effective, says Arthur, but is limited by capacity constraints. A team of researchers can only analyse so many photos at a time, and their choice of locations is itself informed by information of varying quality about where the conflict is most intense.

Outside of that zone of interest, and absent reports from trusted sources on the ground, “you really don’t know where the destruction is, or where the human rights abuses have taken place,” says Arthur. Using AI, with its ability to digest more imagery at greater speed, to track war damage could open the possibility of passively monitoring larger swathes of landscape, picking up subtle changes that indicate recent destruction or war crimes.

Arthur hopes that the model will be expanded beyond the Syrian context. There, she says, “much of the destruction is urban, given its status before the war as a middle-income country,” with heavy artillery and barrel bombs playing an outsize role in the war compared with other conflicts around the world. The model’s true potential, says Arthur, would be demonstrated in its capacity to automatically detect destruction on a smaller scale and in rural contexts, such as those found in the Central African Republic or Afghanistan.

A map of destroyed buildings in Aleppo on 18 September 2016, derived from manually tagged satellite imagery. AI-aided approaches to the same problem could speed up such analyses and reduce associated costs. (Image reproduced from ‘Monitoring war destruction from space using machine learning’ by Mueller et al, PNAS, Jun 2021, 118 (23))

This is easier said than done, says Groeger. While he acknowledges that the next logical step for the model is to apply it to different conflict zones, this would require adapting the model to a host of new factors, including differences in architecture, geographies and even the impacts of other types of weaponry. “That makes this task much more challenging,” says Groeger – one that requires additional research.

Then there are the potential ethical implications of using such a tool. Both Groeger and Arthur acknowledge that, in the wrong hands, it could be used by combatants themselves to measure their own destructive impact against enemy targets, military and civilian. In the meantime, however, the constraints of commercial access to high-resolution satellite imagery, combined with the technical knowledge required to fine-tune such a model, mean that it is unlikely to be used by anyone outside the largest international aid agencies.

One – UNOSAT – has already expressed an interest in partnering with Groeger and his team. “They’re very interested in trying to apply our methodology for their purposes,” says the professor, who hopes that this new partnership will help deliver the tool’s manifest advantages to those working to protect the people most in peril in conflict zones.

Greg Noone

Features writer

Greg Noone is a feature writer for Tech Monitor.