Palantir UK boss says it’s up to militaries to decide how AI targeting is used in war
Palantir has responded to concerns about its AI platforms being used in military operations, saying that how the technology is applied rests with the armed forces that employ it. Louis Mosley, the company’s head for the UK and Europe, made the point in a recent BBC interview, underscoring that responsibility for how AI outputs are used lies squarely with the military organisations deploying them.
AI’s Role in Targeting Decisions
The Maven Smart System, which grew out of a Pentagon programme launched in 2017, is designed to streamline military targeting by fusing diverse data sources, such as satellite and drone imagery. The tool recommends strike options and suggests force levels based on available resources, such as aircraft and personnel. Critics counter that the rapid decision-making AI enables leaves little room for thorough target verification, raising questions about the accuracy of its outcomes.
“There’s always a human in the loop, so there is always a human that makes the ultimate decision. That’s the current set-up.”
When asked about the risk of Maven suggesting incorrect targets, including civilians, Mosley said the platform functions as a decision-support tool rather than an automated system: it aids military personnel by consolidating information that would otherwise have to be analysed manually. He acknowledged, however, that individual militaries must determine how far AI outputs are trusted in high-pressure scenarios.
Scrutiny and Policy Responsibilities
The Pentagon recently discontinued its use of Anthropic’s Claude AI, which had powered Maven, after Anthropic resisted the model’s integration into autonomous weapons and surveillance systems. Palantir maintains that alternative technologies can fill the same role. Since the start of the Iran conflict in February, Maven has reportedly been instrumental in planning more than 11,000 strikes, many against targets inside Iran.
“You could think of it as a support tool. It’s allowing them to synthesise vast amounts of information that previously they would have had to do manually one by one.”
Some analysts warn that the reliance on AI for mission planning may compromise critical judgment. Prof. Elke Schwarz of Queen Mary University of London noted that the emphasis on speed and scale could shorten the time available to confirm whether targets are civilian or combatant. “If there’s a risk of killing and you co-opt a lot of your critical thinking to software that will take care of these things for you, then you just become reliant on the software,” she stated.
Recent events, such as the reported strike on a school in Minab, have intensified scrutiny of AI’s role in warfare. Iranian officials claimed the attack, carried out on the first day of the conflict, killed 168 people, including 110 children. Meanwhile, congressional Democrats have demanded stricter oversight of AI systems like Maven, calling for clear guidelines to govern their deployment.
“AI tools aren’t 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them,” said Rep. Sara Jacobs of the House Armed Services Committee.
Adm. Brad Cooper, the US military commander in the Middle East, praised AI’s ability to process data swiftly, enabling faster and better-informed decisions. Yet the debate over AI’s influence in warfare continues, with experts stressing the need for policies that weigh that speed against the risks.