Palantir UK boss Louis Mosley has told the BBC that the responsibility for using AI targeting systems in warfare lies with the military, not the company, according to an exclusive interview. This comes as concerns grow over the use of Palantir’s AI-powered defence platform, Maven Smart System, in US attacks on Iran.

AI in Military Operations

The Maven Smart System, launched by the Pentagon in 2017, is designed to speed up military targeting decisions by analyzing vast amounts of data, including intelligence, satellite, and drone images. The system provides targeting recommendations and can suggest the level of force to use based on the availability of personnel and military hardware, such as aircraft.

Experts have raised concerns that the use of such tools in warfare could lead to incorrect targets being hit, including civilians. In February, the Pentagon announced that it would phase out Anthropic’s Claude AI system after the company refused to allow its use in autonomous weapons and surveillance. Palantir has stated that alternatives can replace the system.

Since the war with Iran began in February, the US has reportedly used Maven to plan strikes across the country. When asked about the risk of Maven suggesting incorrect targets, Mosley said the platform is only meant to serve as a guide to speed up the decision-making process for military personnel and should not be seen as an automated targeting system.

Human Oversight in AI Decisions

Mosley emphasized that there is always a human in the loop, ensuring that the final decision rests with military personnel. He stated, ‘There’s always a human in the loop, so there is always a human that makes the ultimate decision. That’s the current set-up.’

However, when challenged on the risk that time-pressured commanders might order their officers to simply rubber-stamp Maven’s output, Mosley deferred to individual militaries. ‘That’s really a question for our military customers. They’re the ones that decide the policy framework that determines who gets to make what decision,’ he said.

Adm Brad Cooper, head of the US military in the Middle East, has praised AI systems for helping officers ‘sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react.’

But some experts warn that the prioritization of speed and scale in mission planning creates significant risks. Prof Elke Schwarz of Queen Mary University of London said, ‘This prioritisation of speed and scale and the use of force then leaves very little time for meaningful verification of targets to make sure that they don’t include civilian targets accidentally.’

Scrutiny of AI in Warfare

In recent weeks, Pentagon officials have faced questions about whether AI tools such as Maven were used to identify targets in the deadly strike on a school in the Iranian town of Minab. Iranian officials said the strike killed 168 people, including around 110 children, on the opening day of the war.

In Congress, a number of senior Democrats have called for increased scrutiny of AI platforms like Maven. Rep Sara Jacobs, a member of the House Armed Services Committee, called for clearly enforced rules and regulations about how and when AI systems are used. ‘AI tools aren’t 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them,’ she told NBC News last month.

‘We have a responsibility to enforce strict guardrails on the military’s use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions.’

Mosley pushed back against suggestions that the speed of his company’s platform is rushing decision making at the Pentagon and potentially creating dangerous situations. He argued that the speed at which commanders are now taking action is a ‘consequence of the increased efficiency’ that Maven has enabled.

Citing ‘operational security’, the Pentagon declined to comment when approached by the BBC on how AI systems like Maven will be used in future or who would be held responsible should something go wrong. But US officials appear to be moving forward with plans to integrate Maven further into the military’s systems.

Last week, the Reuters news agency reported that the Pentagon had designated Maven as ‘an official program of record’ — establishing it as a technology to be integrated long-term across the US military. In a letter obtained by Reuters, deputy Defence Secretary Steve Feinberg said the platform would provide commanders ‘with the latest tools necessary to detect, deter, and dominate our adversaries in all domains.’