How much regulation does AI in the defence industry need – or can it tolerate? In his keynote at the Enforce Tac Conference on 23 February 2026, AI expert Prof Dr Patrick Glauner of skyrocket.ai GmbH will speak about the real-world use of AI in military contexts.
Professor Glauner, you criticise the debate on AI and defence as often overly alarmist. Where do you see the greatest discrepancy between public perception and operational reality?
What I hear time and again – including from generals – is that political discussions immediately jump to the term “killer robots” as soon as AI in a military context is mentioned. This is a gross oversimplification. AI in the armed forces means far more than autonomous weapons systems: logistics, personnel planning, situational awareness, satellite imagery analysis – all of these already rely on AI today.
Of course there are fears, fuelled not least by films like Terminator. But this narrow focus distorts reality. Autonomous or semi-autonomous weapons systems are only a small part of a much broader technological landscape.
The current security environment does not allow autonomous weapons to be banned outright or restricted to the point of operational irrelevance. Such overregulation would weaken the defence capability of Western democracies – while other actors would simply ignore these rules.
| The new Enforce Tac Conference |
|---|
| With the topic “AI in the Defence Industry: Technology, Strategies, Opportunities and Regulatory Challenges”, Prof Patrick Glauner is one of four keynote speakers at the new Enforce Tac Conference on 23 February 2026 in Nuremberg. Conceived as a discussion platform for the use of electronics in security and defence technology, the conference will, for the first time, complement the Enforce Tac trade fair in Nuremberg. It will address key questions such as: |
| The Enforce Tac Conference provides an ideal platform for exchange between the defence industry, its extensive supplier base, the electronics sector and research institutions on the use of electronics to address current and future challenges in defence and security applications – within the established, secure framework of Enforce Tac as an international trade fair for internal and external security. |
You warn that overregulation could leave us at a disadvantage. Where do you see the greatest risk?
In international forums, civil society actors without deep technical understanding often dominate the debate. They quickly call for strict limitations or outright bans. The problem is that these rules ultimately apply only to us. Other actors – terrorists, for example – will not abide by them. The result is one-sided self-restraint by the West.
You say AI has long since become a necessity in the military. What has driven this – technology, operations or security policy?
Historically, wars have been decided by technological superiority. The transition from horse to tank was one such turning point. Today, it is the transition to AI.
If we look at current conflicts – such as between Azerbaijan and Armenia, or the war in Ukraine – we see how dramatically warfare has changed in a very short time. Drones, automated analysis, rapid decision cycles: none of this is conceivable without AI.
One may regret this, but wars do not simply disappear. And if other actors – state and non-state alike – use AI, we must be able to operate on an equal footing. This is about deterrence and defence capability.
Where are human reaction times clearly reaching their limits?
Above all wherever vast amounts of data must be processed within seconds – for example in sensor fusion or the Recognised Air Picture (RAP). Humans simply cannot operate at that speed.
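To make the scale of the problem tangible: sensor fusion means combining many uncertain measurements into one consistent picture, and operational systems do this for thousands of reports per second. The minimal Python sketch below illustrates only the basic principle of weighting each report by its precision; the sensor names and figures are invented for illustration and are not taken from the interview.

```python
# Minimal illustration (not an operational system): fusing independent position
# estimates from several sensors into one track, weighting each report by how
# precise it is (inverse-variance weighting). All names and figures are made up.

from dataclasses import dataclass

@dataclass
class SensorReport:
    name: str
    position_km: float   # estimated position along one axis, in kilometres
    variance: float      # measurement uncertainty (smaller = more precise)

def fuse(reports: list[SensorReport]) -> tuple[float, float]:
    """Combine several reports into one estimate and its fused variance."""
    weights = [1.0 / r.variance for r in reports]
    fused_position = sum(w * r.position_km for w, r in zip(weights, reports)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_position, fused_variance

if __name__ == "__main__":
    reports = [
        SensorReport("radar_a", position_km=102.0, variance=4.0),
        SensorReport("radar_b", position_km=98.5, variance=1.0),
        SensorReport("ir_sensor", position_km=100.2, variance=2.5),
    ]
    pos, var = fuse(reports)
    print(f"fused position: {pos:.2f} km (variance {var:.2f})")
```

A human operator could perform this calculation for a single track; the point made above is that doing it continuously, for an entire air picture, is only feasible by machine.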
Modern battlefields are highly dynamic. Decisions have to be taken in fractions of a second. If every action required manual approval, you would be structurally inferior to the opponent.
The assumption that humans are always faster or better decision-makers than technical systems is no longer tenable.
The CEO of Arx Robotics recently described how regulatory requirements – such as mandatory kill switches – made drones vulnerable in Ukraine. Does this support your criticism of excessive regulation?
Absolutely. In Ukraine, it is about daily survival. A kill switch may be well-intentioned from a regulatory perspective, but in operational reality it can have the opposite effect. If the adversary can exploit it, the system becomes useless.
This is a textbook example of well-meaning regulation ignoring technical reality. Security is not created by symbolic measures, but by robust systems.
AI is often required to be monitored and approved by a human. Why is this problematic in military operations?
The key term is “human in the loop”. In theory, it sounds reasonable. In practice, it often prevents autonomy altogether. If a system depends on continuous external approval, it becomes vulnerable – to jamming, cyberattacks or communication disruption. In contested environments, this does not work.
A permanent human-in-the-loop approach would render systems effectively unusable in highly dynamic combat situations.
Of course, humans must define targets and carry out legal assessments. The Geneva Conventions are clear on this. But once a target is defined, the system must be able to act autonomously. Anything else causes delays and ultimately increases risk – including for civilians.
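As a purely illustrative sketch of this architectural argument (the classes, names and scenario below are invented, not drawn from any fielded system): a design that needs a live approval for every single action stalls as soon as the link is jammed, whereas a design in which a human defines the target and the engagement envelope in advance can keep operating within that envelope even if communications drop.

```python
# Simplified illustration of the "human in the loop" argument above; everything
# here is a hypothetical sketch, not a description of any real system.

from dataclasses import dataclass

@dataclass
class Mission:
    target_id: str        # target defined and legally assessed by a human beforehand
    max_range_km: float   # engagement envelope approved in advance

class CommsLink:
    def __init__(self, jammed: bool):
        self.jammed = jammed

    def request_approval(self, action: str) -> bool:
        # A continuous human-in-the-loop design needs this call to succeed
        # for every single action.
        if self.jammed:
            raise ConnectionError("link jammed – no approval possible")
        return True

def continuous_approval_system(link: CommsLink, action: str) -> str:
    """Every action waits for a fresh human approval over the link."""
    try:
        link.request_approval(action)
        return f"executed: {action}"
    except ConnectionError:
        return f"stalled: {action} (waiting for approval that cannot arrive)"

def pre_authorised_system(mission: Mission, target_id: str, range_km: float) -> str:
    """The human decision was made up front; the system only checks the envelope."""
    if target_id == mission.target_id and range_km <= mission.max_range_km:
        return f"executed: engage {target_id} at {range_km} km"
    return "refused: outside the approved envelope"

if __name__ == "__main__":
    jammed_link = CommsLink(jammed=True)
    mission = Mission(target_id="T-42", max_range_km=30.0)

    print(continuous_approval_system(jammed_link, "engage T-42"))
    print(pre_authorised_system(mission, "T-42", range_km=12.0))
    print(pre_authorised_system(mission, "T-99", range_km=12.0))
```

The second design still refuses anything outside what the human approved, which is the sense in which autonomy and accountability coexist in the argument above.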
You stress that autonomy does not mean lack of responsibility. How is responsibility ensured?
Responsibility is not created through constant human intervention, but through clear structures: precise target definition, verifiable decision logic, transparent systems and clear accountability.
If these are in place, AI can even reduce collateral damage. It acts without emotion, without panic, without any desire for revenge, and can select targets more precisely than a human. The use of autonomous systems does not absolve the state of responsibility. What matters is accountability for state action.
What role does AI play in electronic warfare?
A decisive one. When communications are disrupted or satellite links fail, systems must continue operating independently. A system that relies on constant feedback is worthless under such conditions.
In electronic warfare scenarios, autonomy is not optional – it is a prerequisite for operational capability.
AI systems evolve continuously. What does this mean for certification and operation?
AI develops over its entire life cycle. This challenges traditional certification models, which are designed for static systems. We need new concepts for testing, operation and maintenance – comparable to those used for safety-critical software in aviation.
This is manageable, but it requires expertise in authorities, armed forces and industry.
You argue that international humanitarian law is sufficient. Where, then, is the real challenge?
In implementation. The Geneva Conventions provide clear rules. The problem is not the law, but the lack of technical and organisational capability to apply it.
It is about training, system understanding and clear processes. And about treating AI not as a black box, but as a tool that must be mastered.
How well prepared is Germany?
There has been progress, but also clear shortcomings. The Ministry of Defence is developing new strategies, but I still see considerable reluctance in Germany when it comes to concrete implementation. Other countries are moving faster and more decisively.
You call for greater technical competence. What do you mean by that?
Many fears are rooted in ignorance. Those who understand how AI works assess it differently. We need more training, more academic chairs, more research – also in defence-related fields. And we must modernise engineering degree programmes, otherwise we will lose the next generation.
Which assumption about AI in defence should be questioned most critically?
That AI is inherently unsafe. That it will inevitably get out of control. That it can only be controlled through maximum restriction.
The opposite is true: AI can be built safely. Much of this is already regulated in law. What we lack is the courage to implement it.
Interview by Corinne Schindlbeck