AI Insights
CyCon 2025 Series – Deciding with AI Systems: Rethinking Dynamics in Military Decision-Making

Editors’ note: This post is part of a series that features presentations at this year’s 17th International Conference on Cyber Conflict (CyCon) in Tallinn, Estonia. Its subject will be explored further as part of a chapter in the forthcoming book International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay. Kubo Mačák’s introductory post is available here.
International Humanitarian Law (IHL) builds on a delicate yet essential balance between military necessity and humanitarian imperatives. This balance is increasingly under pressure, marked by a concerning trend whereby IHL, originally intended for protection, is instead being used to justify destruction. Specifically, recent technological developments risk introducing a new guise of protection: Artificial Intelligence Decision Support Systems (AI DSS), marketed as “tools” for increased accuracy and speed in military decision-making, including decisions on the use of force.
Current legal discussions rightly interrogate the risks of AI DSS in military decision-making. I suggest, however, that the challenges posed by AI DSS in military decision-making serve as a catalyst to critically reevaluate the role of humans in legally relevant military decision-making. Such a shift in approach would restore the balance struck by IHL and ensure that, as we progress, we do not abandon its protective core.
The Ideal Decision-Maker within IHL
The rules and principles of IHL primarily address States. Still, some provisions imply the presence of a human responsible for ensuring that means and methods of warfare comply with IHL norms. One of the few evident examples of such presence can be found in Article 57 of Additional Protocol I, particularly in its reference to “those who plan or decide upon an attack.” Though rarely made explicit within IHL, there is a widely held assumption among its expert community that legally significant decisions are ultimately made by human actors.
Importantly, this human decision-maker is not just any “human.” As my doctoral research explores, this figure is shaped by numerous assumptions. First, it is widely held that legal decisions are based on a rational decision-making process, suggesting that the interpretation of law requires an objective and value-neutral reasoning process to achieve an optimal outcome. IHL thus envisions a normative ideal of the human, devoid of emotions and personal biases, as being uniquely suited to make decisions in the context of armed conflict.
Second, this model of reasoning is closely tied to the belief that humans are capable of exercising control over technology. IHL presumes that compliance hinges on a level of consciousness whereby humans remain sufficiently aware of—or in “control” of—the output of the technological systems in use.
Together, these assumptions paint a specific and idealised picture of the human decision-maker within IHL. Given the emerging role of the human-AI DSS relationship in critical decision-making, this vision demands closer examination.
AI DSS: A “Tool” for Scientific Truth and Objectivity
Importantly, this dominant vision of IHL’s ideal decision-maker aligns with claims made by prominent AI scientists and business leaders. They argue that algorithmic decision-making, which relies on methods such as deduction, induction, and analogy, enhances legal certainty and facilitates neutral application by overcoming human bias and error.
While recognising that current AI systems have limitations, many of these stakeholders contend that technological neutrality can be preserved. Specifically, they maintain that issues with AI DSS can be addressed through technological solutions. For example, in addressing concerns about human operators’ limited understanding and trust in AI outputs, the Explainable AI programme published by the Defense Advanced Research Projects Agency (DARPA) claims that
[n]ew machine-learning systems will have the ability to explain their rationale, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future.
It follows that as AI DSS become more refined, they are expected to outperform humans across an increasing range of decision-making tasks. This depiction of AI DSS capabilities then implies that less human judgment will be needed, including in critical tasks such as targeting decisions. Unsurprisingly, military organisations appear eager to adopt these narratives. They tend to embrace this idealised image of AI DSS as entities that embody scientific neutrality in legally pertinent decisions, thereby liberating military decisions from human bias.
It is critical to interrogate whether this depiction of technological capability is accurate. If AI DSS are expected to provide the objectivity and neutrality that IHL seems to aspire to, what role remains for humans in legally relevant decision-making processes? And does the normative pursuit of a strict binary between human and technology, along with the ideal of rationality, ultimately risk undermining the protective essence of IHL?
Towards a Co-Constitutive Approach
My doctoral research suggests that a more reality-sensitive understanding of the human-AI DSS relationship can help us address these questions. We must first acknowledge that these systems do not exist in isolation. They are shaped by human choices, whether in prioritising certain technologies as more “useful” or “profitable” than others, or in the countless assumptions embedded by developers as they translate data into action. The upshot: every AI DSS reflects human values, priorities, and design decisions.
Yet this relationship is not one-sided. AI DSS also shape human behaviour, often subtly. These systems can frame decisions, influence trust, and steer action, frequently beyond one’s conscious awareness. Whether we follow a smartwatch’s rest suggestion, trust Google Maps’ directions, or rely on Netflix’s “top picks,” we all relate daily to such systems that affect our behaviour in predetermined ways.
From this mutually constitutive relationship, two crucial insights emerge. First, there is no neutral component in the human-AI DSS relationship: neither technology nor its users exist outside the social context in which they operate. Second, it invites us to reevaluate our conception of the human-technology relationship within IHL. Rather than viewing the relationship in terms of “control” at a fixed point, we should understand it as distributed and dynamic, extending across the entire lifecycle of a system, from the choice of investing in one system rather than another to development, deployment, and post-use evaluation.
This raises an important question: How can these insights contribute to legal discussions on the role of humans operating alongside AI DSS in legally relevant military decision-making?
The Human-AI DSS Relationship: An Opportunity
While my PhD explores the various opportunities offered by this shift in approach towards the human-AI relationship within IHL—and these will be further elaborated in the context of AI DSS in military decision-making in an upcoming book chapter—I want to highlight a frequently overlooked aspect, namely, how reliance on AI DSS may subtly, yet significantly, reshape the contours of IHL itself.
Recognising the mutual influence between humans and AI DSS means we cannot downplay legal concerns about AI DSS outputs. The fact that these outputs do not directly “classify” an individual’s status under IHL does not mean their impact is minimal; far from it. In practice, AI DSS outputs are likely to affect, even if indirectly, how military decision-makers categorise and assess situations under IHL.
One relevant factor is that users may overlook the fact that the probabilistic logic these systems use for decision-making tasks, such as prediction, differs significantly from human normative logics of reasoning. Another factor relates to the assumptions embedded in those systems. Consider, for instance, an AI DSS that employs remote biometrics to track gait or behaviour in order to identify potential enemy threats. To uphold its promise of accuracy and its capacity to enhance compliance with IHL, such a system must operate with accurate assumptions about what constitutes a “threat” and who counts as part of the “civilian population.” Unfortunately, these systems frequently embed ableist assumptions, overlooking the diversity of human bodies and behaviours present in armed conflicts.
As investigated here, a civilian person with a disability—who walks differently, uses assistive devices, or behaves in ways not represented in training data—risks being misidentified as a potential “threat.” When the AI DSS output aligns closely with human decision-makers’ own views or assumptions, such misclassifications are likely to go unnoticed, resulting in significant civilian harm and potentially amounting to violations of IHL. This risk is amplified by a well-documented human tendency toward automation bias, where individuals tend to uncritically trust technology, especially in high-pressure situations, such as targeting decisions.
Safeguards, including time and other restrictions that require human decision-makers to critically engage with AI DSS output, may mitigate such risks, but are unlikely to be sufficient. The preservation of IHL’s protective aims demands safeguards that address how assumptions, biases, and other embedded logics permeate the entire human–AI lifecycle, including the application of IHL.
This is precisely the point where recognising the limits of IHL’s idealised notion of the human decision-maker—one who is perfectly rational and in control of technological “tools”—becomes crucial. Especially as long as technology companies, driven by vested interests around profit, play a central role in developing these technologies, a risk will remain that IHL’s sine qua non of protection will be undermined by systems presented as superior decision-makers based on a guise of accuracy and objectivity. The danger is that legal reasoning may quietly shift into algorithmically driven logic focused on optimisation and speed, without human users noticing. As a result, our understanding of IHL and its foundations may become increasingly misaligned with its application, all while we remain unaware of this divergence.
Concluding Thoughts
A renewed approach to the human-technology relationship within IHL is indispensable to shaping IHL’s culture of compliance. Instead of permitting industry interests and technological optimism to dominate that culture, this perspective encourages greater faith that we (humans) already possess everything needed to make it a lived reality. Central to this perspective is a reflection on how AI DSS should relate to us and how we, in turn, should engage with these systems to foster a legal culture apt for responding to the evolving reality of military decision-making.
As explored elsewhere, I believe AI DSS have the potential to significantly enhance military decision-making and protect civilians without compromising operational effectiveness. Much of this potential lies in decisions taken well beyond the tactical level, and it should be pursued with an ambition to upskill human decision-making capabilities. Research is ongoing on how non-invasive forms of AI DSS can help humans manage their emotions more effectively in decision-making. These systems aim to increase awareness and equip individuals with ways to take ownership of their emotions. This is just one example of the many ways AI DSS can assist, rather than displace, humans in making emotionally complex IHL decisions in extreme situations.
In today’s context, where technologies are not merely “tools” for task execution but can fundamentally alter how legally relevant military decisions are made, such reflections need to form the foundation of any resilient culture of compliance with IHL.
***
Anna Rosalie Greipl is a Researcher at the Academy of International Humanitarian Law and Human Rights (Geneva Academy).
The views expressed are those of the author, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: U.S. Cyber Command, Josef Cole
Cyberattack on Evertec’s Sinqia Hits HSBC, Others in Brazil

Hackers on Friday broke into Sinqia, a financial technology provider owned by Evertec, attempting to steal around 420 million reais ($77.4 million) from several Brazilian financial institutions including HSBC Holdings Plc’s local operations, O Globo reported.
Cyber criminals invaded Sinqia’s systems used by Brazilian financial institutions and attempted to make several transfers through a fast-growing electronic payments system known as Pix. Sinqia confirmed the attack but said there was no evidence of suspicious activity in any system besides Pix.
Metal Gear Solid back with remake years after Kojima left Konami

Tom Gerken, Technology reporter

Metal Gear is one of the best-selling video game series in history, shifting more than 60 million copies.
The series pioneered cinematics in gaming by blending cutting-edge cutscenes, voice acting and dynamic camera angles to create something that would have looked more at home on the big screen at the time.
Metal Gear tackled themes not commonly seen in games, such as nuclear disarmament and child soldiers, and posed philosophical questions while also leveraging offbeat humour.
The games would often break the fourth wall and ask players to find solutions to puzzles in unusual ways – such as looking on the back cover of the game’s physical box.
The series’ significant place in gaming history meant fans were stunned when its creator Hideo Kojima quit game publisher Konami in an acrimonious split in 2015.
One of gaming’s biggest titles was left directionless – and there’s been no game in the best-selling series since.
But now, a decade later, Konami has released a remake of the third game in the series: Metal Gear Solid Delta.
So what happened between Konami and Kojima, and how does the new game hold up without its original creator?
Why did Kojima leave Konami?
“The impact Metal Gear has had on game-making makes it one of the most heralded entertainment franchises in the world, and made Hideo Kojima one of the industry’s most famous creators,” industry expert Christopher Dring told the BBC.
With such success, you might think it was a match made in heaven, but there were issues bubbling under the surface.
While nothing has been said publicly, one generally accepted theory behind the split relates to the spiralling cost of 2015’s Metal Gear Solid V, estimated by some at more than $80m (£59m) – a very significant development cost at the time.
It is not known exactly what happened between Konami and Kojima, but the publisher was clearly unhappy with the amount of money he was spending to make a single game – Kojima’s internal studio was even removed from promotional materials for Metal Gear Solid V at the time.
Konami got the game out the door, but it seemed to be scaled back from its original vision despite the high cost, with repeated levels and a third chapter that never emerged.
Even so, the game still received excellent reviews and won several awards, but the rift between company and creator seemed unfixable.
And in an act that proved highly controversial – and perhaps shows how heated things had become behind the scenes – when Metal Gear Solid V won an award, Konami informed the developer he was not allowed to collect it.

A few months later, Kojima was gone, and in the years that followed, his former studio pivoted.
“Konami shifted its strategy for a while, away from console games, and focused its efforts on the amusements markets, things like pachinko machines,” Mr Dring said.
“They also focused increasingly on mobile.”
It meant Konami’s other classic franchises like Castlevania and Silent Hill also went without new games for a decade.
Meanwhile, Kojima’s new studio signed a blockbuster deal with Sony to develop the monster hit Death Stranding for PlayStation, followed by a sequel this year.
Why a remake now?
Gaming has pivoted towards remakes in recent years.
High-profile games like Resident Evil 4, Final Fantasy VII and Demon’s Souls, all classics in their day, have been remade with the benefits of modern graphics and game design to big fanfare – and strong sales figures.
“It’s a hugely lucrative and growing sector,” said Mr Dring.
“The industry is getting older, gamers are entering middle age and are nostalgic for classic titles.”
Mr Dring points out that one of the best-selling games of the year so far is The Elder Scrolls IV: Oblivion Remastered, a remaster of a classic role-playing game (RPG) from 2006, which has sold millions of copies since its release in April.
Konami has begun a return to publishing games by focusing in this area, with a Silent Hill remake coming last year and a new Survival Kids game released earlier in 2025.
So it is a potentially lucrative move – but is Metal Gear Solid 3: Snake Eater the right game to remake?

Fans of the series told the BBC Metal Gear Solid 3 was chosen for good reason.
YouTuber Zak Ras said there was “immense significance” behind the game.
“Most people will say their favourite entry to the series is either Metal Gear Solid 1 or 3,” he said.
“Story-wise, given that it’s the first prequel set at the very beginning of the series timeline, it’s one of the few entries you can go into completely blind with absolutely no required knowledge of the series, other than the very first Metal Gear from 1987.”
Ras said Metal Gear Solid 3 struck a good balance between gameplay and cinematic storytelling, making it a good choice for people who have never played a game in the series before.
For example, the game opens with an introduction heavily influenced by James Bond films, meaning new fans are eased into the series’ weirder elements.
And the brothers behind PythonSelkan Studios – known as Python & Selkan to their 122,000 YouTube subscribers – agreed.
“Completing the game was an incredible experience in itself,” they said. “Snake Eater’s gut-wrenching ending is what stood out most, leaving an impact on us that no other game had ever left before.”
“This game holds a special place in our hearts,” they added.
Metal Gear without Kojima
The brothers said, as lifelong fans of the series, they were “incredibly excited” by the announcement.
The pair are currently playing the remake, and have been “very impressed” by its improved graphics and audio.
They described the game as “truly a faithful recreation”, adding that it improved “the essence of the original without changing its fundamental structure”.

So far so good for Metal Gear Solid without Hideo Kojima – which Ras put down to the game being true to the original.
One example he highlights is that the voice performances have been kept the same, and players can choose whether to use the original control scheme or a more modern take.
“There’s no doubt it is Kojima’s directorial ‘genes’ that are being dominantly expressed here,” he said.
“Kojima expressed a desire to move on from Metal Gear since as early as MGS2 and leave the series in the hands of others to continue.
“It may have taken him another 14 years and five director credits for that to happen, but it is now reality.”
And however the remake fares with fans, one household won’t be picking up a new copy – Kojima himself has laughed off the suggestion that he would play the new game.

Bitcoin Proxy’s Chief Seeks Funding Fix as ‘Flywheel’ Falters

Simon Gerovich, who turned a struggling Japanese hotelier into a Bitcoin stockpiler and investor darling, is feeling the heat.