AI Research
CyCon 2025 Series – Legal Reviews of Military Artificial Intelligence Capabilities

Editors’ note: This post is part of a series that features presentations at this year’s 17th International Conference on Cyber Conflict (CyCon) in Tallinn, Estonia. Its subject will be explored further as part of a chapter in the forthcoming book International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay. Kubo Mačák’s introductory post is available here.
Conversations among States about technological developments in how wars are fought invariably involve a discussion of the lawfulness or otherwise of those technologies under international law. During the drafting of the 1977 Additional Protocol I to the Geneva Conventions (AP I), some States proposed the creation of a special international mechanism to assess the lawfulness of new weapons. This proposal did not meet with widespread support. Instead, Article 36 of AP I now obliges High Contracting Parties to conduct national legal reviews of new weapons, means and methods of warfare.
In the lead-up to the AP I negotiations, meetings of governmental experts were concerned with future developments such as “geophysical, ecological, electronic and radiological warfare as well as with devices generating radiation, microwaves, infrasonic waves, light flashes and laser beams.” They were also mindful of the prospect that technological change “leads to the automation of the battlefield in which the soldier plays an increasingly less important role.” The rapid development and adoption of artificial intelligence (AI) in the military domain over the past few years testifies to the prescience of the experts. It also demonstrates a continuation of the pattern of legal concerns accompanying technological change.
States that adopt AI for military purposes may have to make some variations to the usual approach to legal reviews to account for the specific characteristics of this technology. In this post we discuss the characteristics of AI that may necessitate a tailored approach to legal reviews, and share, non-exhaustively, some ways in which States can enhance the effectiveness of legal reviews of AI-enabled military capabilities. The post anticipates the forthcoming Good Practices in the Legal Review of Weapons, developed together with Dr. Damian Copeland, Dr. Natalia Jevglevskaja, Dr. Lauren Sanders, and Dr. Renato Wolf, and reflects the presentation that Netta Goussac delivered at CyCon 2025 as part of a panel titled “International Law Perspectives on AI in Armed Conflict: Emerging Issues.”
Legal Reviews are Central to Governance of Military AI
Effective legal reviews are important because they can be a potent safeguard against the development and adoption of AI capabilities that are incapable of being used in compliance with international law regulating warfare. A key mechanism for implementing international humanitarian law (IHL) at the national level, legal reviews are a legal obligation for parties to AP I (art. 36) but can also be a part of a State’s general efforts to ensure the effective implementation of its IHL obligations, chiefly the treaty and customary rules relating to means and methods of warfare. Moreover, openness about the way in which legal reviews are carried out—even if their results cannot be revealed—serves as an important transparency and confidence-building measure.
The centrality of legal reviews in how States think about military AI is reflected in the frequent reference to them in recent national statements, as well as collective statements such as the Blueprint for Action adopted at the 2024 Summit on Responsible AI in the Military Domain (para. 11) and the Political Declaration on Responsible Military Use of AI and Autonomy (para. B).
The Need for a Tailored Approach
The term “AI” is frequently used to denote computing techniques that perform tasks that would otherwise require human intelligence. It is a poorly understood yet ubiquitous umbrella term. Moreover, the “AI effect” or “odd paradox” means that what is at first (aspirationally) considered AI becomes simply “software” as soon as it usefully performs a function. The development, acquisition and employment of AI capabilities by militaries may therefore be seen as an “old problem.” Militaries have been using software for a variety of tasks for decades, without significant concerns regarding legality. However, AI has some distinctive characteristics as a technology, in the applications in which it is used, and in the way those applications are acquired and employed by militaries. These characteristics have implications for how States can review whether the AI-enabled capabilities they develop or acquire can be used lawfully by their militaries. In this post we highlight four of these characteristics, though there are more.
The first characteristic relates to the wide range of applications of AI that are or may be of use to militaries. This means that States need to make decisions about what kinds of military AI applications will be subjected to legal reviews and the rules against which such capabilities will be assessed. Areas where AI has generated important opportunities for militaries include intelligence, surveillance and reconnaissance (ISR), maintenance and logistics, command and control (including targeting), information and electronic warfare, and autonomous systems. While some of these applications align neatly with categories of weapons, means and methods of warfare that States subject to legal reviews (e.g. autonomous weapon systems that rely on AI to identify, select or apply force to targets), others don’t (e.g. AI-enabled vessels or vehicles that are designed for, say, ISR but not the infliction of physical harm).
Relatedly, the wide range of applications means that the use of AI by militaries engages a broader range of international law rules. Traditionally, the IHL rules relating to means and methods of warfare (general and specific prohibitions), and rules of arms control and disarmament law, have been at the centre of legal reviews. When it comes to military AI applications, other norms of IHL may become relevant (especially rules and principles on the conduct of hostilities, i.e. distinction, proportionality and precautions), as well as other branches of international law, such as international human rights law, international environmental law, law of the sea, space law, and the law on the use of force.
The second characteristic relates to the reliability of military AI applications. The ability to carry out a legal review requires a full understanding of the relevant capability, including the ability to foresee its effects in the normal or expected circumstances of use. But the technical performance of an AI capability can be unreliable and difficult to evaluate. The lack of transparency in how AI—and particularly machine learning—systems function complicates the traditional and important task of testing and evaluation. In the absence of an explanation for how a system reaches its output from a given input, assessing the system’s reliability and foreseeing the consequences of its use can be difficult, if not impossible. This complicates the task of those conducting legal reviews in advance of employment of the capabilities, as well as of legal advisers supporting military operations in real time.
Third, the development and acquisition of AI-based systems demands an iterative approach. This is a characteristic of the industry in which AI capabilities are being developed, as well as of the technology itself, which requires changes over time to maintain or improve safety and operational effectiveness. The acquisition and employment of some AI-enabled military capabilities is therefore more akin to how militaries procure services than to how they procure goods, potentially complicating the linear process of legal reviews within the acquisition/procurement process.
The final characteristic of military AI that we will mention here is the role played by industry actors, that is, entities outside of a State’s government or military. The obligation to legally review new weapons, means and methods of warfare remains with States, but industry actors play a crucial role in the process, particularly in the testing, evaluation, verification, and validation of AI capabilities. As we pointed out in a report we co-authored in 2024, “having designed and developed a particular weapon or weapon system, industry may have extensive amounts of performance data available that could be used in a legal review.” When information and expertise about AI-enabled military capabilities sits outside of the military itself, it becomes critical for States to plan whether and how to make use of this information and expertise, including as part of legal reviews. Our research indicates there are some barriers to information sharing between States and industry, including contractual and proprietary issues, that States will need to think through.
Enhancing Effectiveness Through a Tailored Approach
To fulfil the potential of legal reviews as a safeguard against AI-enabled military capabilities that cannot be used in compliance with international law, and to facilitate compliance with their international obligations when developing and using military AI capabilities, States will need to adapt their approach to the characteristics of AI. This adaptation is relevant to all States that are developing or using AI-enabled military capabilities, whether they already conduct legal reviews and wish to strengthen their processes, or are considering whether and how to conduct legal reviews of military AI capabilities.
In our forthcoming publication, we compile a list of good practices that can enhance the efficacy of legal reviews. Here, we preview some of the observations underpinning these good practices that are relevant to States developing or acquiring military AI capabilities.
Legal Reviews are Part of a Decision-Making Process
While the publicly available State practice is relatively limited, our research—which draws on submissions made to the Group of Governmental Experts on lethal autonomous weapon systems, and consultations with governmental, industry, civil society and academic experts—indicates that States that conduct legal reviews view them as part of their broader process for design, development and acquisition of military capabilities. In general, the output of a legal review (in effect, legal advice) is complemented by advice from other sources, including technical, operational, strategic, policy or ethical advice.
Where a legal review concludes that the normal or expected use of an AI-enabled military capability is unlawful, such advice should overrule any advice militating towards employment of the capability. However, where a legal review concludes that the normal or expected use of an AI capability is lawful, or lawful under certain circumstances or conditions, a decision-maker may have regard to additional inputs when deciding whether to authorise the relevant stage of development or acquisition.
While this observation is not novel to AI-enabled military capabilities, integration of legal reviews into a broader decision-making process is particularly important when it comes to the development and use of such capabilities and has triggered the creation of new policy frameworks in some States.
The Decision Whether and How to Conduct a Legal Review Need Not be Legalistic
A State could conduct a legal review of a military AI capability because of the State’s specific obligation under international law to undertake the review (e.g. AP I, art. 36), the State’s interpretation of its general obligations to implement IHL, or the State’s national law or policy. For reasons that we do not have the space to mention here, the language of Article 36 remains a useful guidepost for States in conducting legal reviews, no matter the basis upon which a State conducts them. Efforts to fulfil the requirements of Article 36 may lead States and experts to interpret the text to determine whether a particular military AI capability is “new,” or is a “weapon, means or method of warfare,” and whether a legal review should be limited to the question of whether a capability is “prohibited” under international law, as compared with the question of the circumstances under which it can be used lawfully.
In our view, an overly narrow or legalistic approach to whether a legal review is required and how it is to be conducted may limit the utility of this important tool. As Damian Copeland wrote for Articles of War in 2024, States can take a functional approach to legal reviews. This would mean analysing the functions of a particular AI-enabled capability to determine whether those functions are regulated by international law in any way and assessing the capability to ensure the ability to comply with those rules.
Legal Reviews as Part of Accelerated Processes
States may be adapting (or considering whether to adapt) procurement pathways in response to pressure to accelerate procurement, deployment, and scaling of military AI capabilities. A key challenge for States, and one which we think could be a line of effort within initiatives to govern the development and use of AI-enabled military capabilities, is how to manage time and resources needed to integrate legal reviews in a non-linear and iterative development and acquisition process where reliability is difficult to assess. It is critical that States carefully and systematically locate and synchronise legal reviews as part of these pathways, in order to ensure that such reviews can complement and feed into broader policy processes, and continue to be an effective safeguard against the development and adoption of AI-enabled capabilities that are incapable of being used in compliance with international law.
Realising the Potential of Legal Reviews as a Safeguard and Mechanism
Military adoption of AI-enabled capabilities, like adoption of earlier technologies in warfare, has prompted a discussion of the lawfulness of such capabilities under international law and how to assess whether a capability is able to be used in compliance with a State’s obligations. Legal reviews are a potent tool in preventing the employment of unlawful capabilities in armed conflict as well as facilitating compliance with IHL and other relevant international law rules.
Conducting legal reviews at the national level does not perform the same role, nor have the same effect, as adoption of governance or regulatory measures at the international level. However, legal reviews can (and will inevitably) support the implementation of policy measures (of any legal status) adopted at the international level. This is particularly true while there is no agreed verification regime among States with respect to their military AI capabilities.
To make full use of this tool, States will need to tailor how they conduct legal reviews at the national level, to acknowledge both persistent challenges in conducting legal reviews and novel challenges associated with the characteristics of AI. At the international level, openness about how legal reviews of AI-enabled capabilities are carried out (if not the outcomes of specific reviews) should be considered in initiatives to govern the development and use of AI-enabled military capabilities.
***
Netta Goussac is a Senior Researcher in the SIPRI Governance of Artificial Intelligence Programme.
Rain Liivoja is a Professor of Law at the University of Queensland, and a Senior Fellow with the Lieber Institute for Law and Land Warfare at West Point.
The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: Getty Images via Unsplash
AI Research
Training on AI, market research, raising capital offered through Jamestown Regional Entrepreneur Center – Jamestown Sun

Several training events will be held for the public through the Jamestown Regional Entrepreneur Center in September.
On Sept. 9-12, a “Get Found Masterclass” will be offered to the public. This four-part workshop series is designed specifically for small business service providers who are focused on growth through smarter systems, trusted tools and clear visibility strategies. Across four focused sessions, participants will learn how to protect their brand while embracing automation, use Google’s free tools to enhance online visibility and send the right visibility signals to today’s AI-powered search engines. Participants will discover how AI can support small businesses, how to build ethical systems that scale, and what really influences trust, authority and ranking.
On Sept. 10, a stand-alone workshop on “Market and Customer Research” will be held. This workshop will guide participants on where and how to find customers. The presentation will also show how to find out, for free, which SEO keywords competitors are using. Participants will compare current methods of social media marketing and discuss the variety of free market research tools that offer critical information on their industry and customers.
On Sept. 23, a workshop on “AI Tools for Social Media Marketing” is planned. Participants will discuss the use of tools like ChatGPT to brainstorm post ideas, captions and scripts; Lately.ai to repurpose long-form content into social media snippets; Canva + Magic Studio for fast, on-brand visuals; and Metricool or Later for AI-assisted scheduling and analytics. The aim is to automate routine work so participants can focus on connection and creativity.
On Sept. 24, a high-level presentation will be led by Kat Steinberg, special counsel, and Amy Reischauer, deputy director of the Securities and Exchange Commission’s Office of the Advocate for Small Business Capital Formation, on the regulatory framework and SEC resources surrounding raising capital. They will also share broad data from the office’s most recent annual report on what has been happening in capital raising in recent years. The office seeks to advocate for and advance the interests of small businesses seeking to raise capital, and of the investors who support them, at the SEC and in the capital markets. It develops comprehensive educational materials and resources while actively engaging with industry stakeholders to identify both obstacles and emerging opportunities in the capital formation landscape. Through events like this, the office creates platforms for meaningful dialogue, collecting valuable feedback and disseminating insights about capital-raising pathways for small businesses, from early-stage startups to established small public companies.
To register for these training events, visit www.JRECenter.com/Events. Follow the Jamestown Regional Entrepreneur Center at Facebook.com/JRECenter, on Instagram at JRECenter and on LinkedIn. Questions may be directed to Katherine.Roth@uj.edu.
AI Research
1 Brilliant Artificial Intelligence (AI) Stock Down 30% From Its All-Time High That’s a No-Brainer Buy

ASML is one of the world’s most critical companies.
Few companies’ products are as critical to the modern world’s technological infrastructure as those made by ASML. Without the chipmaking equipment the Netherlands-based manufacturer provides, much of the world’s most innovative technology wouldn’t be possible. That makes it one of the most important companies in the world, even if many people have never heard of it.
Over the long term, ASML has been a profitable investment, but the stock has struggled recently — it’s down by more than 30% from the all-time high it touched in July 2024. I believe this pullback presents an excellent opportunity to buy shares of this key supporting player for the AI sector and other advanced technologies.
Image source: Getty Images.
ASML has been a victim of government policies around the globe
ASML makes lithography machines, which trace out the incredibly fine patterns of the circuits on silicon chips. Its top-of-the-line extreme ultraviolet (EUV) lithography machines are the only ones capable of printing the newest, most powerful, and most feature-dense chips. No other companies have been able to make EUV machines thus far. They are also highly regulated, as Western nations don’t want this technology going to China, so the Dutch and U.S. governments have put strict restrictions on the types of machines ASML can export to China or its allies. In fact, even tighter new regulations were put in place last year that prevented ASML from servicing some machines that it previously was allowed to sell to Chinese companies.
As a result of these export bans, ASML’s sales to one of the world’s largest economies have been curtailed. This led to investors bidding the stock down in 2024 — a drop it still hasn’t recovered from.
2025 has been a relatively strong year for ASML’s business, but tariffs have made it challenging to forecast where matters are headed. Management has been cautious with its guidance for the year as it is unsure of how tariffs will affect the business. In its Q2 report, management stated that tariffs had had a less significant impact in the quarter than initially projected. As a result, ASML generated 7.7 billion euros in sales, which was at the high end of its 7.2 billion to 7.7 billion euro guidance range. For Q3, the company says it expects sales of between 7.4 billion and 7.9 billion euros, but if tariffs have a significantly negative impact on the economic picture, it could come up short.
Given all the planned spending on new chip production capacity to meet AI-related demand, investors would be wise to assume that ASML will benefit. However, the company is staying conservative in its guidance even as it prepares for growth. This conservative stance has caused the market to remain fairly bearish on ASML’s outlook even as all signs point toward a strong 2026.
This makes ASML a buying opportunity at its current stock price.
ASML’s valuation hasn’t been this low since 2023
Relative to its range over the last five years, ASML trades at historically low trailing and forward price-to-earnings (P/E) ratios.
ASML PE Ratio data by YCharts.
With expectations for ASML at low levels, investors shouldn’t be surprised if its valuation rises sometime over the next year, particularly if management’s commentary becomes more bullish as demand increases in line with chipmakers’ efforts to expand their production capacity.
This could lift ASML back into its more normal valuation range in the mid-30s, which is perfectly acceptable given its growth level, considering that it has no direct competition.
ASML is a great stock to buy now and hold for several years or longer, allowing you to reap the benefits of chipmakers increasing their production capacity. Just because the market isn’t that bullish on ASML now, that doesn’t mean it won’t be in the future. This rare moment offers an ideal opportunity to load up on shares of a stock that I believe is one of the best values in the market right now.
AI Research
AI’s not ‘reasoning’ at all – how this team debunked the industry hype

ZDNET’s key takeaways
- We don’t entirely know how AI works, so we ascribe magical powers to it.
- Claims that Gen AI can reason are a “brittle mirage.”
- We should always be specific about what AI is doing and avoid hyperbole.
Ever since artificial intelligence programs began impressing the general public, AI scholars have been making claims for the technology’s deeper significance, even asserting the prospect of human-like understanding.
Scholars wax philosophical because even the scientists who created AI models such as OpenAI’s GPT-5 don’t really understand how the programs work — not entirely.
AI’s ‘black box’ and the hype machine
AI programs such as LLMs are infamously “black boxes.” They achieve a lot that is impressive, but for the most part, we cannot observe all that they are doing when they take an input, such as a prompt you type, and they produce an output, such as the college term paper you requested or the suggestion for your new novel.
In the breach, scientists have applied colloquial terms such as “reasoning” to describe the way the programs perform. In the process, they have either implied or outright asserted that the programs can “think,” “reason,” and “know” in the way that humans do.
In the past two years, the rhetoric has overtaken the science as AI executives have used hyperbole to twist what were simple engineering achievements.
OpenAI’s press release last September announcing its o1 reasoning model stated that “Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem,” so that “o1 learns to hone its chain of thought and refine the strategies it uses.”
It was a short step from those anthropomorphizing assertions to all sorts of wild claims, such as OpenAI CEO Sam Altman’s comment, in June, that “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence.”
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The backlash from AI researchers
There is a backlash building, however, from AI scientists who are debunking the assumptions of human-like intelligence via rigorous technical scrutiny.
In a paper published last month on the arXiv pre-print server and not yet reviewed by peers, the authors — Chengshuai Zhao and colleagues at Arizona State University — took apart the reasoning claims through a simple experiment. What they concluded is that “chain-of-thought reasoning is a brittle mirage,” and it is “not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching.”
The term “chain of thought” (CoT) is commonly used to describe the verbose stream of output that you see when a large reasoning model, such as OpenAI’s o1 or DeepSeek R1, shows you how it works through a problem before giving the final answer.
That stream of statements isn’t as deep or meaningful as it seems, write Zhao and team. “The empirical successes of CoT reasoning lead to the perception that large language models (LLMs) engage in deliberate inferential processes,” they write.
But, “An expanding body of analyses reveals that LLMs tend to rely on surface-level semantics and clues rather than logical procedures,” they explain. “LLMs construct superficial chains of logic based on learned token associations, often failing on tasks that deviate from commonsense heuristics or familiar templates.”
The term “chains of tokens” is a common way to refer to a series of elements input to an LLM, such as words or characters.
Testing what LLMs actually do
To test the hypothesis that LLMs are merely pattern-matching rather than actually reasoning, they trained OpenAI’s older, open-source GPT-2 model, released in 2019, from scratch, using an approach they call “data alchemy.”
The model was trained from the beginning to manipulate just the 26 letters of the English alphabet, “A, B, C,…etc.” That simplified corpus lets Zhao and team test the LLM with a set of very simple tasks, all of which involve manipulating sequences of the letters, such as shifting every letter a certain number of places so that “APPLE” becomes “EAPPL.”
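To make the transformation concrete, here is a minimal illustrative sketch, not code from the paper, of this kind of letter-sequence task; it assumes the shift is a cyclic rotation of positions within the sequence, which matches the “APPLE” to “EAPPL” example.

```python
# Illustrative sketch only (not code from the paper): one way to express the kind of
# letter-sequence task described above, assuming the shift is a cyclic rotation of
# positions within the sequence.

def cyclic_shift(sequence: str, places: int) -> str:
    """Rotate the sequence to the right by `places` positions."""
    places %= len(sequence)
    if places == 0:
        return sequence
    return sequence[-places:] + sequence[:-places]

print(cyclic_shift("APPLE", 1))   # EAPPL
print(cyclic_shift("APPLE", 13))  # 13 % 5 == 3, so the result is PLEAP
```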
Using the limited number of tokens and the limited set of tasks, Zhao and team vary which tasks the language model is exposed to in its training data and which tasks it sees only when the finished model is tested, such as “Shift each element by 13 places.” It is a test of whether the language model can reason its way through a task even when confronted with new, never-before-seen tasks.
They found that when the tasks were not in the training data, the language model failed to perform them correctly using a chain of thought. The AI model fell back on tasks that were in its training data, and its “reasoning” sounded good, but the answers it generated were wrong.
As Zhao and team put it, “LLMs try to generalize the reasoning paths based on the most similar ones […] seen during training, which leads to correct reasoning paths, yet incorrect answers.”
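As a toy illustration of that failure mode, not the authors’ model or evaluation code, imagine a system that has only ever seen one shift amount during training and simply reuses it when asked for a different one; the output looks like a confident transformation, but it does not match the requested task.

```python
# Toy illustration only (not the authors' model or code): a stand-in "model" that
# memorized the shift-by-1 transformation during training and reuses it for any
# requested shift, mimicking the failure mode described above.

def cyclic_shift(sequence: str, places: int) -> str:
    places %= len(sequence)
    return sequence if places == 0 else sequence[-places:] + sequence[:-places]

def toy_model(sequence: str, requested_places: int) -> str:
    # Ignores the requested shift and applies the only pattern seen in training.
    return cyclic_shift(sequence, 1)

prediction = toy_model("APPLE", 13)   # stand-in model's answer for the unseen task
expected = cyclic_shift("APPLE", 13)  # correct answer for "shift by 13"
print(prediction, expected, prediction == expected)  # EAPPL PLEAP False
```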
Specificity to counter the hype
The authors draw some lessons.
First: “Guard against over-reliance and false confidence,” they advise, because “the ability of LLMs to produce ‘fluent nonsense’ — plausible but logically flawed reasoning chains — can be more deceptive and damaging than an outright incorrect answer, as it projects a false aura of dependability.”
Second: try out tasks that are unlikely to have been contained in the training data, so that the AI model is stress-tested.
What’s important about Zhao and team’s approach is that it cuts through the hyperbole and takes us back to the basics of understanding what exactly AI is doing.
When the original research on chain-of-thought prompting, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” was performed by Jason Wei and colleagues at Google Brain in 2022 — research that has since been cited more than 10,000 times — the authors made no claims about actual reasoning.
Wei and team noticed that prompting an LLM to list the steps in a problem, such as an arithmetic word problem (“If there are 10 cookies in the jar, and Sally takes out one, how many are left in the jar?”), tended to lead to more correct solutions, on average.
They were careful not to assert human-like abilities. “Although chain of thought emulates the thought processes of human reasoners, this does not answer whether the neural network is actually ‘reasoning,’ which we leave as an open question,” they wrote at the time.
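For concreteness, here is a minimal sketch of the kind of prompting Wei and team studied; the wording is illustrative rather than taken from their paper, and no model is actually called.

```python
# Illustrative sketch of chain-of-thought prompting: a worked, step-by-step example is
# placed before a new question so the model imitates that format when answering.
# Wording is illustrative only; no model is called here.

cot_example = (
    "Q: If there are 10 cookies in the jar, and Sally takes out one, "
    "how many are left in the jar?\n"
    "A: The jar starts with 10 cookies. Sally takes out 1, and 10 - 1 = 9. "
    "The answer is 9.\n"
)

new_question = "Q: A train has 8 cars and 3 are removed. How many cars remain?\nA:"

# The full prompt sent to an LLM would be the worked example followed by the new question.
prompt = cot_example + "\n" + new_question
print(prompt)
```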
Since then, Altman’s claims and various press releases from AI promoters have increasingly emphasized the human-like nature of reasoning, using casual and sloppy rhetoric that doesn’t respect Wei and team’s purely technical description.
Zhao and team’s work is a reminder that we should be specific, not superstitious, about what the machine is really doing, and avoid hyperbolic claims.