AI Research
CyCon 2025 Series – Legal Reviews of Military Artificial Intelligence Capabilities

Editors’ note: This post is part of a series that features presentations at this year’s 17th International Conference on Cyber Conflict (CyCon) in Tallinn, Estonia. Its subject will be explored further as part of a chapter in the forthcoming book International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay. Kubo Mačák’s introductory post is available here.
Conversations among States about technological developments in how wars are fought invariably involve a discussion of the lawfulness or otherwise of those technologies under international law. During the drafting of the 1977 Additional Protocol I to the Geneva Conventions (AP I), some States proposed the creation of a special international mechanism to assess the lawfulness of new weapons. This proposal did not meet with widespread support. Instead, Article 36 of AP I now obliges High Contracting Parties to conduct national legal reviews of new weapons, means and methods of warfare.
In the lead-up to the AP I negotiations, meetings of governmental experts were concerned with future developments such as “geophysical, ecological, electronic and radiological warfare as well as with devices generating radiation, microwaves, infrasonic waves, light flashes and laser beams.” They were also mindful of the prospect that technological change “leads to the automation of the battlefield in which the soldier plays an increasingly less important role.” The rapid development and adoption of artificial intelligence (AI) in the military domain over the past few years testifies to the prescience of the experts. It also demonstrates a continuation of the pattern of legal concerns accompanying technological change.
States that adopt AI for military purposes may have to make some variations to the usual approach to legal reviews to account for the specific characteristics of this technology. In this post we discuss the characteristics of AI that may necessitate a tailored approach to legal reviews, and share, non-exhaustively, some ways in which States can enhance the effectiveness of legal reviews of AI-enabled military capabilities. The post anticipates the forthcoming Good Practices in the Legal Review of Weapons, developed together with Dr. Damian Copeland, Dr. Natalia Jevglevskaja, Dr. Lauren Sanders, and Dr. Renato Wolf, and reflects the presentation that Netta Goussac delivered at CyCon 2025 as part of a panel titled “International Law Perspectives on AI in Armed Conflict: Emerging Issues.”
Legal Reviews are Central to Governance of Military AI
Effective legal reviews are important because they can be a potent safeguard against the development and adoption of AI capabilities that are incapable of being used in compliance with international law regulating warfare. A key mechanism for implementing international humanitarian law (IHL) at the national level, legal reviews are a legal obligation for parties to AP I (art. 36) but can also be a part of a State’s general efforts to ensure the effective implementation of its IHL obligations, chiefly the treaty and customary rules relating to means and methods of warfare. Moreover, openness about the way in which legal reviews are carried out—even if their results cannot be revealed—serves as an important transparency and confidence-building measure.
The centrality of legal reviews in how States think about military AI is reflected in the frequent reference to them in recent national statements, as well as collective statements such as the Blueprint for Action adopted at the 2024 Summit on Responsible AI in the Military Domain (para. 11) and the Political Declaration on Responsible Military use of AI and Autonomy (para. B).
The Need For a Tailored Approach
The term “AI” is frequently used to denote computing techniques that perform tasks that would otherwise require human intelligence. It is a poorly understood yet ubiquitous umbrella term. Moreover, the “AI effect” or “odd paradox” means that what is first (aspirationally) considered AI becomes simply “software” as soon as it usefully performs a function. The development, acquisition and employment of AI capabilities by militaries may therefore be seen as an “old problem.” Militaries have been using software for a variety of tasks for decades, without significant concerns regarding legality. However, some characteristics of AI as a technology, of the applications in which it is used, and of the way those applications are acquired and employed by militaries set it apart. This has implications for how States can review whether the AI-enabled capabilities they develop or acquire can be used lawfully by their militaries. In this post we will highlight four of these characteristics, though there are more.
The first characteristic relates to the wide range of applications of AI that are or may be of use to militaries. This means that States need to make decisions about what kinds of military AI applications will be subjected to legal reviews and the rules against which such capabilities will be assessed. Areas where AI has generated important opportunities for militaries include intelligence, surveillance and reconnaissance (ISR), maintenance and logistics, command and control (including targeting), information and electronic warfare, and autonomous systems. While some of these applications align neatly with categories of weapons, means and methods of warfare that States subject to legal reviews (e.g. autonomous weapon systems that rely on AI to identify, select or apply force to targets), others don’t (e.g. AI-enabled vessels or vehicles that are designed for, say, ISR but not the infliction of physical harm).
Relatedly, the wide range of applications means that the use of AI by militaries engages a broader range of international law rules. Traditionally, the IHL rules relating to means and methods of warfare (general and specific prohibitions), and rules of arms control and disarmament law, have been at the centre of legal reviews. When it comes to military AI applications, other norms of IHL may become relevant (especially rules and principles on the conduct of hostilities, i.e. distinction, proportionality and precautions), as well as other branches of international law, such as international human rights law, international environmental law, law of the sea, space law, and the law on the use of force.
The second characteristic relates to the reliability of military AI applications. The ability to carry out a legal review requires a full understanding of the relevant capability, including the ability to foresee its effects in the normal or expected circumstances of use. But the technical performance of an AI capability can be unreliable and difficult to evaluate. The lack of transparency in how AI—and particularly machine learning—systems function complicates the traditional and important task of testing and evaluation. In the absence of an explanation for how a system reaches its output from a given input, assessing the system’s reliability and foreseeing the consequences of its use can be difficult, if not impossible. This complicates the task of those conducting legal reviews in advance of employment of the capabilities, as well as legal advisers supporting military operations in real time.
Third, the development and acquisition of AI-based systems demands an iterative approach. This is a characteristic of the industry in which AI capabilities are being developed, as well as of the technology itself, which requires changes over time to maintain or improve safety and operational effectiveness. The acquisition and employment of some AI-enabled military capabilities is therefore more akin to how militaries procure services than to how they procure goods, potentially complicating the linear process of legal reviews within the acquisition/procurement process.
The final characteristic of military AI that we will mention here is the role played by industry actors, that is, entities outside of a State’s government or military. The obligation to legally review new weapons, means and methods of warfare remains with States, but industry actors play a crucial role in the process, particularly in the testing, evaluation, verification, and validation of AI capabilities. As we pointed out in a report we co-authored in 2024, “having designed and developed a particular weapon or weapon system, industry may have extensive amounts of performance data available that could be used in a legal review.” When information and expertise about AI-enabled military capabilities sits outside of the military itself, it becomes critical for States to plan whether and how to make use of this information and expertise, including as part of legal reviews. Our research indicates there are some barriers to information sharing between States and industry, including contractual and proprietary issues, that States will need to think through.
Enhancing Effectiveness Through a Tailored Approach
To fulfil the potential of legal reviews as a safeguard against AI-enabled military capabilities that cannot be used in compliance with international law, and to facilitate compliance with their international obligations when developing and using military AI capabilities, States will need to adapt their approach to the characteristics of AI. This adaptation is relevant to all States that are developing or using AI-enabled military capabilities, whether they already conduct legal reviews and wish to strengthen their processes, or are considering whether and how to conduct legal reviews of military AI capabilities.
In our forthcoming publication, we compile a list of good practices that can enhance the efficacy of legal reviews. Here, we preview some of the observations underpinning those good practices that are relevant to States that are developing or acquiring military AI capabilities.
Legal Reviews are Part of a Decision-Making Process
While the publicly available State practice is relatively limited, our research—which draws on submissions made to the Group of Governmental Experts on lethal autonomous weapon systems, and consultations with governmental, industry, civil society and academic experts—indicates that States that conduct legal reviews view them as part of their broader process for design, development and acquisition of military capabilities. In general, the output of a legal review (in effect, legal advice) is complemented by advice from other sources, including technical, operational, strategic, policy or ethical advice.
Where a legal review concludes that the normal or expected use of an AI-enabled military capability is unlawful, such advice should overrule any advice militating towards employment of the capability. However, where a legal review concludes that the normal or expected use of an AI capability is lawful, or lawful under certain circumstances or conditions, a decision-maker may have regard to additional inputs when deciding whether to authorise the relevant stage of development or acquisition.
While this observation is not novel to AI-enabled military capabilities, integration of legal reviews into a broader decision-making process is particularly important when it comes to the development and use of such capabilities and has triggered the creation of new policy frameworks in some States.
The Decision Whether and How to Conduct a Legal Review Need Not be Legalistic
A State could conduct a legal review of a military AI capability because of the State’s specific obligation under international law to undertake the review (e.g. AP I, art. 36), the State’s interpretation of its general obligations to implement IHL, or the State’s national law or policy. For reasons that we do not have the space to mention here, the language of Article 36 remains a useful guidepost for States in conducting legal reviews, no matter the basis upon which a State conducts them. Efforts to fulfil the requirements of Article 36 may lead States and experts to interpret the text to determine whether a particular military AI capability is “new,” or is a “weapon, means or method of warfare,” and whether a legal review should be limited to the question of whether a capability is “prohibited” under international law, as compared with the question of the circumstances under which it can be used lawfully.
In our view, an overly narrow or legalistic approach to whether a legal review is required and how it is to be conducted may limit the utility of this important tool. As Damian Copeland wrote for Articles of War in 2024, States can take a functional approach to legal reviews. This would mean analysing the functions of a particular AI-enabled capability to determine whether those functions are regulated by international law in any way and assessing the capability to ensure the ability to comply with those rules.
Legal Reviews as Part of Accelerated Processes
States may be adapting (or considering whether to adapt) procurement pathways in response to pressure to accelerate procurement, deployment, and scaling of military AI capabilities. A key challenge for States, and one which we think could be a line of effort within initiatives to govern the development and use of AI-enabled military capabilities, is how to manage time and resources needed to integrate legal reviews in a non-linear and iterative development and acquisition process where reliability is difficult to assess. It is critical that States carefully and systematically locate and synchronise legal reviews as part of these pathways, in order to ensure that such reviews can complement and feed into broader policy processes, and continue to be an effective safeguard against the development and adoption of AI-enabled capabilities that are incapable of being used in compliance with international law.
Realising the Potential of Legal Reviews as a Safeguard and Mechanism
Military adoption of AI-enabled capabilities, like adoption of earlier technologies in warfare, has prompted a discussion of the lawfulness of such capabilities under international law and how to assess whether a capability is able to be used in compliance with a State’s obligations. Legal reviews are a potent tool in preventing the employment of unlawful capabilities in armed conflict as well as facilitating compliance with IHL and other relevant international law rules.
Conducting legal reviews at the national level does not perform the same role, nor have the same effect, as adoption of governance or regulatory measures at the international level. However, legal reviews can (and will inevitably) support the implementation of policy measures (of any legal status) adopted at the international level. This is particularly true while there is no agreed verification regime among States with respect to their military AI capabilities.
To make full use of this tool, States will need to tailor how they conduct legal reviews at the national level, to acknowledge both persistent challenges in conducting legal reviews and novel challenges associated with the characteristics of AI. At the international level, openness about how legal reviews of AI-enabled capabilities are carried out (if not the outcomes of specific reviews) should be considered in initiatives to govern the development and use of AI-enabled military capabilities.
***
Netta Goussac is a Senior Researcher in the SIPRI Governance of Artificial Intelligence Programme.
Rain Liivoja is a Professor of Law at the University of Queensland, and a Senior Fellow with the Lieber Institute for Law and Land Warfare at West Point.
The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.
Photo credit: Getty Images via Unsplash
AI Research
How AI Upended a Historic Antitrust Case Against Google

When the United States Justice Department first sued to break up Google in October 2020, alleging that it illegally monopolized online search, there was little indication that one of the biggest factors in the case would be the rapid rise of a nascent technology.
On Tuesday, US District Court Judge Amit P. Mehta ordered Google to stop using exclusive agreements with third parties to distribute its search engine, but stopped short of forcing the company to cease such payments altogether or to spin off its Chrome web browser.
The decision over legal remedies in the case deals a significant blow to US antitrust enforcers, who last year secured a historic ruling declaring that Google maintained an illegal monopoly.
Notably, Mehta’s 226-page remedies decision heavily emphasized the role that the ascendance of artificial intelligence, particularly generative AI (or “GenAI”) products like OpenAI’s ChatGPT, played in his assessment of the case.
“The emergence of GenAI changed the course of this case,” Mehta wrote in the ruling.
Tech Policy Press reviewed Mehta’s mentions of AI tools and companies and his characterization of Google’s position in this emerging market to see how his assessment of the technology impacted his deliberations. Here’s what we found:
A blip during liability discussion, a major talking point over remedies
Google’s competitive position in the booming yet still emerging AI market featured prominently in Mehta’s decision Tuesday, a contrast with his earlier ruling finding that Google monopolized online search. As CNBC reported, “OpenAI’s name comes up 30 times, Anthropic is named six times and Perplexity shows up in 24 instances. …ChatGPT was named 28 times in Tuesday’s filing.” Aside from OpenAI, the companies had not yet been founded when the case was filed.
Additionally, “AI” and “artificial intelligence” were mentioned 116 times combined; “generative artificial intelligence,” “generative AI” and “GenAI” were referenced 220 times; and “large language models” and “LLM” were mentioned 82 times, according to our review.
By contrast, Mehta barely made reference to AI’s rise in his decision declaring Google a monopoly last year. In that 286-page decision, Mehta mentioned ChatGPT only twice, and OpenAI, Perplexity and Anthropic not at all. “Generative artificial intelligence” was mentioned seven times, while “generative AI” and “GenAI” were not referenced at all, and “large language model” and “LLM” were referenced only a dozen times.
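The underlying exercise is simple term counting. As an illustration only (the file name, term list, and matching rules below are our own assumptions, not Tech Policy Press’s actual methodology), a rough reproduction of such a count against a plain-text copy of a ruling might look like this:

```python
import re
from collections import Counter

# Hypothetical term list and file name; adjust to the document being analyzed.
TERMS = [
    "ChatGPT", "OpenAI", "Anthropic", "Perplexity",
    "generative artificial intelligence", "generative AI", "GenAI",
    "large language model", "LLM", "artificial intelligence", "AI",
]

def count_mentions(path: str, terms: list[str]) -> Counter:
    """Count whole-word, case-sensitive occurrences of each term in a text file."""
    text = open(path, encoding="utf-8").read()
    counts = Counter()
    for term in terms:
        # Word boundaries keep "AI" from matching inside longer tokens such as "GenAI".
        # Note that "artificial intelligence" still matches inside "generative artificial
        # intelligence", so counts for nested phrases overlap.
        counts[term] = len(re.findall(r"\b" + re.escape(term) + r"\b", text))
    return counts

if __name__ == "__main__":
    for term, n in count_mentions("mehta_remedies_ruling.txt", TERMS).most_common():
        print(f"{term}: {n}")
```

Counting this way is sensitive to how the PDF is converted to text (hyphenation, line breaks, footnotes), which is one reason independently produced tallies can differ slightly.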
Mehta himself alluded to this discrepancy, noting that the tools played a far bigger role in the latter remedies phase of the trial than in the earlier liability phase. While AI competitors have yet to make gains on Google, Mehta wrote, the tools “may yet prove to be game changers.”
No witness at the liability trial testified that GenAI products posed a near-term threat to GSEs [general search engines]. The very first witness at the remedies hearing, by contrast, placed GenAI front and center as a nascent competitive threat. These remedies proceedings thus have been as much about promoting competition among GSEs as ensuring that Google’s dominance in search does not carry over into the GenAI space.
Projecting AI’s path in search
Mehta lamented that the case required the court to “gaze into a crystal ball and look to the future,” which he said was not “exactly a judge’s forte.” But he sought to do just that and paint a picture of how AI tools are now and could soon intersect with Google’s grip over search.
Mehta wrote that “tens of millions of people use GenAI chatbots, like ChatGPT, Perplexity, and Claude, to gather information that they previously sought through internet search,” and that experts expect generative AI tools to increasingly perform like search engines.
“Like a GSE, consumers can interact with AI chatbots by entering information seeking queries. … Thus, chatbots perform an information-retrieval function like that performed by GSEs,” he wrote, though he noted chatbots can also perform distinct functions, like generating images.
Their aim, he wrote, is to transform chatbots into a kind of “[s]uper [a]ssistant” able to perform “any task” asked by a user. “Search is a necessary component of this product vision,” he concluded.
Mehta also considered current evidence that the tools are already factoring into the online search landscape. While he noted that Google may now be using its own AI tools to strengthen its dominance over search, a key concern for US authorities, he also wrote that “GenAI products may be having some impact on GSE usage,” and that competitors are also looking to use AI tools to onboard users onto their products as “access points” for search queries. Mehta alludes to the vision shared by AI firms that one such access point may eventually be a “super assistant” that “would be able to help perform ‘any task’ requested by the user.”
A “highly competitive” AI market
In his discussion of the current generative AI market, Mehta described it as “highly competitive” with “numerous new market entrants” in recent years, including the Chinese firm DeepSeek and Elon Musk’s Grok, and wrote that Google is not exactly in pole position to dominate it.
“There is constant jockeying for a lead in quality among GenAI products and models … Today, Google’s models do not have a distinct advantage over others in factuality or other technical benchmarks.”
He listed Anthropic, Meta, Microsoft, OpenAI, Perplexity, xAI, and DuckDuckGo as other participants in the market, and noted that they “have access to a lot of capital” to compete.
Mehta also wrote that generative AI companies have “had some success” in striking their own distribution agreements with device manufacturers to place their products, including OpenAI’s partnership with Microsoft and Perplexity’s with Motorola.
This section echoed many of the points Google made in its defense. Last year, the company wrote in a blog post about the case that the court was evaluating a “highly dynamic” market. “Since the trial ended over a year ago, AI has already rapidly reshaped the industry, with new entrants and new ways of finding information, making it even more competitive,” Google wrote.
The company has said it plans to appeal the initial liability ruling finding that it maintained an illegal monopoly, while in a statement released following the decision DOJ leaders appeared to suggest they may appeal the remedies Mehta doled out this week.
Some solace for US enforcers
While Mehta’s decision was far less sweeping than US antitrust enforcers had hoped for, his remedies will impact Google’s relationship with its budding AI rivals.
Mehta ordered Google to cease exclusive distribution agreements and share some of the data it uses to power its search business, including with companies in the AI space.
Because their functionality only partially overlaps, GenAI chatbots have not eliminated the need for GSEs. … Nevertheless, the capacity “to fulfill a broad array of informational needs” constitutes a defining feature of both products, as Google implicitly acknowledges. … And it is that capacity that renders GenAI a potential threat to Google’s dominance in the market for general search services.
But Google’s seeming inability to significantly leverage its dominance in search to quickly boost its AI offerings appeared to be a major sticking point for Mehta in weighing tougher sanctions.
The evidence did not show, for instance, that Google’s GenAI product responses are superior to other GenAI offerings due to Google’s access to more user-interaction data. If anything, the evidence established otherwise: The GenAI product space is highly competitive, and Google’s Gemini app, for instance, does not have a distinct advantage over chatbots in factuality and other technical benchmarks.
Mehta did leave the door open that if the situation changes, the court could intervene more substantially. Market “realities give the court hope that Google will not simply outbid competitors for distribution if superior products emerge,” Mehta wrote. “The court is thus prepared to revisit a payment ban (or a lesser remedy) if competition is not substantially restored through the remedies the court does impose.” Presumably that determination would be informed by the work of the Technical Committee established by the court, which is set to function throughout the six-year term of the judgment.
AI Research
NSF announces up to $35 million to stand up AI research resource operations center

The National Science Foundation plans to award up to $35 million to establish an operations center for its National AI Research Resource, signaling a step toward the pilot becoming a more permanent program.
Despite bipartisan support for the NAIRR, Congress has yet to authorize a full-scale version of the resource designed to democratize access to tools needed for AI research. The newly announced solicitation indicates the agency is taking steps to scale the project absent that additional support.
“The NAIRR Operating Center solicitation marks a key step in the transition from the NAIRR Pilot to building a sustainable and scalable NAIRR program,” Katie Antypas, who leads NSF’s Office of Advanced Cyberinfrastructure, said in a statement included in the announcement.
She added that NSF looks forward to collaborating with partners in the private sector and other agencies, “whose contributions have been critical in demonstrating the innovation and scientific impact that comes when critical AI resources are made accessible to research and education communities across the country.”
The NAIRR began as a pilot in January 2024, offering researchers access to computational data, AI models, software, and other tools needed for AI research. Since then, the public-private partnership pilot has supported over 490 projects in 49 states and Washington, D.C., per its website, and is supported by contributions from 14 federal agencies and 28 private sector partners.
As the pilot has moved forward, lawmakers have attempted to advance bipartisan legislation that would codify the NAIRR, but those bills have not passed. Previous statements from science and tech officials during the Biden administration made the case that formalization would be important because establishing the NAIRR fully was expected to require a significant amount of funding.
In response to a FedScoop question about funding for the center, an NSF spokesperson said it’s covered by the agency’s normal appropriations.
NAIRR has remained a priority even as the Trump administration has sought to make changes to NSF awards, canceling hundreds of grants that were related to things like diversity, equity and inclusion (DEI) and environmental justice. President Donald Trump’s AI Action Plan, for example, included a recommendation for the NAIRR to “build the foundations for a lean and sustainable NAIRR operations capability.”
According to the solicitation, NSF will make a single award of up to $35 million, for a period of up to five years, for the operations center project. The awardee would ultimately be responsible for establishing a “community-based organization,” with tasks such as setting up the operations framework, working with stakeholders, and coordinating with the current pilot project’s functions.
The awardee would also be eligible to expand its responsibilities and duties at a later date, depending on factors such as NAIRR’s priorities, the awardee’s performance, and funding.
AI Research
Top AI Code Generation Tools of 2025 Revealed in Info-Tech Research Group’s Emotional Footprint Report
The recently published 2025 AI Code Generation Emotional Footprint report from Info-Tech Research Group highlights the top AI code generation solutions that help organizations streamline development and support innovation. The report’s insights are based on feedback from users on the global IT research and advisory firm’s SoftwareReviews platform.
TORONTO, Sept. 3, 2025 /PRNewswire/ – Info-Tech Research Group has published its 2025 AI Code Generation Emotional Footprint report, identifying the top-performing solutions in the market. Based on data from SoftwareReviews, a division of the global IT research and advisory firm, the newly published report highlights the five champions in AI-powered code generation tools.
AI code generation tools make coding easier by taking care of repetitive tasks. Instead of starting from scratch, developers get ready-made snippets, smoother workflows, and support built right into their IDEs and version control systems. With machine learning and natural language processing behind them, these tools reduce mistakes, speed up projects, and give developers more room to focus on creative problem solving and innovation.
Info-Tech’s Emotional Footprint measures high-level user sentiment. It aggregates emotional response ratings across 25 proactive questions, creating a powerful indicator of overall user feeling toward the vendor and product. The result is the Net Emotional Footprint, or NEF, a composite score that reflects the overall emotional tone of user feedback.
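Info-Tech does not publish the exact weighting behind the NEF. Purely as an illustration, the sketch below assumes the simplest plausible construction: the share of positive emotional responses minus the share of negative ones, scaled to a range of roughly -100 to +100. The function name and response labels are hypothetical, not Info-Tech's actual methodology.

```python
from typing import Iterable

def net_emotional_footprint(responses: Iterable[str]) -> float:
    """Toy net-sentiment score: each element is 'positive', 'neutral', or 'negative',
    one per answered emotional-response question per reviewer."""
    responses = list(responses)
    if not responses:
        raise ValueError("no responses provided")
    pos = sum(r == "positive" for r in responses)
    neg = sum(r == "negative" for r in responses)
    # Net share of positive over negative responses, on a -100..+100 scale.
    return 100.0 * (pos - neg) / len(responses)

# Example: 95 positive, 4 neutral, 1 negative responses -> +94.0
print(net_emotional_footprint(["positive"] * 95 + ["neutral"] * 4 + ["negative"]))
```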
Data from 1,084 end-user reviews on Info-Tech’s SoftwareReviews platform was used to identify the top AI code generation tools for the 2025 Emotional Footprint report. The insights support organizations looking to streamline development, improve code quality, and scale their software delivery capabilities to drive innovation and business growth.
The 2025 AI Code Generation Tools – Champions are as follows:
- Visual Studio IntelliCode, +96 NEF, ranked high for delivering more than promised.
- ChatGPT 5, +94 NEF, ranked high for its effectiveness.
- GitHub Copilot, +94 NEF, ranked high for its transparency.
- Replit AI, +96 NEF, ranked high for its reliability.
- Amazon Q Developer, +94 NEF, ranked high for helping save time.
Analyst Insight:
“Organizations that adopt AI code generation tools gain a significant advantage in software delivery and innovation,” says Thomas Randall, a research director at Info-Tech Research Group. “These tools help developers focus on complex, high-value work, improve code quality, and reduce errors. Teams that delay adoption risk slower projects, lower-quality software, and missed opportunities to innovate and stay competitive.”
User assessments of software categories on SoftwareReviews provide an accurate and detailed view of the constantly changing market. Info-Tech’s reports are informed by the data from users and IT professionals who have intimate experience with the software throughout the procurement, implementation, and maintenance processes.
Read the full report: Best AI Code Generation Tools 2025
For more information about Info-Tech’s SoftwareReviews, the Data Quadrant, or the Emotional Footprint, or to access resources to support the software selection process, visit softwarereviews.com.
About Info-Tech Research Group
Info-Tech Research Group is one of the world’s leading research and advisory firms, proudly serving over 30,000 IT and HR professionals. The company produces unbiased, highly relevant research and provides advisory services to help leaders make strategic, timely, and well-informed decisions. For nearly 30 years, Info-Tech has partnered closely with teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.
To learn more about Info-Tech’s divisions, visit McLean & Company for HR research and advisory services and SoftwareReviews for software buying insights.
Media professionals can register for unrestricted access to research across IT, HR, and software, and hundreds of industry analysts through the firm’s Media Insiders program. To gain access, contact [email protected].
For information about Info-Tech Research Group or to access the latest research, visit infotech.com and connect via LinkedIn and X.
About SoftwareReviews
SoftwareReviews is a division of Info-Tech Research Group, a world-class technology research and advisory firm. SoftwareReviews empowers organizations with the best data, insights, and advice to improve the software buying and selling experience.
For buyers, SoftwareReviews’ proven software selection methodologies, customer insights, and technology advisors help maximize success with technology decisions. For providers, the firm helps build more effective marketing, product, and sales processes with expert analysts, how-to research, customer-centric marketing content, and comprehensive analysis of the buyer landscape.
SOURCE Info-Tech Research Group