A new AI system can generate 51 different weather forecasts simultaneously to show the full range of possible scenarios.
The new model performs better than physics-based models for many measures, including surface temperature, with improvements of up to 20 percent.
The system creates forecasts over 10 times faster than physics-based systems while reducing energy consumption by approximately 1,000 times.
Ensemble system goes operational
The European Centre for Medium-Range Weather Forecasts (ECMWF) took the ensemble version of the Artificial Intelligence Forecasting System (AIFS) into operation on July 1, 2025. The system runs side by side with the traditional physics-based Integrated Forecasting System (IFS).
The ensemble version, called AIFS ENS, consists of 51 different forecasts run in parallel with slight variations, giving users the full range of possible scenarios. It follows AIFS Single, the first operational version, which runs a single forecast at a time and was launched at the end of February.
While AIFS Single is accurate, users gain much more value when they can access the full range of possible scenarios.
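To illustrate the idea in the simplest possible terms, the sketch below builds a toy 51-member ensemble in Python: one control run plus 50 runs started from slightly perturbed initial conditions, then summarizes the spread across members. The random-walk "model" and perturbation sizes are purely illustrative and have nothing to do with how AIFS ENS actually generates its members.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forecast(initial_temp_c: float, hours: int = 240) -> np.ndarray:
    """Hypothetical stand-in for a forecast model: a simple random-walk temperature trace."""
    steps = rng.normal(loc=0.0, scale=0.15, size=hours)
    return initial_temp_c + np.cumsum(steps)

# One unperturbed ("control") member plus 50 members started from slightly
# perturbed initial conditions: 51 trajectories in total, matching the ensemble size.
analysis_temp_c = 15.0
members = [toy_forecast(analysis_temp_c)]
members += [toy_forecast(analysis_temp_c + rng.normal(scale=0.3)) for _ in range(50)]
ensemble = np.stack(members)  # shape (51, 240): members x forecast hours

# The spread across members at each lead time approximates the range of plausible
# outcomes, which is the extra information an ensemble offers over a single forecast.
p10, p50, p90 = np.percentile(ensemble, [10, 50, 90], axis=0)
print(f"Day-10 temperature: {p50[-1]:.1f} C (10th-90th percentile: {p10[-1]:.1f} to {p90[-1]:.1f})")
```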
Outperforms physics-based models
The new ensemble model outperforms state-of-the-art physics-based models for many measures, including surface temperature, with gains of up to 20 percent. Currently, it operates at a lower resolution (31 km) than the physics-based ensemble system (9 km), which remains indispensable for high-resolution fields and coupled Earth-system processes.
ECMWF is therefore also exploring hybrid systems that leverage the strengths of both approaches.
Dramatically faster and more energy efficient
AIFS ENS relies on physics-based data assimilation to generate the initial conditions. However, the system can generate forecasts over 10 times faster than the physics-based forecasting system, while reducing energy consumption by approximately 1,000 times.
The high-accuracy ensemble model complements ECMWF’s service portfolio by drawing on the opportunities opened up by machine learning and artificial intelligence.
Open source and continuous development
With this latest model, ECMWF is leveraging the potential of AI and machine learning for weather science. The work is part of the organization’s co-development, with many of its Member States, of the award-winning Anemoi framework, an open-source framework for training AI forecasting systems, including the AIFS.
TransHumanity Ltd., a spinout from Loughborough University, has secured approximately £400,000 in pre-seed investment. The round was led by SFC Capital, the UK’s most active seed-stage investor, with additional investment from Silicon Valley-based Plug and Play.
TransHumanity’s vision is to empower faster, smarter human decisions by transforming data into accessible intelligence using large language model-based agentic AI.
Agentic AI refers to artificial intelligence systems that collaborate with people to reach specific goals, understanding and responding in plain English. These systems use AI “agents” (models that can gather information, make suggestions, and carry out tasks in real time), helping people solve problems more quickly and effectively.
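As a rough illustration of that loop, here is a minimal Python sketch of an agent that decides whether to gather information with a tool before answering in plain English. The ask_llm stub and the get_traffic_counts tool are hypothetical placeholders, not TransHumanity’s actual system.

```python
from typing import Callable

def get_traffic_counts(road: str) -> str:
    """Hypothetical tool the agent can call to gather information."""
    return f"Average daily traffic on {road}: 18,400 vehicles (placeholder data)."

TOOLS: dict[str, Callable[[str], str]] = {"get_traffic_counts": get_traffic_counts}

def ask_llm(prompt: str) -> str:
    """Stand-in for a large language model call; a real agent would call an LLM API here."""
    if "Observation:" not in prompt:
        return "CALL get_traffic_counts A6514"   # the model decides it needs more information
    return "Traffic on the A6514 is moderate; consider signal retiming to reduce congestion."

def run_agent(question: str, max_steps: int = 3) -> str:
    prompt = question
    for _ in range(max_steps):
        reply = ask_llm(prompt)
        if reply.startswith("CALL"):                    # the model asked for a tool
            _, tool_name, arg = reply.split(maxsplit=2)
            observation = TOOLS[tool_name](arg)
            prompt += f"\nObservation: {observation}"   # feed the result back and loop
        else:
            return reply                                # final, plain-English answer
    return "No answer within the step budget."

print(run_agent("How busy is the A6514 and what could reduce congestion?"))
```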
TransHumanity’s first product, AptIq, is designed to help transport authorities quickly analyse transport data and models, turning days of analysis into seconds.
By simply asking questions in plain English, users can gain instant insights to support key initiatives like congestion reduction, road safety, creation of business cases and net-zero targets.
Dr Haitao He, Co-founder and Director of TransHumanity, said: “I am proud to see my rigorous research translated into trusted real-world AI innovation for the transport sector. With this investment, we can now realise my Future Leaders Fellowship vision, scaling a technology that empowers authorities across the UK to deliver integrated, net-zero transport.”
Developed from rigorous research by Dr Haitao He, a UKRI Future Leaders Fellow in Transport AI at Loughborough University, AptIq, previously known as TraffEase, has already garnered significant recognition.
The technology was named a Top 10 finalist for the 2024 Manchester Prize for AI innovation and was recently highlighted as one of the Top 40 UK tech start-ups at London Tech Week by the UK Department for Business and Trade.
Adam Beveridge, Investment Principal at SFC Capital, said: “We are excited to back TransHumanity. The combination of cutting-edge research, a proven founding team, clear market demand, and positive societal impact makes this exactly the kind of high-growth venture we are committed to supporting.”
AptIq is currently in a test deployment with Nottingham City Council and Transport for Greater Manchester, with plans to expand to other city, regional, and national authorities across the UK within the next 12 months.
With a product roadmap that includes diverse data sources, advanced analytics, and full user control over the AI tool when required, interest from the transport sector is already high. Professor Nick Jennings, Vice-Chancellor and President of Loughborough University, noted: “I am delighted to see TransHumanity fast-tracked from lab to investment-ready spinout.
This journey was accelerated by TransHumanity’s selection as a finalist in the prestigious Manchester Prize and shows what’s possible when the University’s ambition aligns with national innovation policy.”
An oversimplified approach I have taken in the past to explain wisdom is to share that “We don’t know what we don’t know until we know it.” This absolutely applies to the fast-moving AI space, where unknowingly introducing legal and compliance risk through an organization’s use of AI is a top concern among IT leaders.
We’re now building systems that learn and evolve on their own, and that raises new questions along with new kinds of risk affecting contracts, compliance, and brand trust.
At Broadcom, we’ve adopted what I’d call a thoughtful “move smart and then fast” approach. Every AI use case requires sign-off from both our legal and information security teams. Some folks may complain that it slows them down. But if you’re moving fast with AI and putting sensitive data at risk without also moving smart, you’re inviting trouble.
Here are seven things I’ve learned about collaborating with legal teams on AI projects.
1. Partner with Legal Early On
Don’t wait until the AI service is built to bring legal in. There’s always the risk that choices you make about data, architecture, and system behavior can create regulatory headaches or break contracts later on.
Besides, legal doesn’t need every answer on day one. What they do need is visibility into the gray areas. What data are you using and producing? How does the model make decisions? Could those decisions shift over time? Walk them through what you’re building and flag the parts that still need figuring out.
2. Document Your Decisions as You Go
AI projects move fast, with teams needing to make dozens of early decisions on everything from data sources to training logic. A few months later, chances are no one remembers why those choices were made. Then someone from compliance shows up with questions about those choices, and you’ve got nothing to point to.
To avoid that situation, keep a simple log as you work. Then, should a subsequent audit or inquiry occur, you’ll have something solid to help answer any questions.
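The log can be as lightweight as an append-only file with a timestamp, the decision, and the reasoning behind it. The Python sketch below is one minimal way to do that; the field names, file name, and example entry are illustrative, not a prescribed format.

```python
import datetime
import getpass
import json

LOG_PATH = "ai_decision_log.jsonl"

def record_decision(topic: str, decision: str, rationale: str, data_sources: list[str]) -> None:
    """Append one decision record to a simple JSON-lines log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "author": getpass.getuser(),
        "topic": topic,            # e.g. "training data", "model choice", "evaluation"
        "decision": decision,
        "rationale": rationale,    # the "why" an auditor will ask about months later
        "data_sources": data_sources,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_decision(
    topic="training data",
    decision="Exclude raw support tickets from the training set",
    rationale="Tickets may contain customer PII; legal review still pending",
    data_sources=["public product docs", "anonymized telemetry"],
)
```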
3. Build Systems You Can Explain
Legal teams need to understand your system so they can explain it to regulators, procurement officers, or internal risk reviewers. If they can’t, there’s the risk that your project could stall or even fail after it ships.
I’ve seen teams consume SaaS-based AI services without realizing the provider could swap out a backend AI model without their knowledge. If that leads to changes in the system’s behavior behind the scenes, it could redirect your data in ways you didn’t intend. That’s one reason why you’ve got to know your AI supply chain, top to bottom. Ensure that services you build or consume have end-to-end auditability of the AI software supply chain. Legal can’t defend a system if they don’t understand how it works.
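One lightweight step toward that auditability is to record, for every call you make to a hosted model, which backend model the provider reports serving the request, so a silent swap shows up in your own logs. The sketch below assumes a hypothetical provider client whose response includes a model identifier; adapt it to whatever your actual provider returns.

```python
import datetime
import hashlib
import json

def call_hosted_model(prompt: str) -> dict:
    """Stand-in for a SaaS AI call; assume the response reports which model served it."""
    return {"model": "provider-model-v2.1", "output": "(model output)"}

def audited_call(prompt: str, audit_file: str = "ai_calls_audit.jsonl") -> str:
    """Call the hosted model and append a provenance record to a local audit file."""
    response = call_hosted_model(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # avoid storing raw prompts
        "reported_model": response["model"],  # compare against the model you contracted for
    }
    with open(audit_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response["output"]

print(audited_call("Summarize the latest incident report."))
```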
4. Watch Out for Shadow AI
Any engineer can subscribe to an AI service and accept the provider’s terms without realizing they don’t have the authority to do that on behalf of the company.
That exposes the organization to major risk. An engineer might accidentally agree to data-sharing terms that violate regulatory restrictions or expose sensitive customer data to a third party.
And it’s not just deliberate use anymore. Run a search in Google and you’re already getting AI output. It’s everywhere. The best way to avoid this is by building a culture where employees are aware of the legal boundaries. You can give teams a safe place to experiment, but at the same time, make sure you know what tools they’re using and what data they’re touching.
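There is no single fix for shadow AI, but one small, concrete control is to route outbound AI calls through a check against a list of vendors whose terms legal has actually reviewed. The endpoint names below are hypothetical; this is a sketch of the idea, not a complete governance program.

```python
# Hypothetical allowlist of AI providers whose terms have been reviewed by legal.
APPROVED_AI_ENDPOINTS = {
    "api.approved-vendor.example",
}

def check_ai_endpoint(hostname: str) -> None:
    """Raise if an outbound AI call targets a provider that hasn't been approved."""
    if hostname not in APPROVED_AI_ENDPOINTS:
        raise PermissionError(
            f"{hostname} is not an approved AI provider; "
            "route the request through legal/security review first."
        )

check_ai_endpoint("api.approved-vendor.example")    # allowed
# check_ai_endpoint("api.unknown-llm.example")      # would raise PermissionError
```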
5. Help Legal Navigate Contract Language
AI systems get tangled in contract language: ownership rights, retraining rules, model drift, and more. Most engineers aren’t trained to spot those issues, but we’re the ones who understand how the systems behave.
That’s another reason why you’ve got to know your AI supply chain, top to bottom. In this case, legal needs our help reviewing vendor or customer agreements to put the contractual language into the appropriate technical context. What happens when the model changes? How are sensitive data sets safeguarded from being indexed or accessed via AI agents, such as those that use the Model Context Protocol (MCP)? We can translate the technical behavior into simple English, and that goes a long way toward helping the lawyers write better contracts.
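Questions like the second one often come down to whether agent-facing tools enforce data classification at all. The sketch below shows the shape of such a gate, with hypothetical dataset names and labels; it is not an MCP implementation, just the kind of check a contract reviewer will ask you to point to.

```python
# Hypothetical classification labels for the datasets an agent might try to read.
DATASET_CLASSIFICATION = {
    "public_docs": "public",
    "customer_tickets": "restricted",
}

def agent_readable(dataset: str, max_allowed: str = "internal") -> bool:
    """Return True only if the dataset's classification is at or below the allowed level."""
    order = ["public", "internal", "restricted"]
    return order.index(DATASET_CLASSIFICATION[dataset]) <= order.index(max_allowed)

assert agent_readable("public_docs")
assert not agent_readable("customer_tickets")   # restricted data stays out of agent reach
```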
6. Design with Auditability in Mind
AI is developing rapidly, with legal frameworks, regulatory requirements, and customer expectations evolving to keep pace. You need to be prepared for what might come next.
Can you explain where your training data came from? Can you show how the model was tested for bias? Can you justify how it works? If someone from a regulatory body walked in tomorrow, would you be ready?
Design with auditability in mind. Especially when AI agents are chained together, you need to be able to prove that identity and access controls are enforced end-to-end.
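A minimal sketch of what that can look like in practice: each hop in a chain of agents carries the identity of the human it is acting for, checks that identity’s permissions, and appends to an audit trail. The agent names and permission model here are illustrative assumptions, not any specific product’s design.

```python
import datetime

AUDIT: list[dict] = []   # in practice this would be durable, tamper-evident storage

def call_agent(agent_name: str, action: str, on_behalf_of: str, allowed_actions: set[str]) -> None:
    """Perform one hop in an agent chain, enforcing and recording the caller's identity."""
    if action not in allowed_actions:
        raise PermissionError(f"{on_behalf_of} is not allowed to perform {action}")
    AUDIT.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_name,
        "action": action,
        "on_behalf_of": on_behalf_of,   # the human identity is preserved across every hop
    })

user_permissions = {"read_report", "summarize"}
call_agent("planner-agent", "read_report", on_behalf_of="alice", allowed_actions=user_permissions)
call_agent("writer-agent", "summarize", on_behalf_of="alice", allowed_actions=user_permissions)
# call_agent("writer-agent", "delete_report", "alice", user_permissions)  # would raise
print(AUDIT)
```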
7. Handle Customer Data with Care
We don’t get to make decisions on behalf of our customers about how their data gets used. It’s their data. And when it’s private, it shouldn’t be fed to a model. Period.
You’ve got to be disciplined about what data gets ingested. If your AI tool indexes everything by default, that can get messy fast. Are you touching private logs or passing anything to a hosted model without realizing it? Support teams might need access to diagnostic logs, but that doesn’t mean third-party models should touch them. Tools that can generate comparable synthetic data, free of any private customer data, are evolving rapidly and could help with support use cases, for example; but these tools and techniques should be fully vetted with your legal and CISO organizations before you use them.
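As a small illustration of that discipline, the sketch below scrubs obvious private fields (email addresses, IPs) from a diagnostic log line before it could reach any hosted model. The regex patterns are deliberately simplistic and are not a substitute for a vetted redaction or synthetic-data tool.

```python
import re

# Illustrative patterns only; real redaction needs a much broader, vetted rule set.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

log_line = "2025-07-01 10:02:11 login failed for jane.doe@example.com from 203.0.113.7"
print(redact(log_line))
# 2025-07-01 10:02:11 login failed for [REDACTED_EMAIL] from [REDACTED_IPV4]
```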
The Reality
The engineering ethos is to move fast. But since safety and trust are on the line, you need to move smart, which means it’s okay if things take a little longer. The extra steps are worth it when they help protect your customers and your company.
Nobody has this all figured out. So ask questions by talking to people who’ve handled this kind of work before. The goal isn’t perfection; it’s to make smart, careful progress. For enterprises, the AI race isn’t a question of “Who’s best?” but rather “Who’s leveraging AI safely to drive the best business outcomes?”
Progress Software, a company offering artificial intelligence-powered digital experience and infrastructure software, has launched Progress Federal Solutions, a wholly owned subsidiary that aims to deliver AI-powered technologies to the federal, defense and public sectors.
Progress Federal Solutions to Boost Digital Transformation
The company said Monday the new subsidiary, announced during the Progress Data Platform Summit at the International Spy Museum in Washington, D.C., is intended to fast-track federal agencies’ digital modernization efforts, meet compliance requirements and advance AI and data initiatives. The subsidiary leverages the data management and integration expertise of MarkLogic, a platform Progress Software acquired in 2023.
Progress Federal Solutions functions independently but will offer the company’s full technology portfolio, including Progress Data Platform, Progress Sitefinity, Progress Chef, Progress LoadMaster and Progress MOVEit. These will be available to the public sector through Carahsoft Technology’s reseller partners and contract vehicles.
Remarks From Progress Federal Solutions, Carahsoft Executives
“Federal and defense agencies are embracing data-centric strategies and modernizing legacy systems at a faster pace than ever. That’s why we focus on enabling data-driven decision-making, faster time to value and measurable ROI,” said Cori Moore, president of Progress Federal Solutions.
“Progress is a trusted provider of AI-enabled solutions that address complex data, infrastructure and digital experience needs. Their technologies empower government agencies to build high-impact applications, automate operations and scale securely to meet program goals,” said Michael Shrader, vice president of intelligence and innovative solutions at Carahsoft.