The great convergence: Why your data’s past and future are colliding
For decades, a fundamental divide has shaped enterprise data strategy: the strict separation between operational and analytical systems. On one side stood the digital engine of the company: the online transaction processing (OLTP) systems that manage inventory in real time. On the other, the strategic brains: the online analytical processing (OLAP) platforms that sift through historical data to support planning and strategy. This divide, traditionally bridged by batch extract, transform, load (ETL) pipelines, forced leaders to make decisions based on yesterday’s insights.
The availability of AI that can provide strategic insights in real time is tearing down this wall. Forward-thinking organizations are building unified platforms that combine operational and analytical capabilities. This convergence enables real-time analysis of live data streams, allowing companies to replace reactive reporting with proactive decision-making that delivers immediate business value.
From reactive to proactive: Real-time value in action
The convergence of operational and analytical data transforms business decision-making from reactive to proactive. Instead of asking “What happened?”, organizations can ask “What’s happening now?” and “What can I influence next?”
For example:
- The North Face analyzed real-time search data to discover that customers were searching for a “midi parka,” a term absent from its product descriptions. By quickly renaming a product to match this trend, the company saw a threefold increase in conversions overnight.
- In logistics, Geotab is analyzing billions of daily data points from 4.6 million connected vehicles for real-time fleet optimization and driver safety.
- In financial services, Transparently.AI built a fraud detection platform that achieves 90% accuracy by analyzing transactions as they happen, not after the fact.
The technology making convergence a reality
The shift from siloed, lagging data to real-time, actionable intelligence is made possible by a new generation of cloud-native technologies. Together, they create a powerful data flywheel: a continuous loop where live operational data is analyzed for insights, which are then pushed back into business systems to guide action and improve operations. This self-reinforcing cycle is built on four key technologies:
Data federation: Federation allows an analytical platform to query data directly from operational databases, without moving or copying it. This zero-copy approach allows analysts to combine real-time transactional data with historical data to get a complete, up-to-the-second view of business operations.
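As a rough illustration, the sketch below joins a historical warehouse table with live rows pulled from an operational database at query time, using BigQuery’s `EXTERNAL_QUERY` table function; the project, connection, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Combine historical analytics with live operational rows in one query:
# no copy, no pipeline, no staleness.
sql = """
SELECT h.sku, h.avg_daily_sales, live.on_hand
FROM `analytics.sales_history` AS h
JOIN EXTERNAL_QUERY(
  'my-project.us.oltp-connection',  -- hypothetical connection to the OLTP database
  'SELECT sku, on_hand FROM inventory'
) AS live USING (sku)
"""
up_to_the_second_view = client.query(sql).to_dataframe()
```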
Real-time data streaming: Technologies like change data capture (CDC) can stream updates from operational systems to analytical platforms as they happen, without impacting the performance of the source systems. This ensures that analytical tools are always working with the freshest available data.
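On the consuming side, here is a minimal sketch assuming a Debezium-style CDC feed on Kafka; the broker address, topic name, and envelope layout are assumptions for illustration, not a specific product’s API.

```python
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # hypothetical broker
    "group.id": "analytics-sink",
    "auto.offset.reset": "earliest",
})
# Debezium-style topic naming: <server>.<schema>.<table>
consumer.subscribe(["oltp.public.orders"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # Assumes Debezium's default JSON envelope: payload.op / payload.after
    payload = json.loads(msg.value())["payload"]
    op, row = payload["op"], payload["after"]  # "c"=create, "u"=update, "d"=delete
    # ...upsert `row` into the analytical store here...
```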
Unified storage layer: A modern data lakehouse stores information in open formats accessible to both analytical engines and transactional databases. This eliminates data duplication and allows a single dataset to support everything from advanced analytics and BI dashboards to operational decision-making.
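To make the idea concrete, the sketch below uses plain Parquet as a stand-in for an open lakehouse table format (such as Apache Iceberg or Delta Lake): one copy of the data serves both a SQL engine and a dataframe workload.

```python
import duckdb
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Write the dataset once, in an open format.
orders = pa.table({"sku": ["A1", "B2"], "qty": [3, 5], "price": [9.99, 24.50]})
pq.write_table(orders, "orders.parquet")

# An analytical SQL engine reads it in place...
revenue = duckdb.sql(
    "SELECT sku, SUM(qty * price) AS revenue FROM 'orders.parquet' GROUP BY sku"
).df()

# ...and so does a dataframe workload, with no copy into a proprietary store.
df = pd.read_parquet("orders.parquet")
```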
Reverse ETL: Reverse ETL sends insights, such as customer scores or product recommendations, back into business systems like CRMs and marketing platforms. This puts analytics directly into the hands of frontline teams to drive action in real time.
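At its simplest, reverse ETL is a write-back loop. The sketch below pushes churn scores to a CRM over a hypothetical REST endpoint; the URL, field names, and scores are invented, and a real deployment would use the CRM’s actual API or a dedicated reverse-ETL tool.

```python
import requests

CRM_URL = "https://crm.example.com/api/contacts/{id}"  # hypothetical endpoint

scores = [
    {"contact_id": "c-102", "churn_risk": 0.82},
    {"contact_id": "c-415", "churn_risk": 0.11},
]

# Write model outputs back into the operational system of record,
# where frontline teams can act on them.
for s in scores:
    resp = requests.patch(
        CRM_URL.format(id=s["contact_id"]),
        json={"churn_risk": s["churn_risk"]},
        timeout=10,
    )
    resp.raise_for_status()
```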
The ultimate catalyst: Giving AI a memory
Converged operational and analytical data systems lay the groundwork for real-time intelligence, but the next wave of business impact will come from autonomous agents that can make and act on decisions, not just support them. However, today’s large language models have a fundamental limitation: they lack business context and are, simply put, forgetful. Without an external brain, every interaction starts from a blank slate.
This is where connecting agents with data across analytical and operational platforms becomes critical. To build truly useful agents, we must give them two types of memory:
1. Semantic memory: This is the agent’s deep, contextual library of knowledge about your business, products, and industry. To improve AI accuracy and reduce hallucinations, modern data platforms now support retrieval-augmented generation (RAG), a technique that lets AI models ground responses in real business data, not just their generic training patterns. This capability relies on vector embeddings and vector search, which finds relevant content by comparing the meaning of queries and data rather than exact keywords. With this approach, AI systems can retrieve the right information from enterprise data platforms, multimodal datasets (e.g., documents), knowledge bases, or even live operational data. (A minimal retrieval sketch follows this list.)
2. Transactional memory: For personalization and reliability, agents need to remember specific interactions and maintain state. This includes both episodic memory (a log of conversations and user preferences, so the agent can carry on conversations that feel continuous rather than resetting each time) and state management (tracking progress through complex tasks). If interrupted, an agent uses this stateful memory to pick up where it left off. (A session-store sketch also follows below.)
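First, the promised retrieval sketch: vector search reduced to its essentials, ranking documents by cosine similarity over unit-length embeddings. The `embed` function is a placeholder for a real embedding model, and the documents are invented.

```python
import zlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: a real system would call an embedding model here."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)  # unit vectors: dot product == cosine similarity

documents = [
    "Midi parkas are knee-length insulated jackets.",
    "Our returns policy allows refunds within 30 days.",
]
doc_vecs = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = doc_vecs @ embed(query)  # similarity of the query to every document
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved passage is prepended to the LLM prompt to ground its answer.
context = retrieve("what is a midi parka?")
```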
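Second, a sketch of transactional memory: an episodic log plus durable task state in SQLite, so a hypothetical agent can resume after an interruption. The schema and session data are illustrative, not a specific product’s design.

```python
import json
import sqlite3

db = sqlite3.connect("agent_memory.db")
db.execute("CREATE TABLE IF NOT EXISTS episodes (session TEXT, role TEXT, content TEXT)")
db.execute("CREATE TABLE IF NOT EXISTS task_state (session TEXT PRIMARY KEY, state TEXT)")

def remember(session: str, role: str, content: str) -> None:
    """Episodic memory: append one turn of the conversation."""
    db.execute("INSERT INTO episodes VALUES (?, ?, ?)", (session, role, content))
    db.commit()

def save_state(session: str, state: dict) -> None:
    """State management: persist progress through a multi-step task."""
    db.execute("INSERT OR REPLACE INTO task_state VALUES (?, ?)",
               (session, json.dumps(state)))
    db.commit()

def resume(session: str) -> dict:
    row = db.execute("SELECT state FROM task_state WHERE session = ?",
                     (session,)).fetchone()
    return json.loads(row[0]) if row else {}

remember("s1", "user", "Book me a flight to Berlin")
save_state("s1", {"step": "awaiting_payment", "flight": "XX123"})
# After a crash or restart, the agent picks up exactly where it left off.
state = resume("s1")
```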
Supporting this memory architecture requires a new generation of data infrastructure: systems that handle both structured and unstructured data, offer strong consistency, and persist state reliably. Without this foundation, AI agents will remain clever but forgetful, unable to reason or adapt in meaningful ways.
The CIO’s new playbook: Architecting the intelligent enterprise
For CIOs, this convergence means a fundamental shift from managing siloed systems to architecting a unified enterprise platform where a real-time data flywheel can spin. This requires building a resilient data foundation that can deliver immediate business value today while also supporting the semantic and transactional memory that tomorrow’s autonomous AI agents require. By solving AI’s inherent “memory problem,” this approach paves the way for truly intelligent systems that can reason, plan, and act with full business context, driving unprecedented innovation.
At Google Cloud, we’ve seen these patterns emerge across industries, from retail to travel to finance. Our platform is designed to support this shift: open by design but also unified and built for scale. It is engineered to converge operational and analytical data so organizations can move from insight to action, without delay.
Learn more by reading the data leaders’ best practice guide for data and AI.
EU Publishes Final AI Code of Practice to Guide AI Companies
The European Commission said Thursday (July 10) that it published the final version of a voluntary framework designed to help artificial intelligence companies comply with the European Union’s AI Act.
Researchers develop AI model to generate realistic global rainfall maps
Severe weather events, such as heavy rainfall, are on the rise worldwide. Reliable assessments of these events can save lives and protect property. Researchers at the Karlsruhe Institute of Technology (KIT) have developed a new method that uses artificial intelligence (AI) to convert low-resolution global weather data into high-resolution precipitation maps. The method is fast, efficient, and independent of location. Their findings have been published in npj Climate and Atmospheric Science.
“Heavy rainfall and flooding are much more common in many regions of the world than they were just a few decades ago,” said Dr. Christian Chwala, an expert on hydrometeorology and machine learning at the Institute of Meteorology and Climate Research (IMK-IFU), KIT’s Campus Alpin in the German town of Garmisch-Partenkirchen. “But until now the data needed for reliable regional assessments of such extreme events was missing for many locations.”
His research team addresses this problem with a new AI that can generate precise global precipitation maps from low-resolution information. The result is a unique tool for the analysis and assessment of extreme weather, even for regions with poor data coverage, such as the Global South.
For their method, the researchers use historical data from weather models that describe global precipitation at hourly intervals with a spatial resolution of about 24 kilometers. Not only was their generative AI model (spateGEN-ERA5) trained with this data, it also learned (from high-resolution weather radar measurements made in Germany) how precipitation patterns and extreme events correlate at different scales, from coarse to fine.
“Our AI model doesn’t merely create a more sharply focused version of the input data, it generates multiple physically plausible, high-resolution precipitation maps,” said Luca Glawion of IMK-IFU, who developed the model while working on his doctoral thesis in the SCENIC research project. “Details at a resolution of 2 kilometers and 10 minutes become visible. The model also provides information about the statistical uncertainty of the results, which is especially relevant when modeling regionalized heavy rainfall events.”
He also noted that validation with weather radar data from the United States and Australia showed that the method can be applied to entirely different climatic conditions.
Correctly assessing flood risks worldwide
With their method’s global applicability, the researchers offer new possibilities for better assessment of regional climate risks. “It’s the especially vulnerable regions that often lack the resources for detailed weather observations,” said Dr. Julius Polz of IMK-IFU, who was also involved in the model’s development.
“Our approach will enable us to make much more reliable assessments of where heavy rainfall and floods are likely to occur, even in such regions with poor data coverage.” Not only can the new AI method contribute to disaster control in emergencies, it can also help with the implementation of more effective long-term preventive measures such as flood control.
More information: Luca Glawion et al., Global spatio-temporal ERA5 precipitation downscaling to km and sub-hourly scale using generative AI, npj Climate and Atmospheric Science (2025). DOI: 10.1038/s41612-025-01103-y
Provided by Karlsruhe Institute of Technology
Musk unveils Grok 4 AI update after chatbot posted antisemitic remarks
Elon Musk’s artificial intelligence chatbot, Grok, received a major update.
Musk introduced Grok 4 during a livestream on X late Wednesday, calling it “the smartest AI in the world.” He praised the chatbot’s capabilities, saying it is smarter than “almost all graduate students in all disciplines, simultaneously.”
Introducing Grok 4, the world’s most powerful AI model.
Watch the livestream with @elonmusk and the @xAI team now. https://t.co/Mjt6w21qwd
— Engineering (@XEng) July 10, 2025
“Grok 4 is at the point where it essentially never gets math/physics exam questions wrong, unless they are skillfully adversarial,” Musk said. “It can identify errors or ambiguities in questions, then fix the error in the question or answer each variant of an ambiguous question.”
Musk, who also owns Tesla, said in a separate social media post that Grok will be integrated into Tesla’s electric vehicles as early as next week.
Grok 4’s release came just one day after the earlier model, Grok 3, shared several controversial posts, including some that praised Adolf Hitler.
In a statement, xAI, the company behind Grok, said it is actively working to remove hate speech from the platform and took swift action to update the model.
The controversial posts have since been deleted.