The great convergence: Why your data’s past and future are colliding

For decades, a fundamental divide has shaped enterprise data strategy: the absolute separation between operational and analytical systems. On one side stood the digital engine of the company: the online transaction processing (OLTP) systems that manage inventory in real time. On the other, the strategic brains: the online analytical processing (OLAP) platforms that sift through historical data to support planning and strategy. This divide, traditionally bridged by batch extract, transform, load (ETL) pipelines, forced leaders to make decisions based on yesterday’s insights.

The availability of AI that can provide strategic insights in real-time is tearing down this wall. Forward-thinking organizations are building unified platforms that combine operational and analytical capabilities. This convergence enables real-time analysis of live data streams, allowing companies to replace reactive reporting with proactive decision-making that delivers immediate business value.

From reactive to proactive: Real-time value in action

The convergence of operational and analytical data shifts business decision-making from reactive analysis to in-the-moment thinking. Instead of asking “What happened?”, organizations can focus on “What’s happening now?” and “What can I influence next?”

For example:

  • The North Face analyzed real-time search data to discover that customers were searching for a “midi parka,” a term absent from their product descriptions. By quickly renaming a product to match this trend, they saw a three-fold increase in conversions overnight.
  • In logistics, Geotab is analyzing billions of daily data points from 4.6 million connected vehicles for real-time fleet optimization and driver safety.
  • In financial services, Transparently.AI built a fraud detection platform that achieves 90% accuracy by analyzing transactions as they happen, not after the fact.

The technology making convergence a reality

The shift from siloed, lagging data to real-time, actionable intelligence is made possible by a new generation of cloud-native technologies. Together, they create a powerful data flywheel: a continuous loop where live operational data is analyzed for insights, which are then pushed back into business systems to guide action and improve operations. This self-reinforcing cycle is built on four key technologies:

Data federation: Federation allows an analytical platform to query data directly from operational databases, without moving or copying it. This zero-copy approach lets analysts combine real-time transactional data with historical data for a complete, up-to-the-second view of business operations.
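As a rough illustration (not a prescription for any particular vendor), the sketch below shows what a zero-copy federated query can look like in Python using BigQuery’s EXTERNAL_QUERY function; the project, connection ID, dataset, and table names are hypothetical.

```python
# A minimal sketch of data federation: historical warehouse data is joined with
# live rows read in place from an operational database, with no copy or ETL job.
# Project, connection ID, dataset, and table names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT hist.product_id,
       hist.trailing_30d_units,
       live.units_on_hand
FROM `analytics.product_sales_history` AS hist
JOIN EXTERNAL_QUERY(
  'my-project.us.operational-postgres',                -- hypothetical connection
  'SELECT product_id, units_on_hand FROM inventory'    -- runs on the live database
) AS live
USING (product_id)
WHERE live.units_on_hand < 0.1 * hist.trailing_30d_units  -- flag likely stockouts
"""

for row in client.query(sql).result():
    print(row.product_id, row.units_on_hand, row.trailing_30d_units)
```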

Real-time data streaming: Technologies like change data capture (CDC) can stream updates from operational systems to analytical platforms as they happen, without impacting performance. This ensures that analytical tools are always working with the freshest available data.
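For illustration only, here is a minimal consumer of a Debezium-style change stream; the Kafka topic, servers, and the apply_to_warehouse() helper are hypothetical stand-ins for whatever streaming pipeline a given platform provides.

```python
# A minimal sketch of consuming change data capture (CDC) events: Debezium-style
# change records are read from a Kafka topic and applied to an analytical store so
# analysts always see fresh data. Topic, servers, and the helper are hypothetical.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "ops.public.orders",                       # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

def apply_to_warehouse(op: str, row: dict) -> None:
    """Placeholder: upsert or delete the corresponding row in the analytical table."""
    print(op, row)

for message in consumer:
    event = message.value
    op = event.get("op")                       # "c"=create, "u"=update, "d"=delete
    row = event.get("after") or event.get("before")
    apply_to_warehouse(op, row)
```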

Unified storage layer: A modern data lakehouse stores information in open formats accessible to both analytical engines and transactional databases. This eliminates data duplication and allows a single dataset to support everything from advanced analytics and BI dashboards to operational decision-making.
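The sketch below illustrates the idea with open Parquet files on shared storage read by two different engines; the paths and table contents are made up, and a production lakehouse would typically add a table format such as Iceberg or Delta on top.

```python
# A minimal sketch of a unified, open-format storage layer: one Parquet dataset is
# written once and read directly by different engines without duplicating the data.
# File paths and contents are illustrative only.
import os
import duckdb
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

os.makedirs("lakehouse", exist_ok=True)

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [19.99, 5.50, 42.00],
})

# Write once, in an open columnar format that any engine can read.
pq.write_table(pa.Table.from_pandas(orders), "lakehouse/orders.parquet")

# An analytical engine scans the same files in place.
print(duckdb.sql("SELECT SUM(amount) AS revenue FROM 'lakehouse/orders.parquet'"))

# A BI job or operational service reads the very same dataset, with no second copy.
print(pd.read_parquet("lakehouse/orders.parquet").head())
```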

Reverse ETL: Reverse ETL sends insights, such as customer scores or product recommendations, back into business systems like CRMs and marketing platforms. This puts analytics directly into the hands of frontline teams to drive action in real time.
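As a hedged example, the sketch below pushes churn-risk scores from a warehouse query back into a CRM over a REST API; the endpoint, auth token, and field names are hypothetical.

```python
# A minimal sketch of reverse ETL: scores computed in the analytics platform are
# written back into a CRM so frontline teams can act on them. The endpoint, token,
# and field names are hypothetical.
import requests

CRM_URL = "https://crm.example.com/api/v1/contacts/{contact_id}"
HEADERS = {"Authorization": "Bearer <token>"}

scores = [  # in practice, the result of a warehouse query
    {"contact_id": "c-101", "churn_risk": 0.82},
    {"contact_id": "c-102", "churn_risk": 0.07},
]

for record in scores:
    resp = requests.patch(
        CRM_URL.format(contact_id=record["contact_id"]),
        headers=HEADERS,
        json={"custom_fields": {"churn_risk": record["churn_risk"]}},
        timeout=10,
    )
    resp.raise_for_status()  # surface failures so no score is silently dropped
```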

The ultimate catalyst: Giving AI a memory

Converged operational and analytic data systems lay the groundwork for real-time intelligence, but the next wave of business impact will come from autonomous agents that can make and act on decisions – not just support them. However, today’s large language models have a fundamental limitation: They lack business context and are, simply put, forgetful. Without an external brain, every interaction starts from a blank slate.

This is where connecting agents with data across analytical and operational platforms becomes critical. To build truly useful agents, we must give them two types of memory:

1. Semantic memory: This is the agent’s deep, contextual library of knowledge about your business, products, and industry. To improve AI accuracy and reduce hallucinations, modern data platforms now support retrieval-augmented generation (RAG), a technique that lets AI models ground responses in real business data rather than just their generic training patterns. This capability relies on vector embeddings and vector search, which find relevant content by comparing the meaning of queries and data rather than exact keywords. Using this approach, AI systems can retrieve the right information from enterprise data platforms, multimodal datasets (e.g., documents), knowledge bases, or even live operational data (a minimal retrieval sketch follows this list).

2. Transactional memory: For personalization and reliability, agents need to remember specific interactions and maintain state. This includes both episodic memory (a log of conversations and user preferences, so conversations feel continuous rather than resetting each time) and state management (tracking progress through complex tasks). If interrupted, an agent uses this stateful memory to pick up where it left off (a state-store sketch follows the next paragraph).
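To make the semantic-memory idea concrete, here is a minimal sketch of the retrieval step behind RAG; the embed() helper is a placeholder for whatever embedding model the platform provides, and the documents are invented.

```python
# A minimal sketch of RAG retrieval: documents and the user query are embedded as
# vectors, the closest documents are found by cosine similarity, and those snippets
# are prepended to the prompt. embed() is a placeholder, not a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; swap in a real embedding model or API here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

docs = [
    "Midi parkas are hip-length insulated jackets in our outerwear line.",
    "The return window is 60 days for unworn items with tags attached.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-sims)[:k]]

question = "What is a midi parka?"
context = retrieve(question)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
# prompt is then sent to the LLM, grounding its answer in enterprise data
```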

Supporting this memory architecture requires a new generation of data infrastructure: systems that handle both structured and unstructured data, offer strong consistency, and persist state reliably. Without this foundation, AI agents will remain clever but forgetful, unable to reason or adapt in meaningful ways.
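As one concrete (and deliberately simplified) illustration of such a foundation, the sketch below persists an agent’s episodic log and task state in SQLite; the schema, session IDs, and helper names are invented, and a production system would use a database with stronger consistency and durability guarantees.

```python
# A minimal sketch of transactional memory for an agent: episodic memory (a log of
# conversation turns) and task state are persisted so an interrupted agent can
# resume where it left off. Schema and names are illustrative only.
import json
import sqlite3

db = sqlite3.connect("agent_memory.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS episodes (
    session_id TEXT, turn INTEGER, role TEXT, content TEXT,
    PRIMARY KEY (session_id, turn)
);
CREATE TABLE IF NOT EXISTS task_state (
    session_id TEXT PRIMARY KEY, state_json TEXT
);
""")

def log_turn(session_id: str, turn: int, role: str, content: str) -> None:
    with db:  # commit atomically so a crash never leaves a half-written turn
        db.execute("INSERT OR REPLACE INTO episodes VALUES (?, ?, ?, ?)",
                   (session_id, turn, role, content))

def save_state(session_id: str, state: dict) -> None:
    with db:
        db.execute("INSERT OR REPLACE INTO task_state VALUES (?, ?)",
                   (session_id, json.dumps(state)))

def resume(session_id: str) -> dict:
    row = db.execute("SELECT state_json FROM task_state WHERE session_id = ?",
                     (session_id,)).fetchone()
    return json.loads(row[0]) if row else {"step": 0}

log_turn("s-1", 1, "user", "Book my usual hotel for next Tuesday.")
save_state("s-1", {"step": 2, "pending": "confirm_payment"})
print(resume("s-1"))  # after a restart: {'step': 2, 'pending': 'confirm_payment'}
```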

The CIO’s new playbook: Architecting the intelligent enterprise

For CIOs, this convergence means a fundamental shift from managing siloed systems to architecting a unified enterprise platform where a real-time data flywheel can spin. This requires building a resilient data foundation that can deliver immediate business value today while also supporting the semantic and transactional memory that tomorrow’s autonomous AI agents require. By solving AI’s inherent “memory problem,” this approach paves the way for truly intelligent systems that can reason, plan, and act with full business context, driving unprecedented innovation.

At Google Cloud, we’ve seen these patterns emerge across industries, from retail to travel to finance. Our platform is designed to support this shift: open by design but also unified and built for scale. It is engineered to converge operational and analytical data so organizations can move from insight to action, without delay.

Learn more by reading the data leaders’ best practice guide for data and AI.



EU Publishes Final AI Code of Practice to Guide AI Companies

The European Commission said Thursday (July 10) that it published the final version of a voluntary framework designed to help artificial intelligence companies comply with the European Union’s AI Act.

The General-Purpose AI Code of Practice seeks to clarify legal obligations under the act for providers of general-purpose AI models like ChatGPT, especially those posing systemic risks, such as models that could help bad actors develop chemical and biological weapons.

The code’s publication “marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,” Henna Virkkunen, the commission’s executive vice president for tech sovereignty, security and democracy, said in a statement. The commission is the EU’s executive arm.

The code was developed by 13 independent experts after hearing from 1,000 stakeholders, including AI developers, industry organizations, academics, civil society organizations and representatives of EU member states, according to a Thursday (July 10) press release. Observers from global public agencies also participated.

The EU AI Act, which was approved in 2024, is the first comprehensive legal framework governing AI. It aims to ensure that AI systems used in the EU are safe and transparent, as well as respectful of fundamental human rights.

The act classifies AI applications into risk categories — unacceptable, high, limited and minimal — and imposes obligations accordingly. Any AI company whose services are used by EU residents must comply with the act. Fines can go up to 7% of global annual revenue.

The code is voluntary, but AI model companies that sign on will benefit from lower administrative burdens and greater legal certainty, according to the commission. The next step is for the EU’s 27 member states and the commission to endorse it.

Inside the Code of Practice

The code is structured into three core chapters: Transparency; Copyright; and Safety and Security.

The Transparency chapter includes a model documentation form, described by the commission as a “user-friendly” tool to help companies demonstrate compliance with transparency requirements.

The Copyright chapter offers “practical solutions to meet the AI Act’s obligation to put in place a policy to comply with EU copyright law.”

The Safety and Security chapter, aimed at the most advanced systems with systemic risk, outlines “concrete state-of-the-art practices for managing systemic risks.”

The drafting process began with a plenary session in September 2024 and proceeded through multiple working group meetings, virtual drafting rounds and provider workshops.

The code takes effect Aug. 2, but the commission’s AI Office will enforce the rules on new AI models after one year and on existing models after two years.

A spokesperson for OpenAI told The Wall Street Journal that the company is reviewing the code to decide whether to sign it. A Google spokesperson said the company would also review the code.

Researchers develop AI model to generate global realistic rainfall maps

Published

on


Working from low-resolution global precipitation data, the spateGAN-ERA5 AI model generates high-resolution fields for the analysis of heavy rainfall events. Credit: Christian Chwala, KIT

Severe weather events, such as heavy rainfall, are on the rise worldwide. Reliable assessments of these events can save lives and protect property. Researchers at the Karlsruhe Institute of Technology (KIT) have developed a new method that uses artificial intelligence (AI) to convert low-resolution global weather data into high-resolution precipitation maps. The method is fast, efficient, and independent of location. Their findings have been published in npj Climate and Atmospheric Science.

“Heavy rainfall and flooding are much more common in many regions of the world than they were just a few decades ago,” said Dr. Christian Chwala, an expert on hydrometeorology and machine learning at the Institute of Meteorology and Climate Research (IMK-IFU), KIT’s Campus Alpin in the German town of Garmisch-Partenkirchen. “But until now the data needed for reliable regional assessments of such extreme events was missing for many locations.”

His research team addresses this problem with a new AI that can generate precise global precipitation maps from low-resolution information. The result is a unique tool for the analysis and assessment of extreme weather, even for regions with poor data coverage, such as the Global South.

For their method, the researchers use ERA5 reanalysis data, which describe global precipitation at hourly intervals with a spatial resolution of about 24 kilometers. Their generative AI model (spateGAN-ERA5) was not only trained with this data; it also learned, from high-resolution weather radar measurements made in Germany, how precipitation patterns and extreme events correlate at different scales, from coarse to fine.

“Our AI model doesn’t merely create a more sharply focused version of the input data, it generates multiple physically plausible, high-resolution maps,” said Luca Glawion of IMK-IFU, who developed the model while working on his doctoral thesis in the SCENIC research project. “Details at a resolution of 2 kilometers and 10 minutes become visible. The model also provides information about the statistical uncertainty of the results, which is especially relevant when modeling regionalized events.”

He also noted that validation with weather radar data from the United States and Australia showed that the method can be applied to entirely different climatic conditions.
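To illustrate the general idea, and not the published spateGAN-ERA5 code, the toy sketch below produces an ensemble of plausible fine-scale fields from one coarse precipitation field and reads off a per-pixel uncertainty from the ensemble spread; the generator here is a crude placeholder, not a trained model.

```python
# Toy sketch of generative downscaling with uncertainty (not the published model):
# a generator maps one coarse precipitation field plus random noise to many
# plausible high-resolution fields; the ensemble spread estimates the uncertainty.
import numpy as np

def generator(coarse: np.ndarray, noise: np.ndarray) -> np.ndarray:
    """Placeholder for a trained generative model: simple upsampling perturbed by
    noise, just to illustrate the ensemble mechanics."""
    fine = np.kron(coarse, np.ones((12, 12)))        # ~24 km grid -> ~2 km grid
    return np.clip(fine * (1.0 + 0.2 * noise), 0.0, None)

coarse_field = np.random.gamma(shape=0.5, scale=2.0, size=(10, 10))  # mm/h

ensemble = np.stack([
    generator(coarse_field, np.random.standard_normal((120, 120)))
    for _ in range(20)                               # 20 plausible realizations
])

mean_rain = ensemble.mean(axis=0)      # best estimate of the fine-scale field
uncertainty = ensemble.std(axis=0)     # spread = statistical uncertainty per pixel
print(mean_rain.shape, float(uncertainty.max()))
```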

Correctly assessing flood risks worldwide

With their method’s global applicability, the researchers offer new possibilities for better assessment of regional climate risks. “It’s the especially vulnerable regions that often lack the resources for detailed weather observations,” said Dr. Julius Polz of IMK-IFU, who was also involved in the model’s development.

“Our approach will enable us to make much more reliable assessments of where heavy rainfall and floods are likely to occur, even in such regions with poor data coverage.” Not only can the new AI method contribute to disaster control in emergencies, it can also help with the implementation of more effective long-term preventive measures such as flood control.

More information:
Luca Glawion et al, Global spatio-temporal ERA5 precipitation downscaling to km and sub-hourly scale using generative AI, npj Climate and Atmospheric Science (2025). DOI: 10.1038/s41612-025-01103-y

Musk unveils Grok 4 AI update after chatbot posted antisemitic remarks

Published

on


Elon Musk’s artificial intelligence chatbot, Grok, received a major update.

Musk introduced Grok 4 during a livestream on X late Wednesday, calling it “the smartest AI in the world.” He praised the chatbot’s capabilities, saying it is smarter than “almost all graduate students in all disciplines, simultaneously.”

“Grok 4 is at the point where it essentially never gets math/physics exam questions wrong, unless they are skillfully adversarial,” Musk said. “It can identify errors or ambiguities in questions, then fix the error in the question or answer each variant of an ambiguous question.”

Musk, who also leads Tesla, said in a separate social media post that Grok will be integrated into the company’s electric vehicles as early as next week.

Grok 4’s release came just one day after the earlier model, Grok 3, shared several controversial posts, including some that praised Adolf Hitler.

In a statement, xAI, the company behind Grok, said it is actively working to remove hate speech from the platform and took swift action to update the model.

The controversial posts have since been deleted.




