
Tools & Platforms

UCR pioneers way to remove private data from AI models | UCR News



A team of computer scientists at UC Riverside has developed a method to erase private and copyrighted data from artificial intelligence models—without needing access to the original training data.

This advance, detailed in a paper presented in July at the International Conference on Machine Learning in Vancouver, Canada, addresses a rising global concern about personal and copyrighted materials remaining in AI models indefinitely—and thus accessible to model users—despite efforts by the original creators to delete or guard their information with paywalls and passwords.

Pictured, from left: Ümit Yiğit Başaran, Başak Güler, and Amit Roy-Chowdhury.

The UCR innovation compels AI models to “forget” selected information while maintaining the models’ functionality on the remaining data. It is a significant advance: models can be amended without being rebuilt from scratch on the voluminous original training data, a process that is costly and energy-intensive. The approach also enables the removal of private information from AI models even when the original training data is no longer available.

“In real-world situations, you can’t always go back and get the original data,” said Ümit Yiğit Başaran, a UCR electrical and computer engineering doctoral student and lead author of the study. “We’ve created a certified framework that works even when that data is no longer available.”

The need is pressing. Tech companies face new privacy laws, such as the European Union’s General Data Protection Regulation and the California Consumer Privacy Act, that govern how personal data embedded in large-scale machine learning systems must be handled and, on request, deleted.

Moreover, The New York Times is suing OpenAI and Microsoft over the use of its many copyrighted articles to train Generative Pre-trained Transformer, or GPT, models.

AI models “learn” the patterns of words from a vast amount of texts scraped from the Internet. When queried, the models predict the most likely word combinations, generating natural-language responses to user prompts. Sometimes they generate near-verbatim reproductions of the training texts, allowing users to bypass the paywalls of the content creators.

The UC Riverside research team—comprising Başaran, professor Amit Roy-Chowdhury, and assistant professor Başak Güler—developed what they call a “source-free certified unlearning” method. The technique allows AI developers to remove targeted data by using a substitute, or “surrogate,” dataset that statistically resembles the original data.

The system adjusts model parameters and adds carefully calibrated random noise to ensure the targeted information is erased and cannot be reconstructed.

Their framework builds on a concept in AI optimization that efficiently approximates how a model would change if it had been retrained from scratch. The UCR team enhanced this approach with a new noise-calibration mechanism that compensates for discrepancies between the original and surrogate datasets. 
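
The paper’s exact algorithm is not spelled out here, but the general recipe the article describes, an influence-style parameter update estimated from surrogate data followed by calibrated Gaussian noise, can be sketched roughly as follows. The sketch assumes a simple regularized logistic-regression model, a hypothetical surrogate dataset standing in for the unavailable training data, and an illustrative noise scale; it is not the authors’ implementation.

```python
# Illustrative sketch only: approximately "unlearn" a forget set from a
# regularized logistic-regression model using a surrogate dataset in place
# of the unavailable original training data, then add Gaussian noise.
# This mirrors the general influence-function-plus-noise recipe described
# in the article, not the published algorithm.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_point(theta, x, y, lam):
    """Gradient of the regularized logistic loss at a single example."""
    return (sigmoid(x @ theta) - y) * x + lam * theta

def hessian(theta, X, lam):
    """Average Hessian of the regularized logistic loss over a dataset."""
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)
    return (X.T * w) @ X / len(X) + lam * np.eye(X.shape[1])

def unlearn(theta, X_forget, y_forget, X_surrogate, lam=0.1,
            noise_scale=0.01, n_train=None, rng=None):
    rng = rng or np.random.default_rng(0)
    # The surrogate set stands in for the original data when estimating
    # curvature and, if unknown, the original training-set size.
    n_train = n_train or len(X_surrogate)
    H = hessian(theta, X_surrogate, lam)
    # Influence-style Newton step that approximates retraining without
    # the forget set.
    g = sum(grad_point(theta, x, y, lam) for x, y in zip(X_forget, y_forget))
    theta_new = theta + np.linalg.solve(H, g) / n_train
    # Calibrated Gaussian noise so the erased examples cannot be
    # reconstructed from the updated parameters.
    return theta_new + rng.normal(0.0, noise_scale, size=theta.shape)
```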

The researchers validated their method using both synthetic and real-world datasets and found it provided privacy guarantees close to those achieved with full retraining—yet required far less computing power.

The current work applies to simpler models—still widely used—but could eventually scale to complex systems like ChatGPT, said Roy-Chowdhury, the co-director of UCR’s Riverside Artificial Intelligence Research and Education (RAISE) Institute and a professor in the Marlan and Rosemary Bourns College of Engineering. 

Beyond regulatory compliance, the technique holds promise for media organizations, medical institutions, and others handling sensitive data embedded in AI models, the researchers said. It could also empower people to demand the removal of personal or copyrighted content from AI systems.

“People deserve to know their data can be erased from machine learning models—not just in theory, but in provable, practical ways,” Güler said.

The team’s next steps involve refining the method to work with more complex model types and datasets and building tools to make the technology accessible to AI developers worldwide.

The title of the paper is “A Certified Unlearning Approach without Access to Source Data.” The work was done in collaboration with Sk Miraj Ahmed, a computational science research associate at Brookhaven National Laboratory in Upton, N.Y., who received his doctoral degree at UCR. Both Roy-Chowdhury and Güler are faculty members in the Department of Electrical and Computer Engineering with secondary appointments in the Department of Computer Science and Engineering.

 




Tools & Platforms

Revealed: What our biggest companies worry about when it comes to AI – AFR


Tools & Platforms

Google engineer releases free 400-page guide to agentic AI systems



A Google distinguished engineer has published a comprehensive 400-page technical guide to building autonomous AI systems, offering detailed blueprints for creating sophisticated artificial intelligence agents. Antonio Gulli, Senior Director and Distinguished Engineer in Google’s CTO Office, announced “Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems” with a scheduled release date of December 3, 2025.

The publication addresses a critical gap in AI development methodology. According to Gulli, building effective agentic systems requires more than just a powerful language model—it demands structured architectural blueprints. “It’s about moving from raw capability to robust, real-world applications,” Gulli stated in the book’s introduction.

The guide presents 21 distinct agentic patterns that serve as fundamental building blocks for autonomous AI systems. These patterns range from foundational concepts such as Prompt Chaining and Tool Use to advanced implementations including Multi-Agent Collaboration and Self-Correction frameworks. Each pattern represents a reusable solution to common challenges encountered when building intelligent, goal-oriented systems.
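
As a concrete illustration of the most basic of these patterns, Prompt Chaining simply feeds the output of one model call into the next. The minimal sketch below assumes a hypothetical call_llm helper standing in for whatever model client a developer uses; it is not code from the book.

```python
# Minimal Prompt Chaining illustration: the output of one model call becomes
# the input of the next. `call_llm` is a hypothetical placeholder, not an
# API from the book or from any specific framework.
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (OpenAI, Gemini, a local model, etc.)."""
    raise NotImplementedError("wire up your model client here")

def summarize_then_extract(document: str) -> str:
    # Step 1: condense the raw document.
    summary = call_llm(
        f"Summarize the following document in five sentences:\n\n{document}"
    )
    # Step 2: feed the first step's output into a second, narrower prompt.
    return call_llm(
        f"List the concrete action items implied by this summary:\n\n{summary}"
    )
```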

Technical specifications detailed in the book cover multiple implementation frameworks. The guide draws on three prominent development platforms: LangChain and its extension LangGraph for building complex operational sequences, CrewAI for orchestrating multiple agents, and the Google Agent Development Kit (ADK) for evaluation and deployment. This multi-framework approach supports broad applicability across different technical environments.

The publication structure follows a practical methodology. Each chapter focuses on a single agentic pattern, providing pattern overviews, use cases, hands-on code examples, and key takeaways. According to the table of contents, Part One covers 103 pages of core execution patterns including Prompt Chaining, Routing, Parallelization, Reflection, Tool Use, Planning, and Multi-Agent systems.

Part Two addresses 61 pages of memory management and learning capabilities. This section explores Memory Management, Learning and Adaptation, Model Context Protocol (MCP), and Goal Setting frameworks. The technical depth continues through Parts Three and Four, covering 114 pages of advanced topics including Exception Handling, Human-in-the-Loop patterns, Knowledge Retrieval, and Safety implementations.


The book’s technical approach emphasizes practical implementation over theoretical discussion. According to the publication details, the guide includes executable code examples, architectural diagrams, and step-by-step implementation instructions. This hands-on methodology addresses the growing demand for actionable AI development resources in enterprise environments.

Early reception among AI practitioners has been positive. Several technology leaders shared favorable assessments of the guide’s practical value on social media, and the book was listed as a “#1 New Release in Probability & Statistics” on Amazon ahead of its December 3, 2025 release date.

Gulli brings extensive technical credentials to the publication. His background includes more than 30 years of experience in AI, search, and cloud technologies. He holds a Ph.D. in Computer Science from the University of Pisa and has previously authored technical books, including “Deep Learning with Keras,” published in multiple editions and languages.

The economic context for agentic AI development shows significant market potential. Recent research published on PPC Land indicates Google Cloud projects the agentic AI market could reach $1 trillion by 2040, with 90% enterprise adoption expected. This projection reflects growing demand for autonomous AI systems capable of executing complex workflows with minimal human intervention.

The timing of Gulli’s publication coincides with increased industry focus on AI agent development. Major technology companies have recently released comprehensive AI agent guides, marking a shift toward more autonomous systems. Companies including Anthropic, OpenAI, and McKinsey have published complementary resources, though Gulli’s guide stands out for its comprehensive technical depth and practical implementation focus.

The book addresses critical challenges in AI agent reliability and safety. Traditional single-prompt interactions often prove insufficient for complex, multi-step tasks. Agentic patterns provide structured approaches to decomposing complex objectives into manageable components while maintaining coherence across extended workflows.

Pattern composition represents a key advancement outlined in the guide. The publication demonstrates how individual patterns combine to create sophisticated systems. For example, an autonomous research assistant might integrate Planning patterns for task decomposition, Tool Use for information gathering, Multi-Agent Collaboration for specialized analysis, and Reflection for quality assurance.

Memory Management patterns detailed in the book enable agents to maintain context across interactions while learning from experience. These capabilities distinguish true agentic systems from simple reactive models. The technical specifications include both short-term conversational context and long-term knowledge retention mechanisms.
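
A rough sketch of that short-term versus long-term split is shown below. The class name, in-memory storage, and keyword lookup are illustrative assumptions, not the book’s implementation; a production system would typically back long-term memory with embeddings or a vector store.

```python
# Toy memory layer: a bounded buffer of recent turns (short-term context)
# plus a durable list of facts (long-term retention). Storage choices here
# are placeholders for illustration only.
from collections import deque

class AgentMemory:
    def __init__(self, short_term_limit: int = 10):
        self.short_term = deque(maxlen=short_term_limit)   # recent conversation turns
        self.long_term: list[tuple[str, str]] = []          # durable (topic, fact) notes

    def remember_turn(self, role: str, text: str) -> None:
        self.short_term.append((role, text))

    def store_fact(self, topic: str, fact: str) -> None:
        self.long_term.append((topic, fact))

    def recall(self, query: str) -> list[str]:
        # Naive keyword match; a real agent would use embeddings or a vector store.
        return [fact for topic, fact in self.long_term if query.lower() in topic.lower()]

    def build_context(self, query: str) -> str:
        history = "\n".join(f"{role}: {text}" for role, text in self.short_term)
        facts = "\n".join(self.recall(query))
        return f"Relevant facts:\n{facts}\n\nRecent conversation:\n{history}"
```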

Safety and alignment considerations receive dedicated coverage through specialized “Guardrails/Safety Patterns.” These frameworks address challenges of autonomous operation while maintaining alignment with intended objectives. The patterns include input validation, output filtering, human oversight integration, and graceful degradation capabilities.
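
The sketch below illustrates those ideas in miniature, with a blocklist check on input, redaction on output, and a confidence threshold that escalates to a human reviewer. The specific checks and threshold are illustrative assumptions, not the book’s safety patterns.

```python
# Toy guardrail wrapper: validate the request, run the agent, filter the
# draft, and fall back to a human when confidence is low. All checks are
# placeholders for illustration.
BLOCKED_TERMS = {"credit card number", "social security number"}

def validate_input(user_request: str) -> None:
    if any(term in user_request.lower() for term in BLOCKED_TERMS):
        raise ValueError("request asks for disallowed personal data")

def filter_output(draft: str) -> str:
    # Redact disallowed terms before returning the agent's answer.
    for term in BLOCKED_TERMS:
        draft = draft.replace(term, "[redacted]")
    return draft

def run_with_guardrails(agent_step, user_request: str, confidence_threshold: float = 0.7) -> str:
    validate_input(user_request)
    draft, confidence = agent_step(user_request)  # agent_step returns (text, confidence)
    if confidence < confidence_threshold:
        return "Escalating to a human reviewer."  # human-in-the-loop fallback
    return filter_output(draft)
```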

The publication includes extensive technical documentation spanning 424 total pages. Appendices provide advanced prompting techniques, framework overviews, and implementation guidelines. A comprehensive glossary defines technical terms and concepts used throughout the guide.

Distribution of the guide follows open-access principles. Google has made the technical documentation publicly available through standard channels, enabling widespread practitioner access. This approach supports broader adoption of structured AI agent development methodologies across the industry.

Why this matters for marketing

The release of this comprehensive guide signals the maturation of agentic AI from experimental technology to practical implementation framework. For marketing professionals, these developments indicate significant opportunities for campaign automation and optimization capabilities that extend far beyond current programmatic advertising approaches.

The emergence of agentic AI capabilities in marketing contexts has already shown measurable impact, with AI search traffic converting at rates 23 times higher than traditional organic search visitors despite representing minimal traffic volume. This pattern suggests that AI-powered systems are fundamentally changing how users discover and interact with content.

Google’s recent introduction of automated calling features demonstrates practical agentic implementations in customer service contexts. The system autonomously contacts businesses to gather pricing and availability information on behalf of users, representing the type of goal-oriented behavior that Gulli’s patterns enable at scale.

The technical frameworks outlined in the guide provide marketing teams with structured approaches to building custom AI agents for campaign management, content optimization, and customer interaction automation. Rather than relying on black-box solutions, these patterns enable transparent, controllable implementations that align with specific business objectives.


Summary

Who: Antonio Gulli, Senior Director and Distinguished Engineer in Google’s CTO Office, with over 30 years of experience in AI, Search, and Cloud technologies and a Ph.D. in Computer Science from the University of Pisa.

What: A comprehensive 400-page technical guide titled “Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems” that presents 21 distinct patterns for building autonomous AI agents, covering everything from basic prompt chaining to advanced multi-agent collaboration frameworks.

When: Announced ahead of a scheduled release date of December 3, 2025; the book is listed as a “#1 New Release in Probability & Statistics” on Amazon.

Where: Announced through multiple channels including social media and Amazon pre-orders, with Google making the technical documentation publicly available through standard distribution mechanisms.

Why: The guide addresses the critical gap between powerful language models and practical autonomous systems, providing structured architectural blueprints necessary for building reliable, goal-oriented AI agents that can operate with minimal human intervention in real-world applications.



