AIceberg Secures $10 Million in Seed Funding to Transform AI from a Business Risk to a Competitive Advantage

The AI innovator’s new solution, AIceberg, enables organizations to safely and securely unlock the power of generative AI

NEW YORK, March 6, 2025 /PRNewswire/ — AIceberg, a leading innovator of AI trust, safety, security, and compliance technology, today announced $10 million in seed funding and the launch of its AI trust platform, which provides enterprise-grade security with real-time, automated validation of all AI application traffic, from speech and text to images and source code.

The financing, led by SYN Ventures and Sprout & Oak, further establishes the company’s leadership position in pioneering AI transparency and compliance for enterprise and public entities.

“Organizations adopting generative and agentic AI face critical challenges in ensuring safety, security, and compliance,” said Alex Schlager, CEO at AIceberg. “Many are also operating with a false sense of security: while AI TRiSM solutions powered by LLMs seem convenient, using LLMs to safeguard LLMs introduces systemic risks and architectural limitations that undermine their effectiveness. AIceberg closes this security gap with purpose-built, non-generative models that detect risk signals and support safe AI adoption. The platform works independently of AI applications, using the content of inputs and outputs to detect and eliminate risks and power the safe, secure, compliant use of generative models and agentic workflows across the enterprise. We thank our customers and investors for their support as we enter our next phase of growth and continue to innovate to ensure security and safety.”

AIceberg unlocks the power of generative and agentic AI while eliminating its risks. Working as an AI firewall and gateway, it monitors user prompts and model/agent responses for risk signals and enforces security and organizational policies. As a result, organizations can mitigate risks and adapt swiftly to emerging threats with real-time responses, agentic action controls, and customized security policies. With AIceberg’s advanced threat detection, security posture stays current against the latest attack vectors, with real-time detection and mitigation of AI cybersecurity threats such as data leaks and unauthorized access.

Key features and benefits include:

  • Guardrails for safety: Permit only use-case-relevant AI interactions to prevent unsanctioned, unsuitable, or illegal content, and protect privacy by automatically redacting personal and sensitive information
  • Up-to-date security posture: Detect common AI cybersecurity attack vectors like prompt injection, prompt leaking, or jailbreaking, and perform sophisticated security analysis for agentic workflows
  • Compliance, transparency, and auditability: Powered by explainable, non-generative AI models, gain maximum accuracy and end-to-end auditability
  • Enterprise observability across all AI interactions: Understand common prompts, objectives, and intentions to improve user experience and gain valuable business intelligence from communication mining of prompt/response pairings
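
To make the gateway model described above concrete, here is a minimal Python sketch of the pattern, assuming a simple screen-both-directions flow: prompts and responses are checked against policy, suspected prompt injection is blocked, and personal data is redacted. Every name and detection rule here is invented for illustration; AIceberg’s actual detectors are purpose-built, non-generative models, not the keyword and regex checks used below.

```python
import re
from dataclasses import dataclass

# Hypothetical illustration of the "AI firewall/gateway" pattern described above.
# All names and rules are invented for this sketch.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")

@dataclass
class Verdict:
    allowed: bool
    reason: str
    sanitized_text: str

def redact_pii(text: str) -> str:
    """Automatically redact personal and sensitive information."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    return SSN_RE.sub("[REDACTED_SSN]", text)

def screen(text: str) -> Verdict:
    """Check one unit of AI traffic (prompt or response) against policy."""
    lowered = text.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return Verdict(False, "possible prompt injection", text)
    return Verdict(True, "ok", redact_pii(text))

def gateway(prompt: str, call_model) -> str:
    """Sit between the user and the model, screening traffic in both directions."""
    inbound = screen(prompt)
    if not inbound.allowed:
        return f"Blocked by policy: {inbound.reason}"
    outbound = screen(call_model(inbound.sanitized_text))
    return outbound.sanitized_text if outbound.allowed else "Response withheld by policy."
```

In a real deployment, the screening step would call trained classifiers and log every verdict to support the auditability and observability features listed above.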

“The public and private sectors are eager to gain a competitive advantage by integrating AI tools, but without appropriate guardrails in place, these efforts can open them to great risk,” said Jay Leek, Managing Partner and Co-founder at SYN Ventures. “AIceberg’s deep domain experience and unmatched technological capabilities have quickly positioned them as the industry leader in developing solutions to make safe AI systems. We are thrilled to invest in the team.”

“AIceberg has an exceptional leadership and research team with deep expertise in AI, cybersecurity, and enterprise risk management,” said MJ Ramachandran, Partner at Sprout & Oak. “We partnered with AIceberg from its earliest days, recognizing the urgent need for enterprises—especially in regulated industries—to adopt generative and agentic AI safely and transparently. Our decision to incubate and invest early was driven by AIceberg’s pioneering approach to AI security, compliance, and explainability. We’re excited to continue supporting their mission to make AI adoption both powerful and responsible.”

To learn more about AIceberg, visit aiceberg.ai.

About AIceberg
AIceberg is pioneering the safe, secure, compliant adoption of artificial intelligence for government and enterprise clients. With expertise in machine learning, cybersecurity, and automation, AIceberg is dedicated to empowering enterprises on their AI journey—from day zero to scale—unlocking transformative value at every stage. Learn more at aiceberg.ai.

For more information or media requests, contact: [email protected]

About SYN Ventures
SYN Ventures is a venture capital firm focused on investing in disruptive and innovative security companies in the cybersecurity, industrial security, national defense, privacy, regulatory compliance, and data governance industries. The firm’s dedicated security team of former CISOs, CEOs and Founders has a proven track record with over 250 years of security investing and operational experience. SYN also has a highly distinguished network of seasoned security advisors and CISOs. For more information on SYN Ventures, please visit: https://www.synventures.com/.

About Sprout & Oak
Sprout & Oak is an early-stage venture capital firm dedicated to backing founders building transformative companies in an AI-first world. We invest at the earliest stages, providing capital, hands-on guidance, and incubation support to help turn bold ideas into generational companies. With a team of experienced operators and investors, we bring deep domain expertise and a powerful network of blue-chip industry leaders to accelerate growth and drive long-term success.

Media Contact:
Danielle Ostrovsky
Hi-Touch PR
[email protected] 

SOURCE AIceberg



How Capital One built production multi-agent AI workflows to power enterprise use cases

How do you balance risk management and safety with innovation in agentic systems — and how do you grapple with core considerations around data and model selection? In this VB Transform session, Milind Naphade, SVP of Technology, AI Foundations at Capital One, offered best practices and lessons learned from real-world experiments and applications for deploying and scaling an agentic workflow.

Capital One, committed to staying at the forefront of emerging technologies, recently launched a production-grade, state-of-the-art multi-agent AI system to enhance the car-buying experience. In this system, multiple AI agents work together to not only provide information to the car buyer, but to take specific actions based on the customer’s preferences and needs. For example, one agent communicates with the customer. Another creates an action plan based on business rules and the tools it is allowed to use. A third agent evaluates the accuracy of the first two, and a fourth agent explains and validates the action plan with the user. With over 100 million customers and a wide range of other potential Capital One use cases, the agentic system is built for scale and complexity.
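
The division of labor described above maps naturally onto one function per role plus a thin orchestrator. The sketch below is a hypothetical rendering of that shape in Python; the role names, business rules, and data structures are assumptions for illustration, not Capital One’s implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four-role workflow described above; names,
# data shapes, and control flow are assumptions, not Capital One's system.

@dataclass
class Plan:
    steps: list[str] = field(default_factory=list)

def communicator(customer_message: str) -> str:
    """Agent 1: talks with the customer and extracts what they want."""
    return customer_message.strip().lower()

def planner(intent: str, allowed_tools: list[str]) -> Plan:
    """Agent 2: builds an action plan from business rules and permitted tools."""
    if "test drive" in intent:
        return Plan(steps=["check_inventory", "schedule_test_drive"])
    return Plan(steps=["route_to_human_agent"])

def evaluator(plan: Plan, allowed_tools: list[str]) -> bool:
    """Agent 3: checks the plan against policy (here: only sanctioned tools)."""
    return all(step in allowed_tools or step == "route_to_human_agent" for step in plan.steps)

def explainer(plan: Plan) -> str:
    """Agent 4: explains and validates the action plan with the user."""
    return "I will: " + ", ".join(plan.steps) + ". Shall I proceed?"

if __name__ == "__main__":
    tools = ["check_inventory", "schedule_test_drive"]
    intent = communicator("I'd like to book a test drive this weekend")
    plan = planner(intent, tools)
    if evaluator(plan, tools):
        print(explainer(plan))
```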

“When we think of improving the customer experience, delighting the customer, we think of, what are the ways in which that can happen?” Naphade said. “Whether you’re opening an account or you want to know your balance or you’re trying to make a reservation to test a vehicle, there are a bunch of things that customers want to do. At the heart of this, very simply, how do you understand what the customer wants? How do you understand the fulfillment mechanisms at your disposal? How do you bring all the rigors of a regulated entity like Capital One, all the policies, all the business rules, all the constraints, regulatory and otherwise?”

Agentic AI was clearly the next step, he said, for internal as well as customer-facing use cases.

Designing an agentic workflow

Financial institutions have particularly stringent requirements when designing any workflow that supports customer journeys. And Capital One’s applications include a number of complex processes as customers raise issues and queries leveraging conversational tools. These two factors made the design process especially complex, requiring a holistic view of the entire journey — including how both customers and human agents respond, react, and reason at every step.

“When we looked at how humans do reasoning, we were struck by a few salient facts,” Naphade said. “We saw that if we designed it using multiple logical agents, we would be able to mimic human reasoning quite well. But then you ask yourself, what exactly do the different agents do? Why do you have four? Why not three? Why not 20?”

They studied customer experiences in historical data: where those conversations go right, where they go wrong, how long they should take, and other salient facts. They learned that it often takes multiple turns of conversation with an agent to understand what the customer wants, and that any agentic workflow needs to plan for that while remaining completely grounded in an organization’s systems, available tools, APIs, and organizational policy guardrails.

“The main breakthrough for us was realizing that this had to be dynamic and iterative,” Naphade said. “If you look at how a lot of people are using LLMs, they’re slapping the LLMs as a front end to the same mechanism that used to exist. They’re just using LLMs for classification of intent. But we realized from the beginning that that was not scalable.”
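
A minimal sketch of the contrast Naphade draws, under invented names: a one-shot intent classifier bolted onto the front end versus an iterative loop that keeps clarifying until the request can be grounded in an available tool.

```python
# Hypothetical contrast of the two approaches described above. Tool names,
# the classifier, and the clarification strategy are all invented.

AVAILABLE_TOOLS = {"check_balance", "open_account", "schedule_test_drive"}

def classify_intent_once(message: str) -> str | None:
    """The 'LLM as a front end for intent classification' pattern: one shot, no follow-up."""
    for tool in AVAILABLE_TOOLS:
        if tool.replace("_", " ") in message.lower():
            return tool
    return None

def resolve_intent_iteratively(ask_user, first_message: str, max_turns: int = 3) -> str | None:
    """Multi-turn resolution: keep clarifying until the intent maps to a real tool."""
    message = first_message
    for _ in range(max_turns):
        intent = classify_intent_once(message)
        if intent is not None:
            return intent
        message = ask_user("Could you tell me a bit more about what you need?")
    return None
```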

Taking cues from existing workflows

Based on their intuition of how human agents reason while responding to customers, researchers at Capital One developed a framework in which a team of expert AI agents, each with different expertise, comes together to solve a problem.

Additionally, Capital One incorporated robust risk frameworks into the development of the agentic system. Because Capital One is a regulated institution, Naphade noted, independent oversight is layered on top of its internal risk mitigation protocols and frameworks. “Within Capital One, to manage risk, other entities that are independent observe you, evaluate you, question you, audit you,” Naphade said. “We thought that was a good idea for us, to have an AI agent whose entire job was to evaluate what the first two agents do based on Capital One policies and rules.”

The evaluator determines whether the earlier agents were successful; if not, it rejects the plan and asks the planning agent to correct its results based on the evaluator’s judgment of where the problem lies. This repeats iteratively until an appropriate plan is reached. It has also proven to be a huge boon to the company’s agentic AI approach.

“The evaluator agent is … where we bring a world model. That’s where we simulate what happens if a series of actions were to be actually executed. That kind of rigor, which we need because we are a regulated enterprise – I think that’s actually putting us on a great sustainable and robust trajectory. I expect a lot of enterprises will eventually go to that point.”
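
Taken together, the last two points describe a propose-simulate-evaluate loop. The sketch below is a hypothetical rendering of that control flow with invented action names and a toy world model; it shows only the shape of the iteration, not Capital One’s system.

```python
# Hypothetical propose-simulate-evaluate loop: a planner proposes, a world
# model simulates the proposed actions, and the evaluator rejects the plan
# with feedback until it passes. All names and rules are assumptions.

def plan(goal: str, feedback: str | None) -> list[str]:
    """Planning agent: propose a sequence of actions, revised using feedback."""
    actions = ["pull_credit_policy", "schedule_test_drive"]
    if feedback == "missing_customer_consent":
        actions.insert(0, "confirm_customer_consent")
    return actions

def simulate(actions: list[str]) -> dict:
    """World model: estimate what would happen if the actions actually ran."""
    return {"customer_consented": "confirm_customer_consent" in actions}

def evaluate(actions: list[str]) -> str | None:
    """Evaluator agent: return None if the plan is acceptable, else a reason."""
    outcome = simulate(actions)
    return None if outcome["customer_consented"] else "missing_customer_consent"

def approved_plan(goal: str, max_rounds: int = 5) -> list[str]:
    """Iterate until the evaluator accepts a plan or the round budget is spent."""
    feedback = None
    for _ in range(max_rounds):
        candidate = plan(goal, feedback)
        feedback = evaluate(candidate)
        if feedback is None:
            return candidate
    raise RuntimeError("no acceptable plan found")
```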

The technical challenges of agentic AI

Agentic systems need to work with fulfillment systems across the organization, all with a variety of permissions. Invoking tools and APIs within a variety of contexts while maintaining high accuracy was also challenging — from disambiguating user intent to generating and executing a reliable plan.
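
One common way to handle the permissions problem described here is to register each fulfillment tool with the permissions it requires and check the calling agent’s grants before any invocation. The sketch below illustrates that pattern with invented tool names and permission strings.

```python
from typing import Callable

# Hypothetical permission-scoped tool registry: each tool declares the
# permissions it requires, checked against the agent's grants before calling.

TOOL_REGISTRY: dict[str, tuple[set[str], Callable[..., str]]] = {
    "read_balance": ({"accounts.read"}, lambda account_id: f"balance for {account_id}"),
    "close_account": ({"accounts.read", "accounts.write"}, lambda account_id: f"closed {account_id}"),
}

def invoke(tool_name: str, agent_permissions: set[str], **kwargs) -> str:
    """Invoke a registered tool only if the agent holds every required permission."""
    required, fn = TOOL_REGISTRY[tool_name]
    if not required <= agent_permissions:
        raise PermissionError(f"{tool_name} needs {required - agent_permissions}")
    return fn(**kwargs)

# Example: a read-only agent can check a balance but not close the account.
print(invoke("read_balance", {"accounts.read"}, account_id="A-123"))
```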

“We have multiple iterations of experimentation, testing, evaluation, human-in-the-loop, all the right guardrails that need to happen before we can actually come into the market with something like this,” Naphade said. “But one of the biggest challenges was we didn’t have any precedent. We couldn’t go and say, oh, somebody else did it this way. How did that work out? There was that element of novelty. We were doing it for the first time.”

Model selection and partnering with NVIDIA

In terms of models, Capital One is keenly tracking academic and industry research, presenting at conferences and staying abreast of what’s state of the art. In the present use case, they used open-weights models, rather than closed, because that allowed them significant customization. That’s critical to them, Naphade asserts, because competitive advantage in AI strategy relies on proprietary data.

In the technology stack itself, they use a combination of tools, including in-house technology, open-source tool chains, and NVIDIA’s inference stack. Working closely with NVIDIA has helped Capital One get the performance it needs, collaborate on industry-specific opportunities in NVIDIA’s library, and prioritize features for the Triton inference server and TensorRT-LLM.

Agentic AI: Looking ahead

Capital One continues to deploy, scale, and refine AI agents across their business. Their first multi-agentic workflow was Chat Concierge, deployed through the company’s auto business. It was designed to support both auto dealers and customers with the car-buying process. And with rich customer data, dealers are identifying serious leads, which has improved their customer engagement metrics significantly — up to 55% in some cases.

“They’re able to generate much better serious leads through this natural, easier, 24/7 agent working for them,” Naphade said. “We’d like to bring this capability to [more] of our customer-facing engagements. But we want to do it in a well-managed way. It’s a journey.”



Houthis Say They Hit Red Sea Ship in First Attack This Year

Yemen’s Houthis have claimed responsibility for an attack on a ship sailing through the Red Sea on Sunday, in their first strike on merchant shipping since December.



Andrew Left Set to Appear in Court in Push to Toss Fraud Charges

Activist short seller Andrew Left is set to appear in federal court Monday in a securities fraud case that shook the industry a year ago, setting up a battle over investor tips and free speech.


