
Zypher Research – Zypher Network builds decentralized trust infrastructure for AI agents, toward agentic autonomy’s SSL moment

This article is a submission and does not represent the views of ChainCatcher, nor does it constitute investment advice.

We are pleased to announce that Zypher Network has completed a $7 million financing round to advance our development of decentralized trust infrastructure for AI agents. This round was co-led by UOB Risk Management and Signum Capital, with participation from several institutions including HashKey Capital, Hong Leong Group, Cogitent Ventures, Catcher VC, Hydrogenesis Labs, and DWF Ventures.

Autonomous agents have arrived, but can we trust them?

From OpenAI’s Operator to Nvidia’s Eureka, we are witnessing a historic transformation in software: autonomous agents capable of independent thought, reasoning, and action. These systems are being widely deployed in finance, supply chains, legal services, and infrastructure. By 2035, agentic systems are expected to generate between $250 billion and $350 billion in economic value and to reshape more than $10 trillion of the “on-demand services” industry.

However, autonomy without accountability is a dangerous proposition. Current agent systems resemble opaque black boxes, with behaviors that cannot be verified and logic that is inaccessible. In sensitive environments, this lack of transparency poses existential risks—from financial decision-making errors to privacy breaches and regulatory non-compliance.

The internet of the 1990s faced a similar crisis. Without encryption or endpoint verification, trust was extremely fragile. Subsequently, SSL emerged as a standardized layer for secure communication. We believe that the current agent systems are approaching their own SSL moment—cryptographic verifiability will become foundational. Zypher Network is building this trust layer for AI.

Our story: From verifiable applications to agentic infrastructure

Zypher was founded in 2023 by a team spread across Hong Kong and Silicon Valley. We initially built the first verifiable applications on zero-knowledge (ZK) co-processors, including on-chain reasoning engines, compliance circuits, and award-winning fully on-chain games. These early products attracted over 1 million on-chain participants and showcased the potential of real-time, proof-driven computation.

But we are focused on the future. We asked ourselves: what are the most pressing, highest-impact applications of the next decade? The answer was clear: AI agents. Systems built on large language models are evolving into autonomous actors, yet no cryptographic framework holds them accountable. We committed our mission to this problem, pivoted to verifiable AI infrastructure, and officially launched Zypher Network, developing a suite of zero-knowledge protocols for agent trust. In 2025, we expanded this vision with a $7 million financing round, backed by investors who share our long-term view.

Challenges: Trust barriers for decentralized AI

The pace of AI development is outstripping its governance frameworks and institutions. Agents are executing workflows in finance, healthcare, legal services, and logistics, yet most systems lack visibility and auditability. Since 2023, AI agents based on large language models have reshaped the computing paradigm, enabling agents to perform complex tasks with minimal supervision.

An industry survey in 2024 found that 65% of companies adopting AI agents list trust and security as their top concerns, with finance and asset management facing the highest risks. Without decentralized solutions, companies fall back on centralized systems, creating single points of failure and violating Web3 principles. Zypher uses cryptography to guarantee agent behavior, making it possible to verify:

  • The exact system prompt or instructions an agent received.
  • The outputs or reasoning results it generated.
  • That those results were not modified and were faithfully transmitted.

This achieves verifiable autonomy: agents remain independent while being accountable to the systems and users that depend on them. A minimal sketch of the underlying commitment idea follows.
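
To make this concrete, here is a minimal sketch (in Python, and emphatically not Zypher’s actual protocol) of how a prompt/response pair can be bound together with a hash commitment and an agent signature so that any later modification is detectable. zkPrompt, described below, replaces the plain signature check with a zero-knowledge proof so that the check can be performed without exposing the underlying data.

```python
# Minimal integrity sketch (illustrative only; not Zypher's zkPrompt circuit).
# Assumes the third-party `cryptography` package is installed.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def commit(prompt: str, response: str) -> bytes:
    """Hash commitment binding a prompt to the response it produced."""
    return hashlib.sha256(prompt.encode() + b"\x00" + response.encode()).digest()

# The agent signs the commitment; anyone holding the public key can check that
# the (prompt, response) pair was not modified in transit.
agent_key = Ed25519PrivateKey.generate()
prompt, response = "Rebalance portfolio per strategy S", "Moved 5% from asset A to B"
signature = agent_key.sign(commit(prompt, response))

def verify(public_key, prompt: str, response: str, signature: bytes) -> bool:
    try:
        public_key.verify(signature, commit(prompt, response))
        return True
    except InvalidSignature:
        return False

assert verify(agent_key.public_key(), prompt, response, signature)
assert not verify(agent_key.public_key(), prompt, "tampered output", signature)
```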

Our infrastructure: End-to-end verifiability for autonomous AI

Zypher’s architecture combines a modular zero-knowledge protocol stack with Zytron, a high-performance rollup optimized for real-time verification of decentralized AI agents.

● ZKP-based trust solutions
Our open-source ZKP protocol ensures that AI prompts and reasoning are tamper-proof and privacy-preserving. The flagship solution “Prompt Proof” (zkPrompt) is a groundbreaking innovation. Inspired by zkTLS, zkPrompt employs a Prover-Proxy-LLM architecture to verify AI outputs.

  • The Prover generates ZK proofs confirming the integrity of the agent’s response.
  • The agent’s signature mechanism supports on-chain verification.

Compared to frameworks like ezkl, zkPrompt reduces proof generation time by up to 40%, making it suitable for real-time applications. For example, a DeFi protocol can use zkPrompt to prove that an agent’s portfolio management decisions comply with a predefined strategy without revealing the strategy. In asset management, zkPrompt ensures compliance while protecting sensitive data.

Our ZKP suite also includes reasoning verification and model integrity protocols, providing a comprehensive trust layer for AI. We embed this into an open-source RESTful API, allowing developers to seamlessly integrate trust features, whether for Web3 native tools or enterprise solutions.
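
As an illustration of what such an integration could look like, the snippet below calls a hypothetical proving service over REST. The base URL, routes, and field names are placeholders invented for this sketch rather than Zypher’s published API; the point is only that requesting and retrieving a proof can be a pair of ordinary HTTP calls.

```python
# Hypothetical REST integration sketch; endpoint paths and payload fields are
# placeholders, not Zypher's documented API.
import requests

BASE_URL = "https://proving-node.example.com"  # placeholder node URL

def request_prompt_proof(agent_id: str, prompt: str, response: str) -> dict:
    """Ask the proving service to generate a zkPrompt-style proof for one interaction."""
    r = requests.post(
        f"{BASE_URL}/v1/proofs",  # hypothetical route
        json={"agent_id": agent_id, "prompt": prompt, "response": response},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()  # e.g. {"proof_id": "...", "status": "pending"}

def fetch_proof(proof_id: str) -> dict:
    """Poll for the finished proof and its on-chain verification handle."""
    r = requests.get(f"{BASE_URL}/v1/proofs/{proof_id}", timeout=30)
    r.raise_for_status()
    return r.json()
```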

● Zytron Rollup
Zytron is our dedicated rollup infrastructure, a Layer 2 for BNB Chain optimized for AI workloads with high throughput, low latency, and robust security. Compatible with the RISC-V architecture, Zytron supports modular integration with a range of AI frameworks, from large language models to specialized models.

Key components of Zytron include:

  • Distributed proof protocol: Based on a “verifiable proof of work” mechanism, a decentralized network of nodes (“provers”) computes ZK proofs that guarantee computational integrity. Unlike energy-intensive proof-of-work, our system emphasizes efficiency and fair reward distribution.
  • Computational resource integration: Zytron connects distributed computing resources, supporting AI model hosting, sharing, inference, and fine-tuning, giving developers access to scalable infrastructure.
  • API and proof layer: An API layer provides seamless model access, and a proof layer verifies computations, ensuring privacy and resistance to censorship.

Zytron addresses scalability challenges by processing thousands of proofs simultaneously, avoiding traditional blockchain bottlenecks. For instance, payment systems can use our API to validate AI-driven transactions in near real-time under high demand. Leveraging the security of BNB Chain, Zytron ensures tamper-proof verification, establishing trust for high-risk AI applications.
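
The claim, prove, and submit loop at the heart of such a distributed proof network can be sketched as follows. The in-memory queue and the hash-based “proof” are toy stand-ins for Zytron’s real task distribution and ZK proving backend; they only illustrate the flow of work through a prover node.

```python
# Toy, self-contained sketch of a prover node's work loop; not Zytron's protocol.
import hashlib
import queue

tasks: queue.Queue = queue.Queue()
tasks.put({"id": 1, "statement": b"agent-42 produced output O for prompt P"})
submitted = []

def prove(statement: bytes) -> bytes:
    # Placeholder for real ZK proof generation (e.g. a SNARK over the statement).
    return hashlib.sha256(statement).digest()

def run_prover_once(worker_id: str) -> None:
    try:
        task = tasks.get_nowait()        # claim a pending proof job
    except queue.Empty:
        return                           # nothing to prove right now
    proof = prove(task["statement"])     # heavy computation happens off-chain
    submitted.append({"task": task["id"], "worker": worker_id, "proof": proof})

run_prover_once("prover-node-1")
print(submitted)  # rewards would be paid out in proportion to accepted proofs
```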

● More about Prompt Proof: a zkTLS-like protocol
zkPrompt is Zypher’s flagship protocol, inspired by zkTLS, based on a lightweight zk circuit system that proves the integrity of LLM agent outputs. It introduces four roles:

  • User: Initiates requests.
  • Prover: Processes transmissions and delivers proofs.
  • Agent: Relays data and signs ciphertext.
  • LLM Provider: Generates agent responses.

To prevent tampering, zkPrompt uses agent-based signature verification and efficient ZK decryption correctness proofs. The prover must demonstrate that the returned plaintext matches the agent’s signature and the original content generated by the LLM provider.
The system supports two modes:

  • Public mode: prompt and response data are posted on-chain.
  • Private mode: only hash commitments are recorded, protecting privacy.

Compared to ezkl, zkPrompt is optimized for real-time verification, achieving minute-level proof generation for GPT-class models with minimal overhead. A toy sketch of the two recording modes follows.
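
To illustrate the difference between the two modes, here is a toy sketch in which a plain dictionary stands in for on-chain storage. The field layout and the commitment scheme are illustrative assumptions, not zkPrompt’s actual on-chain format.

```python
# Toy comparison of public vs. private recording; a dict stands in for a contract.
import hashlib
import json

def record_public(on_chain: dict, prompt: str, response: str) -> None:
    """Public mode: prompt and response are stored in the clear."""
    on_chain["record"] = {"prompt": prompt, "response": response}

def record_private(on_chain: dict, prompt: str, response: str) -> None:
    """Private mode: only hash commitments are stored; plaintext stays off-chain."""
    on_chain["record"] = {
        "prompt_commitment": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_commitment": hashlib.sha256(response.encode()).hexdigest(),
    }

chain_pub, chain_priv = {}, {}
record_public(chain_pub, "Summarize contract X", "Clause 4 caps liability at ...")
record_private(chain_priv, "Summarize contract X", "Clause 4 caps liability at ...")
print(json.dumps(chain_pub, indent=2))   # full plaintext is publicly visible
print(json.dumps(chain_priv, indent=2))  # only commitments are visible
```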

● AI Secure Browser: An SSL moment for everyday users
For the average user, we designed the AI Secure Browser—a user-friendly real-time monitoring tool that brings cryptographic integrity directly to the application layer. Just as the padlock symbol and HTTPS standard established trust in the early internet, the AI Secure Browser makes verifiability visible and accessible to end users.

This browser acts as a security overlay on any AI interface, providing:

  • Proof-based trust indicators: Each agent interaction comes with a visual marker indicating whether its output has been cryptographically verified against the original prompt via zkPrompt.
  • Anomaly alerts: Detects tampered outputs, unauthorized queries, or unverified completions, and alerts the user immediately (see the sketch after this list).
  • Verifiable interaction logs: Users can explore the history of agent interactions, viewing the time, manner, and conditions under which responses were generated, all supported by zero-knowledge proofs.
  • Privacy-first operations: Users have complete control over disclosed information. For sensitive use cases, proof commitments can be published without revealing prompt content.
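
The badge logic behind those indicators can be imagined roughly as below. The record fields and status labels are illustrative placeholders, not the AI Secure Browser’s actual data model; they simply show how verification results could drive what the user sees.

```python
# Toy sketch of a per-interaction trust indicator; fields and labels are invented.
from dataclasses import dataclass

@dataclass
class InteractionRecord:
    prompt_commitment: str     # hash commitment of the prompt
    response_commitment: str   # hash commitment of the response
    proof_verified: bool       # did the zkPrompt proof check out?
    signature_valid: bool      # did the agent's signature verify?

def trust_badge(record: InteractionRecord) -> str:
    if record.proof_verified and record.signature_valid:
        return "VERIFIED"      # show the padlock-style indicator
    if not record.signature_valid:
        return "TAMPERED"      # output no longer matches what the agent signed
    return "UNVERIFIED"        # no proof yet; warn before trusting the output

print(trust_badge(InteractionRecord("ab12...", "cd34...", True, True)))   # VERIFIED
print(trust_badge(InteractionRecord("ab12...", "cd34...", True, False)))  # TAMPERED
```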

To make trust participatory, the browser also integrates our proof mining incentive layer:

  • Users earn GEM points and collectible NFTs by verifying agent outputs.
  • Active participants build trust by flagging anomalies, verifying cryptographic claims, and enhancing decentralized AI security.

In short, the AI Secure Browser is to autonomous agents what SSL is to the web: a foundation of integrity at the user level. It brings transparency to previously opaque systems, providing real-time assurance to every user, from DAO contributors to legal teams and financial operators.

Combined with our developer stack, the AI Secure Browser completes Zypher’s vision of a full-stack trust layer for verifiable AI, enabling both system creators and consumers to operate securely and confidently.

● Our roadmap: Expanding AI trust
This $7 million financing propels our ambitious roadmap:

  • Team expansion: Expanding engineering and research teams in Hong Kong, Silicon Valley, and globally to drive cutting-edge cryptography and AI innovation.
  • Infrastructure upgrades: Optimizing Zytron rollup and ZKP protocols to support large-scale AI deployments, improving proof generation efficiency and enhancing model compatibility.
  • Developer and ecosystem growth: Launching new incentive programs and tools to empower developers to use our API and zkPrompt suite across DeFi, customer support, and robotics.
  • Token launch preparation: Preparing a token launch, building on strong ecosystem traction, to further incentivize participation and decentralization.
  • Deepening protocol integrations: Our trust layer already integrates with leading protocols; we will continue expanding cross-chain interoperability and improving the developer experience.

Our vision is to make Zypher the cornerstone of trusted AI in Web3, enabling developers to create applications that rival centralized systems while upholding decentralized principles.

● Our community: The heart of Zypher
Our community is our greatest asset. Currently, over 1 million on-chain participants have joined our journey through ecosystem programs, API testing, and collaborative activities.

This momentum is also supported by a growing network of strategic partners. Zypher’s trust layer has integrated with leading protocols such as Eliza OS, Mira Network, io.Net, Nexus, Risc Zero, EigenLayer, Particle Network, Fermah, Polyhedra, and ZeroBase, paving the way for interoperability, developer onboarding, and real-world use cases. We take pride in collaborating with the most forward-thinking protocols and developers in Web3. The community has always been at the core of our mission: to make AI trustworthy, transparent, and verifiable for everyone.

In the coming months, we will launch large-scale social and proof mining activities to incentivize decentralized agent verification. These initiatives have already attracted early participation from key agent providers in the network.

About Zypher Network

Zypher Network is a decentralized trust platform that achieves verifiable autonomy for AI agents through zero-knowledge protocols and its dedicated rollup infrastructure, Zytron. Zypher operates in Hong Kong and Silicon Valley, empowering developers and enterprises to build secure, scalable AI systems for Web3-native and real-world applications.






Accelerating discovery: The NVIDIA H200 and the transformation of university research

The global research landscape is undergoing a seismic shift. Universities worldwide are deploying NVIDIA’s H200 Tensor Core GPUs to power next-generation AI Factories, SuperPODs, and sovereign cloud platforms. This isn’t a theoretical pivot; it’s a real-time transformation redefining what’s possible in scientific discovery, medicine, climate analysis, and advanced education delivery.

The H200 is the most powerful GPU currently available to academia, delivering the performance required to train foundational models, run real-time inference at scale, and enable collaborative AI research across institutions. And with NVIDIA’s Blackwell-based B200 on the horizon, universities investing in H200 infrastructure today are setting themselves up to seamlessly adopt future architectures tomorrow.

Universities powering the AI revolution

This pivotal shift isn’t a future promise but a present reality. Forward-thinking institutions worldwide are already integrating the H200 into their research ecosystems.

Institutions leading the charge include:

  • Oregon State University and Georgia Tech in the US, deploying DGX H200 and HGX clusters.
  • Taiwan’s NYCU and University of Tokyo, pushing high-performance computing boundaries with DGX and GH200-powered systems.
  • Seoul National University, gaining access to a GPU network of over 4,000 H200 units.
  • Eindhoven University of Technology in the Netherlands, preparing to adopt DGX B200 infrastructure.

In Taiwan, national programs like NCHC are also investing in HGX H200 supercomputing capacity, making cutting-edge AI infrastructure accessible to researchers at scale.

Closer to home, La Trobe University is the first in Australia to deploy NVIDIA DGX H200 systems. This investment underpins the creation of ACAMI — the Australian Centre for Artificial Intelligence in Medical Innovation — a world-first initiative focused on AI-powered immunotherapies, med-tech, and cancer vaccine development.

It’s a leap that’s not only bolstering research output and commercial partnerships but also positioning La Trobe as a national leader in AI education and responsible deployment.

Universities like La Trobe are establishing themselves as part of a growing global network of AI research precincts, from Princeton’s open generative AI initiative to Denmark’s national AI supercomputer, Gefion. The question for others is no longer “if”, but “how fast?”

Redefining the campus: How H200 AI infrastructure transforms every discipline

The H200 isn’t just for computer science. Its power is unlocking breakthroughs across:

  • Climate science: hyper-accurate modelling for mitigation and prediction
  • Medical research: from genomics to diagnostics to drug discovery
  • Engineering and material sciences: AI-optimised simulations at massive scale
  • Law and digital ethics: advancing policy frameworks for responsible AI use
  • Indigenous language preservation: advanced linguistic analysis and voice synthesis
  • Adaptive education: AI-driven, personalised learning pathways
  • Economic modelling: dynamic forecasts and decision support
  • Civic AI: real-time, data-informed public service improvements

AI infrastructure is now central to the entire university mission — from discovery and education to innovation and societal impact.

Positioning Australia in the global AI race

La Trobe’s deployment is more than a research milestone — it supports the national imperative to build sovereign AI capability. Australian companies like Sharon AI and ResetData are also deploying sovereign H200 superclusters, now accessible to universities via cloud or direct partnerships.

Universities that move early unlock more than infrastructure. They strengthen research impact, gain eligibility for key AI grants, and help shape Australia’s leadership on the global AI stage.

NEXTDC’s indispensable role: The foundation for AI innovation

Behind many of these deployments is NEXTDC, Australia’s data centre leader and enabler of sovereign, scalable, and sustainable AI infrastructure.

NEXTDC is already:

  • Hosting Sharon AI’s H200 supercluster in Melbourne in a high-density, DGX-certified, liquid-cooled facility
  • Delivering ultra-low latency connectivity via the AXON fabric — essential for orchestrating federated learning, distributed training, and multi-institutional research
  • Offering rack-ready infrastructure for up to 600kW+, with liquid and immersion cooling on the roadmap
  • Enabling cross-border collaboration with facilities across every Australian capital and proximity to international subsea cable landings

The cost of inaction: Why delay is not an option in the AI race

The global AI race is accelerating fast, and for university leaders, the risk of falling behind is real and immediate. Hesitation in deploying advanced AI infrastructure could lead to lasting disadvantages across five critical areas:

  • Grant competitiveness: Top-tier research funding increasingly requires access to state-of-the-art AI compute platforms.
  • Research rankings: Leading publication output and global standing rely on infrastructure that enables high-throughput, data-intensive AI research.
  • Talent attraction: Students want practical experience with cutting-edge tools. Institutions that can’t provide this will struggle to attract top talent.
  • Faculty recruitment: The best AI researchers will favour universities with robust infrastructure that supports their work.
  • Innovation and commercialisation: Without high-performance GPUs, universities risk slowing their ability to generate start-ups, patents, and economic returns.

Global counterparts are already deploying H100/H200 infrastructure and launching sovereign AI programs. The infrastructure gap is widening fast.

Now is the time to act—lead, don’t lag. The universities that invest today won’t just stay competitive. They’ll define the future of AI research and discovery.


What this means for your institution

For Chancellors, Deans, CTOs and CDOs, the message is clear: the global AI race is accelerating. Delay means risking:

  • Lower grant competitiveness
  • Declining global research rankings
  • Talent loss among students and faculty
  • Missed innovation and commercialisation opportunities

The infrastructure gap is widening — and it won’t wait.

Ready to lead?

The universities that act now will shape the future. Whether it’s training trillion-parameter LLMs, powering breakthrough medical research, or leading sovereign AI initiatives, H200-grade infrastructure is the foundation.

NEXTDC is here to help you build it.






Avalara unveils AI assistant Avi to simplify complex tax research

Avalara has announced the launch of Avi for Tax Research, a generative AI assistant embedded within Avalara Tax Research (ATR), aimed at supporting tax and trade professionals with immediate, reliable responses to complex tax law queries.

Avi for Tax Research draws on Avalara’s extensive library of tax content to provide users with rapid, comprehensive answers regarding the tax status of products, audit risk, and precise sales tax rates for specific addresses.

Capabilities outlined

The AI assistant offers several features designed to streamline the workflows of tax and trade professionals.

Among its core capabilities, Avi for Tax Research allows users to instantly verify the taxability of products and services through straightforward queries. The tool delivers responses referencing Avalara’s comprehensive tax database, aiming to ensure both speed and reliability in answering enquiries.

Additional support includes access to up-to-date official guidance to help mitigate audit risks and reinforce defensible tax positions. By providing real-time insights, professionals can proactively adapt to changes in tax regulations without needing to perform extensive manual research.

For businesses operating across multiple locations, Avi for Tax Research enables the generation of precise, rooftop-level sales tax rates tailored to individual street addresses, which can improve compliance accuracy to the level of local jurisdiction requirements.

Designed for ease of use

The assistant is built with an intuitive conversational interface intended to be accessible to professionals across departments, including those lacking a formal tax background.

According to Avalara, this functionality should help improve operational efficiency and collaboration by reducing the skills barrier usually associated with tax research.

Avalara’s EVP and Chief Technology Officer, Danny Fields, described the new capabilities in the context of broader industry trends.

“The tax compliance industry is at the dawn of unprecedented innovation driven by rapid advancements in AI,” said Danny Fields, EVP and Chief Technology Officer of Avalara. “Avalara’s technology mission is to equip customers with reliable, intuitive tools that simplify their work and accelerate business outcomes.”

The company attributes Avi’s capabilities to its two decades of tax and compliance experience, which inform the AI’s underlying content and context-specific decision making. By making use of Avalara’s metadata, the solution is intended to shorten the time spent on manual analysis, offering instant and trusted answers to user questions and potentially allowing compliance teams to allocate more time to business priorities.

Deployment and access

The tool is available immediately to existing ATR customers without additional setup.

New customers have the opportunity to explore Avi for Tax Research through a free trial, which Avalara states is designed to reduce manual effort and deliver actionable information for tax research. Customers can use the AI assistant to submit tax compliance research questions and receive instant responses tailored to their requirements.

Avalara delivers technology aimed at supporting over 43,000 business and government customers across more than 75 countries, providing tax compliance solutions that integrate with leading eCommerce, ERP, and billing systems.

The release of Avi for Tax Research follows continued developments in AI applications for business compliance functions, reflecting the increasing demand for automation and accuracy in global tax and trade environments.





Tenable Research Warns of Critical AI Tool Vulnerability That Requires Immediate Attention [CVE-2025-49596]

GUEST RESEARCH:  Tenable Research has identified a critical remote code execution vulnerability (CVE-2025-49596) in Anthropic’s widely adopted MCP Inspector, an open-source tool crucial for AI development. With a CVSS score of 9.4, this flaw leverages default, insecure configurations, leaving organisations exposed by design. MCP Inspector is a popular tool with over 38,000 weekly downloads on npmjs and more than 4,000 stars on GitHub.

Exploitation is alarmingly simple. A visit to a malicious website can fully compromise a workstation, requiring no further user interaction. Attackers can gain persistent access, steal sensitive data, including credentials and intellectual property, and enable lateral movement or deploy malware.

“Immediate action is non-negotiable”, says Rémy Marot, Staff Research Engineer at Tenable. “Security teams and developers should upgrade MCP Inspector to version 0.14.1 or later. This update enforces authentication, binds services to localhost, and restricts trusted origins, closing critical attack vectors. Prioritise robust security policies before deploying AI tools to mitigate these inherent risks.”

For in-depth information about this research, please refer to the detailed blog post published by Tenable’s Research Team.
