
Intuition: Rebuilding the Internet for the AI Agent Era


This report was written by Tiger Research, analyzing Intuition’s approach to rebuilding web infrastructure for the agentic AI era through atom-based knowledge structuring, token-curated registries for standard consensus, and signal-based trust measurement systems.


Key Takeaways

  • The agentic AI era has arrived, but AI agents cannot perform to their full potential: today’s web infrastructure targets human readers, websites use inconsistent data formats, and information remains unverified. This makes it hard for agents to understand and process data.

  • Intuition evolves the Semantic Web’s vision through Web3 approaches, addressing its historical limitations. The system structures knowledge into Atoms, uses Token Curated Registries (TCRs) to reach consensus on which data to use, and uses Signals to determine how much to trust that data.

  • Intuition could transform the web. The current web resembles unpaved roads, while Intuition creates highways where agents can operate safely, potentially becoming the new infrastructure standard and realizing the true capabilities of the agentic AI era.


1. The Agent Era Begins: Is the Web Infra Enough?

The era of agentic AI is gaining momentum. We can imagine a future where personal agents handle everything from travel planning to complex financial management. But in practice, things are not so simple. The issue is not with AI performance itself. The real limitation lies in today’s web infrastructure.

The web was built for humans to read and interpret through browsers. As a result, it is poorly suited for agents that need to parse semantics and connect relationships across data sources. These limitations are obvious in everyday services. An airline website may list a departure time as “14:30,” while a hotel site shows check-in as “2:30 PM.” Humans immediately understand both as the same time, but agents interpret them as entirely different data formats.

Challenges facing AI agents (Source: Tiger Research)

The issue goes beyond formatting differences. A critical challenge is whether agents can trust the data itself. Humans can work with incomplete information by relying on context and prior experience. Agents, by contrast, lack clear standards for assessing provenance or reliability. This leaves them vulnerable to false inputs, flawed conclusions, and even hallucinations.

In the end, even the most advanced agents cannot thrive in such conditions. They are like F1 cars: no matter how powerful, they cannot reach full speed on an unpaved road of unstructured data. And if misleading signs (unreliable data) are scattered along the route, they may never reach the finish line.

2. Web’s Technical Debt: Rebuilding the Foundation

This issue was first raised more than 20 years ago by Tim Berners-Lee, the creator of the World Wide Web, through his proposal for the Semantic Web.

The Semantic Web’s core idea is simple: structure web information so machines can understand it, not just as human-readable text. For example, “Tiger Research was founded in 2021” is clear to humans but appears as mere character strings to machines. The Semantic Web structures this as “Tiger Research (subject) – founded (predicate) – 2021 (object)” so machines can interpret the meaning.

This approach was ahead of its time but ultimately never came to be. The biggest reason was implementation challenges. Reaching consensus on data formats and usage standards proved difficult, and more importantly, building and maintaining vast datasets through voluntary user contributions was nearly impossible. Contributors received no direct rewards or benefits. Additionally, the question of whether the created data could be trusted remained an unsolved problem.

Nevertheless, the Semantic Web’s vision remains valid. The principle that machines should understand and utilize data at the semantic level hasn’t changed. In the AI era, this need has become even more critical.

3. Intuition: Reviving the Semantic Web in a Web3 Way


Intuition evolves the Semantic Web’s vision through Web3 approaches to address existing limitations. The core lies in creating a system that incentivizes users to voluntarily participate in accumulating and verifying quality structured data. This systematically builds knowledge graphs that are machine-readable, have clear provenance, and are verifiable. Ultimately, this provides the foundation for intelligent agents to operate reliably and brings us closer to the future we envision.

3.1. Atoms: Building blocks of Knowledge

Intuition starts by dividing all knowledge into minimal units called Atoms. Atoms represent concepts like people, dates, organizations, or attributes. Each has a unique identifier (using tech like Decentralized Identifiers, or DIDs) and exists independently. Every Atom records contributor information so you can verify who added what information and when.

The reason for breaking knowledge into Atoms is clear. Information typically comes in complex sentences, and machines like agents have structural limitations in parsing and understanding such composite information on their own. They also struggle to determine which parts are accurate and which are incorrect.

How Intuition Structures Knowledge

  • Subject: Tiger Research

  • Predicate: foundedIn

  • Object: 2021

Consider the sentence “Tiger Research was founded in 2021.” This could be true, or only parts might be wrong. Whether this organization actually exists, whether “founding date” is an appropriate attribute, and whether 2021 is correct each require individual verification. But treating the entire sentence as one unit makes it hard to distinguish which elements are accurate and which are false. Tracking the source of each piece of information becomes complex too.

Atoms solve this problem. Define each element as an independent Atom, such as [Tiger Research], [foundedIn], and [2021], and you can record sources and verify each element individually.


Atoms are not just tools for dividing information – they work like Lego blocks that can be combined. For example, the individual Atoms [Tiger Research], [foundedIn], and [2021] connect to form a Triple. This creates meaningful information: “Tiger Research was founded in 2021.” This follows the same structure as Triples in the Semantic Web’s RDF (Resource Description Framework).

These Triples can also become Atoms themselves. The Triple “Tiger Research was founded in 2021” can expand into a new Triple like “Tiger Research’s founding date of 2021 is based on business records.” Through this method, Atoms and Triples combine repeatedly, evolving from small units into larger structures.

The result is that Intuition builds fractal knowledge graphs that can expand infinitely from basic elements. Even complex knowledge can be broken down for verification and then recombined.
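To make this structure concrete, the sketch below shows how Atoms, Triples, and Triples-that-become-Atoms could be represented. The type names and fields are illustrative assumptions drawn from the description above, not Intuition’s actual data model or SDK.

```typescript
// Illustrative sketch of Atoms and Triples; names and fields are assumptions,
// not Intuition's actual data model.

type AtomId = string; // e.g. a decentralized identifier (DID)

interface Atom {
  id: AtomId;
  label: string;     // the concept, e.g. "Tiger Research"
  creator: string;   // who contributed this Atom
  createdAt: string; // when it was contributed
}

// A Triple links three Atoms: subject - predicate - object.
interface Triple {
  id: AtomId;        // a Triple can itself be referenced like an Atom
  subject: AtomId;
  predicate: AtomId;
  object: AtomId;
}

const tigerResearch: Atom = { id: "did:ex:tiger-research", label: "Tiger Research", creator: "0xContributor", createdAt: "2025-01-01" };
const foundedIn: Atom     = { id: "did:ex:foundedIn",      label: "foundedIn",      creator: "0xContributor", createdAt: "2025-01-01" };
const year2021: Atom      = { id: "did:ex:2021",           label: "2021",           creator: "0xContributor", createdAt: "2025-01-01" };

// "Tiger Research was founded in 2021"
const founding: Triple = {
  id: "did:ex:triple-founding",
  subject: tigerResearch.id,
  predicate: foundedIn.id,
  object: year2021.id,
};

// Because the Triple has its own identifier, a new Triple can be made about it:
// "Tiger Research's founding date of 2021 is based on business records."
const provenance: Triple = {
  id: "did:ex:triple-provenance",
  subject: founding.id,
  predicate: "did:ex:basedOn",
  object: "did:ex:business-records",
};
```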

3.2. Token Curated Registries: Market-Driven Consensus

If Intuition provides a conceptual framework for structuring knowledge through Atoms, three key questions now remain: Who will contribute to creating these Atoms? Which Atoms can be trusted? And when different Atoms compete to represent the same concept, which one becomes the standard?

How Atoms Work (Source: Intuition Lightpaper)

Intuition solves this problem through Token Curated Registries (TCRs). TCRs filter entries based on what the community values. Token staking reflects these judgments. Users stake TRUST, Intuition’s native token, when they propose new Atoms, Triples, or data structures. Other participants stake tokens on the supporting side if they find the proposal useful, or on the opposing side if they don’t. They can also stake on competing alternatives. Users earn rewards if their chosen data gets used frequently or receives high ratings. They lose part of their stake if not.

TCRs verify individual attestations, but they also solve the ontology standardization problem effectively. Ontology standardization means deciding which approach becomes the common standard when multiple ways exist to express the same concept. Distributed systems face the challenge of reaching this consensus without centralized coordination.

Consider two competing predicates for product reviews: [hasReview] and [customerFeedback]. If [hasReview] gets introduced first and many users build on it, early contributors own token stakes in that success. Meanwhile, [customerFeedback] supporters gain economic incentives to gradually switch to the more widely adopted standard.

This mechanism mirrors how the ERC-20 token standard gained natural adoption. Developers who adopted ERC-20 got clear compatibility benefits—direct integration with existing wallets, exchanges, and dApps. These advantages naturally drew developers to ERC-20. This showed that market-driven choices alone can solve standardization problems in distributed environments. TCRs work on similar principles. They reduce agents’ struggles with fragmented data formats and provide an environment where information can be understood and processed more consistently.
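The staking dynamic can be sketched in a few lines of code. The registry entry shape and the support calculation below are illustrative assumptions meant to show the mechanism, not Intuition’s actual contracts or tokenomics.

```typescript
// Illustrative sketch of a token-curated registry entry; not Intuition's actual contract logic.

interface Stake {
  staker: string;             // participant's address
  amount: number;             // TRUST tokens staked
  side: "for" | "against";
}

interface RegistryEntry {
  atomId: string;             // the proposed Atom, Triple, or data structure
  stakes: Stake[];
}

// The community's judgment is the balance of stakes on each side.
function support(entry: RegistryEntry): number {
  return entry.stakes.reduce(
    (sum, s) => sum + (s.side === "for" ? s.amount : -s.amount),
    0
  );
}

// When two entries compete to represent the same concept (e.g. [hasReview]
// vs [customerFeedback]), the one with more accumulated support becomes the
// de facto standard that later contributors have an incentive to adopt.
function preferredStandard(a: RegistryEntry, b: RegistryEntry): RegistryEntry {
  return support(a) >= support(b) ? a : b;
}
```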

3.3. Signal: Building Trust-Based Knowledge Networks

Intuition structures knowledge through Atoms and Triples and uses incentives to reach consensus on “what actually gets used.”

One last challenge remains: How much can we trust that information? Intuition introduces Signal to fill this gap. Signal expresses user trust or distrust toward specific Atoms or Triples. It goes beyond simply recording that data exists – it captures how much support the data receives in different contexts. Signal systematizes the social verification process we use in real life, like when we judge information based on “a reliable person recommended this” or “experts verified it.”

Signal accumulates in three ways. First, explicit signal involves intentional evaluations users make, like token staking. Second, implicit signal emerges naturally from usage patterns like repeated queries or applications. Finally, transitive signal creates relational effects—when someone I trust supports information, I tend to trust it more too. These three combine to create a knowledge network showing who trusts what, how much, and in what way.

Trust Graph (Source: Intuition Lightpaper)

Intuition provides this through Reality Tunnels. Reality Tunnels offer personalized perspectives for viewing data. Users can configure tunnels that prioritize expert group evaluations, value close friends’ opinions, or reflect specific community wisdom. Users can choose trusted tunnels or switch between multiple tunnels for comparison. Agents can also use specific interpretive approaches for particular purposes. For example, selecting a tunnel that reflects Vitalik Buterin’s trusted network would set an agent to interpret information and make decisions from “Vitalik’s perspective.”
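A rough sketch of how the three signal types and a Reality Tunnel could combine into a single trust score is shown below. The weights, the default trust for unknown sources, and the example identifiers are assumptions for illustration only; Intuition does not publish a scoring formula in this report.

```typescript
// Illustrative trust scoring over Signals; weights and formulas are assumptions.

interface Signal {
  source: string;                                   // who emitted the signal
  target: string;                                   // Atom or Triple being evaluated
  kind: "explicit" | "implicit" | "transitive";
  weight: number;                                   // e.g. tokens staked or usage count
}

// A Reality Tunnel is a perspective: how much the viewer trusts each source.
type RealityTunnel = Map<string, number>;           // source -> trust in [0, 1]

function trustScore(target: string, signals: Signal[], tunnel: RealityTunnel): number {
  const kindWeight = { explicit: 1.0, implicit: 0.5, transitive: 0.25 }; // assumed weights
  return signals
    .filter(s => s.target === target)
    .reduce((score, s) => {
      const sourceTrust = tunnel.get(s.source) ?? 0.1; // unknown sources count only a little
      return score + s.weight * kindWeight[s.kind] * sourceTrust;
    }, 0);
}

// Example: a tunnel that weights a particular expert's network heavily.
const expertTunnel: RealityTunnel = new Map([
  ["did:ex:vitalik", 1.0],
  ["did:ex:random-user", 0.2],
]);

const signals: Signal[] = [
  { source: "did:ex:vitalik",     target: "did:ex:triple-founding", kind: "explicit", weight: 100 },
  { source: "did:ex:random-user", target: "did:ex:triple-founding", kind: "implicit", weight: 10 },
];

console.log(trustScore("did:ex:triple-founding", signals, expertTunnel)); // 101
```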

All signals get recorded on-chain. Users can transparently verify why specific information seems trustworthy, which addresses serve as sources, who vouches for it, and how many tokens are staked. This transparent trust formation process lets users verify evidence directly rather than blindly accepting information. Agents can also use this foundation to make judgments that fit individual contexts and perspectives.

4. What If Intuition Becomes the Next Web Infra?

Intuition’s infrastructure is not just a conceptual idea but a practical solution that addresses problems agents face in today’s web environment.

How Intuition enables deterministic results

The current web is filled with fragmented data and unverified information. Intuition transforms data into deterministic knowledge graphs that give clear, consistent results to any query. Token-based signals and curation processes verify this data. Agents can make clear decisions without relying on guesswork. This simultaneously improves accuracy, speed, and efficiency.
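As a sketch of what “deterministic” means here, consider an agent querying the graph: the same triple pattern always resolves to the same highest-trust attestation, so two agents asking the same question get the same answer. The query interface below is an assumed illustration, not Intuition’s actual API.

```typescript
// Illustrative deterministic lookup over a knowledge graph; not Intuition's actual API.

interface AttestedTriple {
  subject: string;
  predicate: string;
  object: string;
  trustScore: number; // from staking and signals, as sketched earlier
}

// Given the same graph and the same pattern, this always returns the same answer.
function query(
  graph: AttestedTriple[],
  subject: string,
  predicate: string
): AttestedTriple | undefined {
  return graph
    .filter(t => t.subject === subject && t.predicate === predicate)
    .sort((a, b) => b.trustScore - a.trustScore)[0]; // highest-trust attestation wins
}

const graph: AttestedTriple[] = [
  { subject: "TigerResearch", predicate: "foundedIn", object: "2021", trustScore: 101 },
  { subject: "TigerResearch", predicate: "foundedIn", object: "2019", trustScore: 3 },
];

// Any agent asking "when was Tiger Research founded?" resolves to the same attested answer.
console.log(query(graph, "TigerResearch", "foundedIn")?.object); // "2021"
```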

Intuition also provides the foundation for agent collaboration. Standardized data structures let different agents understand and communicate information the same way. Just as ERC-20 created token compatibility, Intuition’s knowledge graphs create an environment where agents can cooperate based on consistent data.

Intuition goes beyond agent-only infrastructure to function as a foundational layer all digital services can share. It can replace trust systems that each platform currently builds individually—Amazon’s reviews, Uber’s ratings, LinkedIn’s recommendations—with one unified foundation. Just as HTTP provides common communication standards for the web, Intuition provides standard protocols for data structure and trust verification.

The most important change is data portability. Users directly own the data they create and can use it anywhere. Data isolated in individual platforms will connect and reshape the entire digital ecosystem.

5. Rebuilding the Foundation for the Coming Agentic Era

Intuition’s goal is not simple technical improvement. It aims to overcome the technical debt accumulated over the past 20 years and fundamentally redesign web infrastructure. When the Semantic Web was first proposed, the vision was clear, but it lacked incentives to drive participation: even if the vision had been realized, the benefits to individual contributors remained unclear.

The situation has changed. AI advances are making the agentic era a reality. AI agents now go beyond simple tools. They perform complex tasks on our behalf. They make autonomous decisions. They collaborate with other agents. These agents need fundamental innovation in existing web infrastructure to operate effectively.

Balaji quote (Source: Balaji)

As Balaji, former CTO of Coinbase, points out, we need to build proper infrastructure for these agents to operate. The current web resembles unpaved roads rather than highways where agents can move safely on trustworthy data. Each website has different structures and formats. Information is unreliable. Data remains unstructured and agents struggle to understand it. This creates major barriers for agents to perform accurate and efficient work.

Intuition seeks to rebuild the web to meet these demands. It aims to build standardized data structures that agents can easily understand and use. It wants reliable information verification systems. It needs protocols that enable smooth interaction between agents. This resembles how HTTP and HTML created web standards in the early internet days. It represents an attempt to establish new standards for the agentic era.

Challenges remain, of course. The system cannot function properly without sufficient participation and network effects, and reaching critical mass requires considerable time and effort. Overcoming the inertia of existing web ecosystems and establishing new standards is never easy. But this is a challenge that must be solved. If Intuition’s proposed rebuild overcomes these hurdles, it will open new possibilities for the agentic era that is only beginning to take shape.




Disclaimer

This report was partially funded by Intuition. It was independently produced by our researchers using credible sources. The findings, recommendations, and opinions are based on information available at publication time and may change without notice. We disclaim liability for any losses from using this report or its contents and do not warrant its accuracy or completeness. The information may differ from others’ views. This report is for informational purposes only and is not legal, business, investment, or tax advice. References to securities or digital assets are for illustration only, not investment advice or offers. This material is not intended for investors.

Terms of Usage

Tiger Research allows the fair use of its reports. ‘Fair use’ is a principle that broadly permits the use of specific content for public interest purposes, as long as it doesn’t harm the commercial value of the material. If the use aligns with the purpose of fair use, the reports can be utilized without prior permission. However, when citing Tiger Research’s reports, it is mandatory to 1) clearly state ‘Tiger Research’ as the source and 2) include the Tiger Research logo following the brand guidelines. If the material is to be restructured and published, separate negotiations are required. Unauthorized use of the reports may result in legal action.





University Spinout TransHumanity secures £400k


TransHumanity Ltd., a spinout from Loughborough University, has secured approximately £400,000 in pre-seed investment. The round was led by SFC Capital, the UK’s most active seed-stage investor, with additional investment from Silicon Valley-based Plug and Play.

TransHumanity’s vision is to empower faster, smarter human decisions by transforming data into accessible intelligence using large language model-based agentic AI.

Agentic AI refers to artificial intelligence systems that collaborate with people to reach specific goals, understanding and responding in plain English. These systems use AI “agents” — models that can gather information, make suggestions, and carry out tasks in real time — helping people solve problems more quickly and effectively.

TransHumanity’s first product, AptIq, is designed to help transport authorities quickly analyse transport data and models, turning days of analysis into seconds. 

By simply asking questions in plain English, users can gain instant insights to support key initiatives like congestion reduction, road safety, creation of business cases and net-zero targets.

Dr Haitao He, Co-founder and Director of TransHumanity, said: “I am proud to see my rigorous research translated into trusted real-world AI innovation for the transport sector. With this investment, we can now realise my Future Leaders Fellowship vision, scaling a technology that empowers authorities across the UK to deliver integrated, net-zero transport.”

Developed from rigorous research by Dr Haitao He, a UKRI Future Leaders Fellow in Transport AI at Loughborough University, AptIq, previously known as TraffEase, has already garnered significant recognition. 

The technology was named a Top 10 finalist for the 2024 Manchester Prize for AI innovation and was recently highlighted as one of the Top 40 UK tech start-ups at London Tech Week by the UK Department for Business and Trade.

Adam Beveridge, Investment Principal at SFC Capital, said: “We are excited to back TransHumanity. The combination of cutting-edge research, a proven founding team, clear market demand, and positive societal impact makes this exactly the kind of high-growth venture we are committed to supporting.”

AptIq is currently in a test deployment with Nottingham City Council and Transport for Greater Manchester, with plans to expand to other city, regional, and national authorities across the UK within the next 12 months.

With a product roadmap that includes diverse data sources, advanced analytics, and full user control over the AI tool when required, interest from the transport sector is already high. Professor Nick Jennings, Vice-Chancellor and President of Loughborough University, noted: “I am delighted to see TransHumanity fast-tracked from lab to investment-ready spinout.

This journey was accelerated by TransHumanity’s selection as a finalist in the prestigious Manchester Prize and shows what’s possible when the University’s ambition aligns with national innovation policy.”





Legal-Ready AI: 7 Tips for Engineers Who Don’t Want to Be Caught Flat-Footed


An oversimplified approach I have taken in the past to explain wisdom is to share that “We don’t know what we don’t know until we know it.” This absolutely applies to the fast-moving AI space, where unknowingly introducing legal and compliance risk through an organization’s use of AI is a top concern among IT leaders. 

We’re now building systems that learn and evolve on their own, and that raises new questions along with new kinds of risk affecting contracts, compliance, and brand trust.

At Broadcom, we’ve adopted what I’d call a thoughtful ‘move smart, then fast’ approach. Every AI use case requires sign-off from both our legal and information security teams. Some folks may complain, saying it slows them down. But if you’re moving fast with AI and putting sensitive data at risk, you’re inviting trouble if you don’t also move smart.

Here are seven things I’ve learned about collaborating with legal teams on AI projects.

1. Partner with Legal Early On

Don’t wait until the AI service is built to bring legal in. There’s always the risk that choices you make about data, architecture, and system behavior can create regulatory headaches or break contracts later on.

Besides, legal doesn’t need every answer on day one. What they do need is visibility into the gray areas. What data are you using and producing? How does the model make decisions? Could those decisions shift over time? Walk them through what you’re building and flag the parts that still need figuring out.

2. Document Your Decisions as You Go

AI projects move fast, and teams make dozens of early decisions on everything from data sources to training logic. A few months later, chances are no one remembers why those choices were made. Then someone from compliance shows up with questions about those choices, and you’ve got nothing to point to.

To avoid that situation, keep a simple log as you work. Then, should a subsequent audit or inquiry occur, you’ll have something solid to help answer any questions.
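One lightweight way to do this, shown below as an assumed example rather than a prescribed format, is an append-only log where each entry captures the decision, the rationale, and the data involved.

```typescript
// A minimal, assumed format for an append-only AI decision log.
import { appendFileSync } from "fs";

interface DecisionLogEntry {
  date: string;          // when the decision was made
  decision: string;      // what was decided
  rationale: string;     // why it was decided
  dataSources: string[]; // what data is involved
  owner: string;         // who made the call
}

function logDecision(path: string, entry: DecisionLogEntry): void {
  // One JSON object per line keeps the log easy to append to and audit later.
  appendFileSync(path, JSON.stringify(entry) + "\n");
}

logDecision("ai-decisions.log", {
  date: "2025-06-01",
  decision: "Use only anonymized support tickets for fine-tuning",
  rationale: "Avoids exposing customer PII to the training pipeline",
  dataSources: ["support-tickets (anonymized)"],
  owner: "ml-platform-team",
});
```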

3. Build Systems You Can Explain

Legal teams need to understand your system so they can explain it to regulators, procurement officers, or internal risk reviewers. If they can’t, there’s the risk that your project could stall or even fail after it ships.

I’ve seen teams consume SaaS-based AI services without realizing the provider could swap out a backend AI model without their knowledge. If that changes the system’s behavior behind the scenes, it could redirect your data in ways you didn’t intend. That’s one reason why you’ve got to know your AI supply chain, top to bottom. Ensure that the services you build or consume have end-to-end auditability across the AI software supply chain. Legal can’t defend a system if they don’t understand how it works.

4. Watch Out for Shadow AI

Any engineer can subscribe to an AI service and accept the provider’s terms without knowing they don’t have the authority to do that on behalf of the company.

That exposes the organization to major risk. An engineer might accidentally agree to data-sharing terms that violate regulatory restrictions or expose sensitive customer data to a third party.

And it’s not just deliberate use anymore. Run a search in Google and you’re already getting AI output. It’s everywhere. The best way to avoid this is by building a culture where employees are aware of the legal boundaries. You can give teams a safe place to experiment, but at the same time, make sure you know what tools they’re using and what data they’re touching.

5. Help Legal Navigate Contract Language

AI systems get tangled in contract language: ownership rights, retraining rules, model drift, and more. Most engineers aren’t trained to spot those issues, but we’re the ones who understand how the systems behave.

That’s another reason why you’ve got to know your AI supply chain, top to bottom. In this case, legal needs our help reviewing vendor or customer agreements to put the contractual language into the appropriate technical context. What happens when the model changes? How are sensitive data sets safeguarded from being indexed or accessed via AI agents, such as those that use the Model Context Protocol (MCP)? We can translate the technical behavior into simple English, and that goes a long way toward helping the lawyers write better contracts.

6. Design with Auditability in Mind

AI is developing rapidly, with legal frameworks, regulatory requirements, and customer expectations evolving to keep pace. You need to be prepared for what might come next. 

Can you explain where your training data came from? Can you show how the model was tested for bias? Can you justify how it works? If someone from a regulatory body walked in tomorrow, would you be ready?

Design with auditability in mind. Especially when AI agents are chained together, you need to be able to prove that identity and access controls are enforced end-to-end. 

7. Handle Customer Data with Care

We don’t get to make decisions on behalf of our customers about how their data gets used. It’s their data. And when it’s private, it shouldn’t be fed to a model. Period. 

You’ve got to be disciplined about what data gets ingested. If your AI tool indexes everything by default, that can get messy fast. Are you touching private logs or passing anything to a hosted model without realizing it? Support teams might need access to diagnostic logs, but that doesn’t mean third-party models should touch them. Tools that can generate comparable synthetic data, free of any private customer data, are evolving rapidly and could help with support use cases, for example. But these tools and techniques should be fully vetted with your legal and CISO organizations before you use them.

The Reality

The engineering ethos is to move fast. But since safety and trust are on the line, you need to move smart, which means it’s okay if things take a little longer. The extra steps are worth it when they help protect your customers and your company.

Nobody has this all figured out. So ask questions by talking to people who’ve handled this kind of work before. The goal isn’t perfection—it’s to make smart, careful progress. For enterprises, the AI race isn’t a question of “Who’s best?” but rather “Who’s leveraging AI safely to drive the best business outcomes?”





Progress Unveils Subsidiary for AI-Driven Digital Upgrade


Progress Software, a company offering artificial intelligence-powered digital experience and infrastructure software, has launched Progress Federal Solutions, a wholly owned subsidiary that aims to deliver AI-powered technologies to the federal, defense and public sectors.

Progress Federal Solutions to Boost Digital Transformation

The company said Monday the new subsidiary, announced during the Progress Data Platform Summit at the International Spy Museum in Washington, D.C., is intended to fast-track federal agencies’ digital modernization efforts, meet compliance requirements, and advance AI and data initiatives. The subsidiary leverages the data management and integration expertise of MarkLogic, a platform Progress Software acquired in 2023.

Progress Federal Solutions functions independently but will offer the company’s full technology portfolio, including Progress Data Platform, Progress Sitefinity, Progress Chef, Progress LoadMaster and Progress MOVEit. These will be available to the public sector through Carahsoft Technology‘s reseller partners and contract vehicles.

Remarks From Progress Federal Solutions, Carahsoft Executives 

“Federal and defense agencies are embracing data-centric strategies and modernizing legacy systems at a faster pace than ever. That’s why we focus on enabling data-driven decision-making, faster time to value and measurable ROI,” said Cori Moore, president of Progress Federal Solutions.

“Progress is a trusted provider of AI-enabled solutions that address complex data, infrastructure and digital experience needs. Their technologies empower government agencies to build high-impact applications, automate operations and scale securely to meet program goals,” said Michael Shrader, vice president of intelligence and innovative solutions at Carahsoft.




