AI Insights
The End of the Internet As We Know It
The internet as we know it runs on clicks. Billions of them. They fuel ad revenue, shape search results, and dictate how knowledge is discovered, monetized, and, at times, manipulated. But a new wave of AI-powered browsers is trying to kill the click. They’re coming for Google Chrome.
On Wednesday, the AI search startup Perplexity officially launched Comet, a web browser designed to feel more like a conversation than a scroll. Think of it as ChatGPT with a browser tab, but souped up to handle your tasks, answer complex questions, navigate context shifts, and satisfy your curiosity all at once.
Perplexity pitches Comet as your “second brain,” capable of actively researching, comparing options, making purchases, briefing you for your day, and analyzing information on your behalf. The promise is that it does all this without ever sending you off on a wild hyperlink chase across 30 tabs, aiming to collapse “complex workflows into fluid conversations.”
“Agentic AI”
The capabilities of browsers like Comet point to the rapid evolution of agentic AI. This is a cutting-edge field where AI systems are designed not just to answer questions or generate text, but to autonomously perform a series of actions and make decisions to achieve a user’s stated goal. Instead of you telling the browser every single step, an agentic browser aims to understand your intent and execute multi-step tasks, effectively acting as an intelligent assistant within the web environment. “Comet learns how you think, in order to think better with you,” Perplexity says.
Comet’s launch throws Perplexity into direct confrontation with the biggest gatekeeper of the internet: Google Chrome. For over a decade, Chrome has been the dominant gateway, shaping how billions navigate the web. Every query, every click, every ad. It’s all been filtered through a system built to maximize user interaction and, consequently, ad revenue. Comet is trying to blow that model up, fundamentally challenging the advertising-driven internet economy.
And it’s not alone in this ambitious assault. OpenAI, the maker of ChatGPT, is reportedly preparing to unveil its own AI-powered web browser as early as next week, according to Reuters. This tool will likely integrate the power of ChatGPT with Operator, OpenAI’s proprietary web agent. Launched as a research preview in January 2025, Operator is an AI agent capable of autonomously performing tasks through web browser interactions. It leverages OpenAI’s advanced models to navigate websites, fill out forms, place orders, and manage other repetitive browser-based tasks.
Operator is designed to “look” at web pages like a human, clicking, typing, and scrolling, aiming to eventually handle the “long tail” of digital use cases. If integrated fully into an OpenAI browser, it could create a full-stack alternative to Google Chrome and Google Search in one decisive move. In essence, OpenAI is coming for Google from both ends: the browser interface and the search functionality.
Goodbye clicks. Hello cognition
Perplexity’s pitch is simple and provocative: the web should respond to your thoughts, not interrupt them. “The internet has become humanity’s extended mind, while our tools for using it remain primitive,” the company stated in its announcement, advocating for an interface as fluid as human thought itself.
Instead of navigating through endless tabs and chasing hyperlinks, Comet promises to run on context. You can ask it to compare insurance plans. You can ask it to summarize a confusing sentence or instantly find that jacket you forgot to bookmark. Comet promises to “collapse entire workflows” into fluid conversations, turning what used to be a dozen clicks into a single, intuitive prompt.
If that sounds like the end of traditional Search Engine Optimization (SEO) and the death of the familiar “blue links” of search results, that’s because it very well could be. AI browsers like Comet don’t just threaten individual publishers and their traffic; they directly threaten the very foundation of Google Chrome’s ecosystem and Google Search’s dominance, which relies heavily on directing users to external websites.
Google’s Grip is Slipping
Google Search has already been under considerable pressure from AI-native upstarts like Perplexity and You.com. Its own attempts at deeper AI integration, such as the Search Generative Experience (SGE), have drawn criticism for sometimes producing “hallucinations” (incorrect information) and awkward summaries. Simultaneously, Chrome, Google’s dominant browser, is facing its own identity crisis. It’s caught between trying to preserve its massive ad revenue pipeline and responding to a wave of AI-powered alternatives that don’t rely on traditional links or clicks to deliver useful information.
Comet doesn’t just sidestep the old ad-driven model; it fundamentally breaks it. There’s no need to sort through 10 blue links. No need to open 12 tabs to compare specifications, prices, or user reviews. With Comet, you just ask, and let the browser do the work.
OpenAI’s upcoming browser could deepen that transformative shift even further. If it is indeed designed to keep user interactions largely inside a ChatGPT-like interface instead of linking out, it could effectively create an entirely new, self-contained information ecosystem. In such a future, Google Chrome would no longer be the indispensable gateway for knowledge or commerce.
What’s at Stake: Redefining the Internet
If Comet or OpenAI’s browser succeed, the impact won’t be limited to just disrupting search. They will fundamentally redefine how the entire internet works. Publishers, advertisers, online retailers, and even traditional software companies may find themselves disintermediated—meaning their direct connection to users is bypassed—by AI agents. These intelligent agents could summarize their content, compare their prices, execute their tasks, and entirely bypass their existing websites and interfaces.
It’s a new, high-stakes front in the war for how humans interact with information and conduct their digital lives. The AI browser is no longer a hypothetical concept. It’s here.
AI Insights
Artificial Intelligence Coverage Under Cyber Insurance
A small but growing number of cyber insurers are incorporating language into their policies that specifically addresses risks from artificial intelligence (AI). The June 2025 issue of The Betterley Report’s Cyber/Privacy Market Survey identifies at least three insurers that are incorporating specific definitions or terms for AI. This raises an important question for policyholders: Does including specific language for AI in a coverage grant (or exclusion) change the terms of coverage offered?
To be sure, at present few cyber policies expressly address AI. Most insurers appear to be maintaining a “wait and see” approach; they are monitoring the risks posed by AI, but they have not revised their policies. Nevertheless, a few insurers have sought to reassure customers that coverage is available for AI-related events. One insurer has gone so far as to state that its policy “provides affirmative cover for cyber attacks that utilize AI, ensuring that the business is covered for any of the losses associated with such attacks.” To the extent that AI is simply one vector for a data breach or other cyber incident that would otherwise be an insured event, however, it is unclear whether adding AI-specific language expands coverage. On the other side of the coin, some insurers have sought to limit exposure by incorporating exclusions for certain AI events.
To assess the impact of these changes, it is critical to ask: What does artificial intelligence even mean?
This is a difficult question to answer. The field of AI is vast and constantly evolving. AI can curate social media feeds, recommend shows and products to consumers, generate email auto-responses, and more. Banks use AI to detect fraud. Driving apps use it to predict traffic. Search engines use it to rank and recommend search results. AI pervades daily life and extends far beyond the chatbots and other generative AI tools that have been the focus of recent news and popular attention.
At a more technical level, AI also encompasses numerous nesting and overlapping subfields. One major subfield, machine learning, encompasses techniques ranging from linear regression to decision trees. It also includes neural networks, which, when layered together, can be used to power the subfield of deep learning. Deep learning, in turn, is used by the subfield of generative AI. And generative AI itself can take different forms, such as large language models, diffusion models, generative adversarial networks, and neural radiance fields.
That may be why most insurers have been reluctant to define artificial intelligence. A policy could name certain concrete examples of AI applications, but it would likely miss many others, and it would risk falling behind as AI is adapted for other uses. The policy could provide a technical definition, but that could be similarly underinclusive and inflexible. Even referring to subsets such as “generative AI” could run into similar issues, given the complex techniques and applications for the technology.
The risk, of course, is that by not clearly defining artificial intelligence, a policy that grants or excludes coverage for AI could have different coverage consequences than either the insurer or insured expected. Policyholders should pay particular attention to provisions purporting to exclude loss or liability from AI risks, and consider what technologies are in use that could offer a basis to deny coverage for the loss. We will watch cyber insurers’ approach to AI with interest — will most continue to omit references to AI, or will more insurers expressly address AI in their policies?
This article was co-authored by Anna Hamel.
AI Insights
Code Green or Code Red? The Untold Climate Cost of Artificial Intelligence
As the world races to harness artificial intelligence, few are pausing to ask a critical question: What is AI doing to the planet?
AI is being heralded as a game-changer in the global fight against climate change. It is already helping scientists model rising temperatures and extreme weather phenomena, enabling decision-making bodies to predict and prepare for unexpected weather, and allowing energy systems to become smarter and more efficient. According to the World Economic Forum, AI has the potential to contribute up to $5.1 trillion annually to the global economy, provided it is deployed sustainably during the climate transition (WEF, 2025).
Beneath the sleek interfaces and climate dashboards lies a growing environmental cost. The widespread use of generative AI, in particular, is creating a new carbon frontier, one that we’re just beginning to untangle and understand.
Training large-scale AI models is energy-intensive, according to a 2024 MIT report. Training a single GPT-3 sized model can consume roughly 1,300 megawatt-hours of electricity, enough to power almost 120 U.S. homes for a year. And once deployed, AI systems are not static: they continue to consume energy each time a user interacts with them. An AI-generated image, for example, may require as much energy as watching a short video on an online platform, while a large language model query requires almost 10 times more energy than a typical Google search (MIT, 2024).
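As a back-of-the-envelope check on those figures, the training estimate and the homes comparison are consistent with each other, assuming an average U.S. household uses roughly 10,800 kWh of electricity per year (an EIA-style ballpark, not a number from the report itself):

```python
# Rough sanity check of the MIT figures cited above.
TRAINING_KWH = 1_300_000       # ~1,300 MWh to train a GPT-3 sized model, in kWh
HOME_KWH_PER_YEAR = 10_800     # assumed average U.S. household electricity use

homes_powered_for_a_year = TRAINING_KWH / HOME_KWH_PER_YEAR
print(round(homes_powered_for_a_year))  # ≈ 120 homes
```

The arithmetic lands almost exactly on the "almost 120 homes" claim, which suggests the two numbers come from the same underlying estimate.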
As AI becomes embedded into everything from online search to logistics and social media, this energy use is multiplying at scale. The International Energy Agency (IEA) warns that by 2026, data center electricity consumption could double globally, driven mainly by the rise of AI and cryptocurrency. Recent developments around the Digital Euro only add weight to that concern. Without rapid decarbonization of energy grids, this could significantly increase global emissions, undermining progress on climate goals (IEA, 2024).
Chart: Sotiris Anastasopoulos, with data from the IEA’s official website.
The climate irony is real: AI is both a solution to and a multiplier of Earth’s climate challenges.
Still, when used responsibly, AI remains a powerful ally. The UNFCCC’s 2023 “AI for Climate Action” roadmap outlines dozens of promising, climate-friendly applications. AI can detect deforestation from satellite imagery, track methane leaks, help decarbonize supply chains, and forecast the spread of wildfires. In agriculture, AI systems can optimize irrigation and fertilizer use, helping reduce emissions and protect soil. In the energy sector, AI enables real-time management of grids, integrating variable sources like solar and wind while improving reliability. But to unlock this potential, the conversation around AI must evolve, from excitement about its capabilities to accountability for its impact.
This starts with transparency. Today, few AI developers publicly report the energy or emissions cost of training and running their models. That needs to change. The IEA calls for AI models to be accompanied by “energy use disclosures” and impact assessments. Governments and regulators should enforce such standards, just as they do for industrial emissions or vehicle efficiency (UNFCCC, 2023).
Second, green infrastructure must become the default. Data centers must be powered by renewable energy, not fossil fuels. AI models must be optimized for efficiency, not just performance. Instead of racing toward ever-larger models, we should ask what the climate cost of model inflation is, and whether it’s worth it (UNFCCC, 2023).
Third, we need to question the uses of AI itself. Not every application is essential. Does society actually benefit from energy-intensive image generation tools for trivial entertainment or advertising? While AI can accelerate climate solutions, it can also accelerate consumption, misinformation, and surveillance. A climate-conscious AI agenda must weigh trade-offs, not just celebrate innovation (UNFCCC, 2023).
Finally, equity matters. As the UNFCCC report emphasizes, the AI infrastructure powering the climate transition is heavily concentrated in the Global North. Meanwhile, the Global South, home to many of the world’s most climate-vulnerable populations, lacks access to these tools, data, and services. An inclusive AI-climate agenda must invest in capacity-building, data access, and technological advancement to ensure no region is left behind (UNFCCC, 2023).
Artificial intelligence is neither green nor dirty by nature. Like all tools, its impact depends on how and why we use it. We are still early enough in the AI revolution to shape its trajectory, but not for long.
The stakes are planetary. If deployed wisely, AI could help the transition to a net-zero future. If mismanaged, it risks becoming just another accelerant of a warming world.
Technology alone will not solve the climate crisis. But climate solutions without responsible technology are bound to fail.
*Sotiris Anastasopoulos is a student researcher at the Institute of European Integration and Policy of the UoA. He is an active member of YCDF and AEIA and currently serves as a European Climate Pact Ambassador.
This op-ed is part of To BHMA International Edition’s NextGen Corner, a platform for fresh voices on the defining issues of our time.
AI Insights
AI hallucination in Mike Lindell case serves as a stark warning: NPR
MyPillow CEO Mike Lindell arrives at a gathering of supporters of Donald Trump near Trump’s residence in Palm Beach, Fla., on April 4, 2023. On July 7, 2025, Lindell’s lawyers were fined thousands of dollars for submitting a legal filing riddled with AI-generated mistakes.
Octavio Jones/Getty Images
A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell in a Colorado defamation case to pay $3,000 each after they used artificial intelligence to prepare a court filing filled with a host of mistakes and citations of cases that didn’t exist.
Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed the document in February filled with more than two dozen mistakes — including hallucinated cases, meaning fake cases made up by AI tools, Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday.
“Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it,” Wang wrote in her decision. “Indeed, federal courts rely upon the assistance of attorneys as officers of the court for the efficient and fair administration of justice.”
The use of AI by lawyers in court is not, in itself, illegal. But Wang found the lawyers violated a federal rule that requires lawyers to certify that claims they make in court are “well grounded” in the law. Turns out, fake cases don’t meet that bar.
Kachouroff and DeMaster didn’t respond to NPR’s request for comment.
The error-riddled court filing was part of a defamation case involving Lindell, the MyPillow creator, President Trump supporter and conspiracy theorist known for spreading lies about the 2020 election. Last month, Lindell lost the case, which was argued in front of Wang. He was ordered to pay Eric Coomer, a former employee of Denver-based Dominion Voting Systems, more than $2 million after claiming Coomer and Dominion used election equipment to flip votes to former President Joe Biden.
The financial sanctions, and reputational damage, for the two lawyers are a stark reminder for attorneys who, like many others, are increasingly using artificial intelligence in their work, according to Maura Grossman, a professor at the University of Waterloo’s David R. Cheriton School of Computer Science and an adjunct law professor at York University’s Osgoode Hall Law School.
Grossman said the $3,000 fines “in the scheme of things was reasonably light, given these were not unsophisticated lawyers who just really wouldn’t know better. The kind of errors that were made here … were egregious.”
There have been a host of high-profile cases where the use of generative AI has gone wrong for lawyers and others filing legal cases, Grossman said. It’s become a familiar trend in courtrooms across the country: Lawyers are sanctioned for submitting motions and other court filings filled with case citations that are not real and created by generative AI.
Damien Charlotin tracks court cases from across the world where generative AI produced hallucinated content and where a court or tribunal specifically levied warnings or other punishments. There are 206 cases identified as of Thursday — and that’s only since the spring, he told NPR. There were very few cases before April, he said, but for months since there have been cases “popping up every day.”
Charlotin’s database doesn’t cover every single case where there is a hallucination. But he said, “I suspect there are many, many, many more, but just a lot of courts and parties prefer not to address it because it’s very embarrassing for everyone involved.”
What went wrong in the MyPillow filing
The $3,000 fine for each attorney, Judge Wang wrote in her order this week, is “the least severe sanction adequate to deter and punish defense counsel in this instance.”
The judge wrote that the two attorneys didn’t provide any proper explanation of how these mistakes happened, “most egregiously, citation of cases that do not exist.”
Wang also said Kachouroff and DeMaster were not forthcoming when questioned about whether the motion was generated using artificial intelligence.
Kachouroff, in response, said in court documents that it was DeMaster who “mistakenly filed” a draft version of this filing rather than the right copy that was more carefully edited and didn’t include hallucinated cases.
But Wang wasn’t persuaded that the submission of the filing was an “inadvertent error.” In fact, she called out Kachouroff for not being honest when she questioned him.
“Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence,” Wang wrote.
Grossman advised other lawyers who find themselves in the same position as Kachouroff to not attempt to cover it up, and fess up to the judge as soon as possible.
“You are likely to get a harsher penalty if you don’t come clean,” she said.
An illustration picture shows ChatGPT artificial intelligence software, which generates human-like conversation, in February 2023 in Lierde, Belgium. Experts say AI can be incredibly useful for lawyers — they just have to verify their work.
Nicolas Maeterlinck/BELGA MAG/AFP via Getty Images
Trust and verify
Charlotin has found three main issues when lawyers, or others, use AI to file court documents. The first is fake cases created, or hallucinated, by AI chatbots.
The second is when AI invents a fake quote from a real case.
The third is harder to spot, he said: the citation and case name are correct, but the legal argument being cited is not actually supported by the case that is sourced, Charlotin said.
This case involving the MyPillow lawyers is just a microcosm of the growing dilemma of how courts and lawyers can strike the balance between welcoming life-changing technology and using it responsibly in court. The use of AI is growing faster than authorities can build guardrails around it.
It’s even being used to present evidence in court, Grossman said, and to provide victim impact statements.
Earlier this year, a judge on a New York state appeals court was furious after a plaintiff, representing himself, tried to use a younger, more handsome AI-generated avatar to argue his case for him, CNN reported. That was swiftly shut down.
Despite the cautionary tales that make headlines, both Grossman and Charlotin view AI as an incredibly useful tool for lawyers and one they predict will be used in court more, not less.
Rules over how best to use AI differ from one jurisdiction to the next. Judges have created their own standards, requiring lawyers and those representing themselves in court to submit AI disclosures when it’s been used. In a few instances, judges in North Carolina, Ohio, Illinois and Montana have established various prohibitions on the use of AI in their courtrooms, according to a database created by the law firm Ropes & Gray.
The American Bar Association, the national representative of the legal profession, issued its first ethical guidance on the use of AI last year. The organization warned that because these tools “are subject to mistakes, lawyers’ uncritical reliance on content created by a [generative artificial intelligence] tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties.”
It continued, “Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation …”
The Advisory Committee on Evidence Rules, the group responsible for studying and recommending changes to the national rules of evidence for federal courts, has been slow to act and is still working on amendments for the use of AI for evidence.
In the meantime, Grossman has this suggestion for anyone who uses AI: “Trust nothing, verify everything.”