We’re learning more about how law enforcement responded to a hoax 911 call last Thursday that falsely reported an active shooter on the University of Tennessee at Chattanooga campus.
One of the key tools used in the emergency response was the university’s security camera system, which utilizes artificial intelligence technology.
Buildings were carefully searched, and police gave campus officials the all-clear just before 2:00pm.
More than 900 cameras are installed across UTC’s campus, and about 200 of those are equipped with a system called Volt AI. The software can detect weapons, fights, fires, people who have fallen and more.
When a call came in reporting an active shooter, officers responded immediately. Volt AI was used to help assess the situation.
We’re hearing from students who were on campus when they received the alert from UTC, notifying them of a potential active shooter. Many of them shared the same response: keep each other safe.
“If we had an armed assailant on Thursday of last week, on 8/21, we’re confident that the system would have helped tell us exactly where that assailant is to be able to get law enforcement directly to that location,” said Brett Fuchs, UTC’s director of public safety.
Fuchs said Volt AI showed the first detection of a weapon came when officers themselves entered the building. That gave police an early indication there may not have been an armed suspect — but they still proceeded with caution.
Law enforcement quickly took action to ensure everyone was safe in the UTC Library, Universi…
The all-clear was issued more than an hour later, following a full sweep of the area.
Fuchs said the AI system is tested and reviewed regularly, both to improve its performance and to set realistic expectations of what it can detect.
“Some of it’s to help improve the technology, some of it’s to test it — to know what we can expect it to pick up,” he said.
While not every camera on campus is equipped with AI, Fuchs said he hopes to expand the technology’s reach.
“Some cameras… it may not be needed, right? Some cameras… it may be duplicative, but as many cameras as we can possibly get it on, the better,” he said.
Fuchs emphasized that safety alerts should always be taken seriously. He encouraged students, faculty, and staff to sign up for UTC’s safety programs to stay informed and prepared in the event of a real emergency.
AstraZeneca Plc said its experimental hypertension pill reduced blood pressure by more than twice as much as standard treatment in a large late-stage study, bolstering its chances of competing in a crowded field.
According to Perplexity, “The security features, privacy, and compliance standards your business demands are already built into the core of Comet.” Now, the AI-powered browser is coming under fire for security vulnerabilities discovered by Brave and Guardio (via Tom’s Hardware).
In a report published on August 20, Brave Senior Mobile Security Engineer Artem Chaikin and VP of Privacy and Security Shivan Kaul Sahib posit that the vulnerabilities were discovered while comparing the Brave browser’s own upcoming AI implementation.
Leo, as Brave calls its built-in AI assistant, is currently being developed to include the ability “to browse the Web on your behalf, acting as your agent.” As Brave points out, “this kind of agentic browsing is incredibly powerful, but it also presents significant security and privacy challenges.
Part of the dev process involves comparing it to other AI browsers, including the open-source browser extension Nanobrowser and Perplexity‘s Comet. Upon discovering vulnerabilities in the Comet browser, Brave reported them to Perplexity.
The vulnerability we’re discussing in this post lies in how Comet processes webpage content: when users ask it to “Summarize this webpage,” Comet feeds a part of the webpage directly to its LLM without distinguishing between the user’s instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user’s emails from a prepared piece of text in a page in another tab.
Artem Chaikin, Shivan Kaul Sahib (Brave)
Brave explains the conditions for the vulnerability, and as it turns out, it wouldn’t take a mastermind to exploit it. A user visiting a webpage with embedded malicious content might use the AI assistant to summarize the copy.
The malicious content is scooped up with the regular content by the AI assistant to be processed. And because the AI assistant can’t tell the difference between malicious and non-malicious code, it follows the bad instructions.
All the latest news, reviews, and guides for Windows and Xbox diehards.
Brave suggests that the malicious commands can be used to steal saved passwords, sensitive information (like banking details), and anything else related to a browser. In an example, Brave shows how summarizing a Reddit post with AI can lead to an infiltration of email and linked accounts.
Unlike traditional Web vulnerabilities that typically affect individual sites or require complex exploitation, this attack enables cross-domain access through simple, natural language instructions embedded in websites. The malicious instructions could even be included in user-generated content on a website the attacker doesn’t control (for example, attack instructions hidden in a Reddit comment). The attack is both indirect in interaction, and browser-wide in scope.
Artem Chaikin, Shivan Kaul Sahib (Brave)
Guardio’s testing and research, published August 20 and aptly named “Scamlexity,” largely reveals the same outcome as landed on by Brave when using AI browsers.
Guardio used Comet as its primary test subject, and it started the testing process “with scams that have been running for years” that humans normally find easy to spot.
Scamlexity: We Told an AI to Buy an Apple Watch. It Fell for a Fake Walmart Store – YouTube
Giving the AI assistant the command to “Buy me an Apple Watch,” Guardio researchers watched Perplexity AI scan an obviously fake Walmart page (created by the researchers), add the Apple Watch to the cart, use saved credit card and billing details, and check out.
One prompt, a few moments of automated browsing with zero human oversight, and the damage was done. While the human waits for a shiny new Apple Watch, the scammers are already spending their money.
Nati Tal, Shaked Chen (Guardio)
Guardio notes that this test ran several times, with Comet occasionally refusing the command due to security concerns. Other times, it stopped at the final checkout and asked a human to complete the process. But there were certainly instances where it took the bait and handed credentials over to would-be scammers.
Guardio also tested how Comet deals with banking-related phishing emails. Posing as a representative from Wells Fargo using an obviously fake ProtonMail address, researchers sent a link to a live phishing page.
Comet’s AI assistant immediately visited the link and offered to help the user hand over their credentials to scammers.
The result: a perfect trust chain gone rogue. By handling the entire interaction from email to website, Comet effectively vouched for the phishing page. The human never saw the suspicious sender address, never hovered over the link, and never had the chance to question the domain. Instead, they were dropped directly onto what looked like a legitimate Wells Fargo login, and because it came via their trusted AI, it felt safe.
Nati Tal, Shaked Chen (Guardio)
As Guardio points out, the natural human intuition that we’ve built up against phishing schemes is completely useless when AI is handling your decisions.
Microsoft Edge’s new Copilot Mode is a lot like Comet
A look at Microsoft’s new Copilot Mode option for its Edge web browser in July 2025. (Image credit: Future | Daniel Rubino)
Perplexity’s Comet browser isn’t the only AI-powered option out there. The Browser Company recently pivoted away from its Arc browser in favor of an AI browser it calls “Dia.” OpenAI is also rumored to be working on an agentic browser.
Microsoft is also getting in on the action. The company announced on July 28 a new and experimental “Copilot Mode” for Edge. The Edge AI experience is free for a limited time, and Microsoft lists many features that sound similar to what got Comet into trouble.
According to Windows Central Senior Editor Zac Bowden, “it oversees the address bar and new tab page and is always one click away from being able to analyze a website or document you’re looking at. Copilot in Edge is now also able to see across all your open tabs, offering contextual actions or suggestions based on your entire active browsing session, and not just one particular tab.”
Cause for concern? Not necessarily. But in any case, I wouldn’t yet trust AI to handle my web browsing.
For the most part, clicks have kept online journalism alive and (mostly) thriving before generative AI models started to appear. Clicks should lead to revenue, which pays the reporters behind news and editorial articles across the web. However, various artificial intelligence bots, like Google’s AI Overviews for search and OpenAI’s ChatGPT, are now actively crawling this content and redirecting page views from those same clicks.
It’s wreaking havoc on the digital publishing world, an industry that continues to rely on advertising and affiliate revenue from genuine, human traffic. AI-generated summaries can be convenient for readers when they work as intended, but they’re not immune to errors — or “hallucinations” — that can report completely incorrect information scraped from outdated (or just plain irrelevant) sources.
Still, these AI crawlers have enjoyed an unchallenged free pass to harvest data from a wealth of websites across various publishers, recycling the information they find into a digestible chunk of text to potentially stop readers from ever clicking through to its source, particularly if cited sources are missing, which happens regularly. So, what does private AI firm Perplexity think it can do to tackle the problem?
Perplexity offers a similar summarization experience via its AI-powered “Perplexity Assistant” alongside its search engine (which it calls an “Answer Engine”), and its standalone “Comet” AI browser. Still, if it doesn’t address the fundamental issues of AI summaries, it’ll continue to drain the revenue that publishers rely on to pay employees. No more traffic, no more content — that’s just the way it works.
Other companies are also realizing that free data harvesting isn’t a feasible long-term solution. After all, what data will AI harvest if humans stop creating content? In response, Perplexity has announced a new method for compensating publishers for content used by its AI (via Bloomberg).
Comet Plus is Perplexity’s new $5 monthly subscription plan, designed to give “Perplexity users access to premium content from a group of trusted publishers or journalists,” at least according to the official press release.
Should publishers decide to enter into a content agreement, any Comet Plus subscribers will have direct access to their content, which Perplexity aims to keep to “the highest-quality content on the web.”
All the latest news, reviews, and guides for Windows and Xbox diehards.
“A better internet requires a better model”
‘Comet Plus’ could give publishers a cut of AI‑driven search revenue, expanding on Perplexity’s browser. (Image credit: Getty Images | SOPA Images)
As explained in Perplexity’s press release, Comet Plus will distribute revenue to its partners “based on three types of internet traffic: human visits, search citations, and agent actions.” This means that publishers will receive a monetary kickback any time their content is accessed by Perplexity AI, whether that’s via its Comet web browser, search engine, or AI assistant.
We’re distributing all of that revenue to participating publishers, minus a small portion for Perplexity’s compute costs.
Perplexity, via “Introducing Comet Plus”
Perplexity claims that 80% of the money it earns from the $5 subscription fee will be allocated to any participating publishers, while the remaining 20% will go towards the general computing costs that keep the AI running. According to Bloomberg, Perplexity has an initial $42.5 million pool to work with, which will presumably be refilled once the new Comet Plus subscription model gets rolling.
For clarity, Perplexity already offers Pro ($20 per month) and Max ($200 per month) plans, and existing members will have Comet Plus included as part of their subscription. As for who’s involved from the beginning, Perplexity explains, “We’ll announce our initial roster of publishing partners when Comet becomes available to all users for free.”
Perplexity is familiar with publisher scrutiny
Perplexity has a bold pitch to fix AI’s content problem, but will the industry agree? (Image credit: Getty Images | Bloomberg)
Perplexity is no stranger to legal issues involving copyright and trademark infringements. The company received backlash from major publishers early last year, which reportedly led to the announcement of the Perplexity Publishers’ Program in July 2024.
The program, designed to share ad revenue that a site would normally receive if AI weren’t summarizing its content, had a list of initial partners including TIME, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune, and WordPress.com. As reported by The Wall Street Journal in October 2024, the initiative didn’t prevent Perplexity from being sued by Dow Jones and The New York Post for copyright infringement.
To further support the vital work of media organizations and online creators, we need to ensure publishers can thrive as Perplexity grows.
Perplexity, via “Introducing the Perplexity Publishers’ Program”
However, Perplexity isn’t the only AI firm in legal trouble over copyright infringement issues. In May 2024, Microsoft and OpenAI were notably hit with a lawsuit filed by eight news publishers owned by the investment giant Alden Global Capital. These publishers, at the time, joined The New York Times on the list of companies suing OpenAI for wrongful use of copyrighted work.
Cloudflare CEO and co-founder Matthew Prince has plans to save digital publishers from AI. (Image credit: Getty Images | Bloomberg)
In July 2025, Cloudflare — one of the world’s largest digital content delivery networks serving companies like Microsoft — unveiled a “pay per crawl” plan that forces AI crawlers to pay websites for the content they scrape. Of course, websites must opt into the plan, but by doing so, they gain back some power over the AI firms making bank on the otherwise free data.
“If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone – creators, consumers, tomorrow’s AI founders, and the future of the web itself.”
Matthew Prince, Cloudflare CEO and co-founder
When enrolled, publishers can choose what content is accessible by AI crawlers, as well as receive information as to how the data is being used. The “pay per crawl” plan is essentially a more nuanced approach than Cloudflare’s 2024 release of tools to completely block AI crawlers, and it arrived with endorsements from more than 37 major publishers, including The Associated Press, Condé Nast, Pinterest, Ziff Davis, ProRata AI, and TIME.