Tools & Platforms
AI start-up Perplexity makes surprise $34.5bn bid for Google Chrome

Artificial intelligence (AI) start-up Perplexity has made a surprise $34.5bn (£25.6bn) takeover bid for Google’s Chrome internet browser.
Moving Chrome to an independent operator committed to user safety would benefit the public, Perplexity said in a letter to Sundar Pichai, the boss of Google’s owner Alphabet.
But one technology industry investor called the offer a “stunt” that is much lower than Chrome’s true value, and highlighted that it is not clear whether the platform is even for sale.
The BBC has contacted Google for comment. The firm has not announced any plans to sell Chrome – the world’s most popular web browser with an estimated three billion-plus users.
Google’s dominance of the search engine and online advertising market has come under intense scrutiny, with the technology giant embroiled in years of legal wrangling as part of two antitrust cases.
A US federal judge is expected to issue a ruling this month that could see Google being ordered to break up its search business.
The company has said it would appeal such a ruling, calling the idea of spinning off Chrome an “unprecedented proposal” that would harm consumers and security.
A spokesman for Perplexity told the BBC that its bid marks an “important commitment to the open web, user choice, and continuity for everyone who has chosen Chrome.”
As part of the proposed takeover, Perplexity said it would continue to have Google as the default search engine within Chrome, though users could adjust their settings.
The firm said it would also maintain and support Chromium, a widely-used open-source platform that supports Chrome and other browsers including Microsoft Edge and Opera.
Perplexity did not respond to queries about how the proposed deal would be funded. In July, it had an estimated value of $18bn.
Technology industry investor and start-up founder Heath Ahrens called Perplexity’s move a “stunt, and nowhere near Chrome’s true value, given its unmatched data and reach.”
“The offer isn’t serious, but if someone like Sam Altman or Elon Musk tripled it, they could genuinely secure dominance for their AI,” he added.
It is also not clear whether Google is considering selling the platform, Tomasz Tunguz from Theory Ventures told the BBC.
He also said the offer falls far short of the browser’s worth, “given the value of Chrome is likely significantly higher – maybe ten times more valuable than the bid or more.”
Perplexity’s app is among the rising players in the generative AI race, alongside more well-known platforms like OpenAI’s ChatGPT and Google’s Gemini.
Last month, it launched an AI-powered browser called Comet.
The company made headlines earlier this year after offering to buy the American version of TikTok, which faces a deadline in September to be sold by its Chinese owner or be banned in the US.
Perplexity has reportedly drawn interest from technology giants including Apple and Facebook-owner Meta.
Tools & Platforms
OpenAI to burn through $115B by 2029 – Computerworld

The AI emperor is not wearing any clothes — I mean, seriously, he’s as naked as a jaybird. Or to be more precise, the big-name AI companies are burning through unprecedented amounts of money this year. With no clear plan to make a profit anytime soon. (Sure, Nvidia is coining cash with its chip foundries, but the AI software companies are another matter.)
For example, we now know — thanks to analysis from The Information — that OpenAI will burn $115 billion (that’s billion with a capital B) by 2029. That’s up $80 billion from previous estimates. On top of that, OpenAI has ordered $10 billion of Broadcom’s yet-to-ship AI chips for its yet-to-break-ground proprietary data centers. Oh, and there’s the $500 billion that OpenAI and friends SoftBank, Oracle, and MGX are already committed to spending on the Stargate Project AI data centers.
It’s not just OpenAI. Meta, Amazon, Alphabet, and Microsoft will collectively spend up to $320 billion in 2025 on AI-related technologies. Amazon alone aims to invest more than $100 billion on its AI initiatives, while Microsoft will dedicate $80 billion to datacenter expansion for AI workloads. And Meta’s CEO has set an AI budget of around $60 billion for this year.
Tools & Platforms
Babylon Bee 1, California 0: Court Strikes Down Law Regulating Election-Related AI Content | American Enterprise Institute

By reducing traditional barriers of content creation, the AI revolution holds the potential to unleash an explosion in creative expression. It also increases the societal risks associated with the spread of misinformation. This tension is the subject of a recent landmark judicial decision, Babylon Bee v Bonta (hat tip to Ajit Pai, whose social media account remains an outstanding follow). The eponymous satirical news site and others challenged California’s AB 2839, which prohibited the dissemination of “materially deceptive” AI-generated audio or video content related to elections. Although the court recognized that the case presented a novel question about “synthetically edited or digitally altered” content, it struck down the law, concluding that the rise of AI does not justify a departure from long-standing First Amendment principles.
AB 2839 was California’s attempt to regulate the use of AI and other digital tools in election-related media. The law defined “materially deceptive” content as audio or visual material that has been intentionally created or altered so that a reasonable viewer would believe it to be an authentic recording. It applied specifically to depictions of candidates, elected officials, election officials, and even voting machines or ballots, where the altered content was “reasonably likely” to harm a candidate’s electoral prospects or undermine public confidence in an election. While the statute carved out exceptions for candidates making deepfakes of themselves and for satire or parody, those exceptions required prominent disclaimers stating that the content had been manipulated.
The court recognized that the electoral context raises the stakes for both parties. Because the law regulated speech on the basis of content, the court applied strict scrutiny: The law is constitutional only if it serves a compelling governmental interest and is the least restrictive means of protecting that interest. On the one hand, the court recognized that the state has a compelling interest in preserving the integrity of its election process. California noted how AI-generated robocalls purporting to be from President Biden encouraged New Hampshire voters not to go to the polls during the 2024 primary. But on the other hand, the Supreme Court has recognized that political speech occupies the “highest rung” of First Amendment protection. That tension is the opinion’s throughline: While elections justify significant regulation, they also demand the most protection for individual speech.
But it ultimately held that California struck the wrong balance. The state argued that the bill was a logical extension of traditional harmful speech regulations such as defamation or fraud. But the court ruled that the law reached much further. It did not limit liability to instances of actual harm, but to any content “reasonably likely” to cause material harm. And importantly, it did not limit recovery to actual candidate victims, but instead allowed any recipient of allegedly deceptive content to sue for damages. This private right of action deputized roving “censorship czars” across the state whose malicious or politically motivated suits risk chilling a broad swath of political expression.
Given this breadth, the court found the law’s safe harbor was insufficient. The law exempted satirical content (such as that produced by the Bee) as long as it carried a disclaimer that the content was digitally manipulated, in accordance with the act’s formatting requirements. But the court found that this compelled disclosure was itself unconstitutional, as it drowned out the plaintiff’s message: “Put simply, a mandatory disclaimer for parody or satire would kill the joke.” This was especially true in contexts such as mobile devices, where the formatting requirements meant the disclaimer would take up the entire screen—a problem that I have discussed elsewhere in the context of Federal Trade Commission disclaimer rules.
Perhaps most importantly, the court recognized the importance of counter-speech and market solutions as alternative remedies to disinformation. It credited crowd-sourced fact-checking such as X’s community notes, and AI tools such as Grok, as scalable solutions already being adopted in the marketplace. And it noted that California could fund AI educational campaigns to raise awareness of the issue or form committees to combat disinformation via counter-speech.
The court’s emphasis on private, speech-based solutions points the way forward for other policymakers wrestling with deepfakes and other AI-generated disinformation concerns. Private, market-driven solutions offer a more constitutionally sound path than empowering the state to police truth and risk chilling protected expression. The AI revolution is likely to disrupt traditional patterns of content creation and dissemination in society. But fundamental First Amendment principles are sufficiently flexible to adapt to these changes, just as they have when presented with earlier disruptive technologies. When presented with problematic speech, the first line of defense is more speech—not censorship. Efforts at the state or federal level to regulate AI-generated content should respect this principle if they are to survive judicial scrutiny.
Tools & Platforms
AI And Creativity: Hero, Villain – Or Something Far More Nuanced? – New Technology

As part of SXSW’s first-ever UK edition, Lewis Silkin
brought together a packed room to hear five sharp minds –
photographer-advocate Isabelle Doran, tech founder Guy Gadney,
licensing entrepreneur Benjamin Woollams, and Commercial partners
Laura Harper and Phil Hughes – wrestle with one deceptively simple
question: is AI a hero or a villain in the creative world?
Spoiler: it’s neither. Over sixty fast-paced minutes, the
panel dug into the real-world impact of generative models, the gaps
in current law and the uneasy economics facing everyone from
freelancers to broadcasters. We’ve distilled the conversation
into six take-aways that matter to anyone who creates, commissions
or monetises content.
1. Generative AI is already taking work – fast
“Generative AI is competing with creators in their
place of work,” warned Isabelle Doran, citing her
Association of Photographers’ latest survey. In September 2024,
30% of respondents had lost a commission to AI; five months later
that figure hit 58%. The fallout runs wider than photographers.
When a shoot is cancelled, stylists, assistants and post-production
teams stand idle too – a ripple effect the panel believes
that policy-makers ignore at their peril.
2. Yet the tech also unlocks new forms of storytelling
Guy Gadney was quick to balance the gloom: “It’s a
proper tsunami in the sense of the breadth and volume that’s
changing,” he said, “but it also lets us ask
what stories we can tell now that we couldn’t
before.” His company, Charismatic AI, is building tools
that let writers craft interactive narratives at a speed and scale
unheard of two years ago. The opportunity, he argued, lies in
marrying that capability with fair economic models rather than
trying to “block the tide”.
3. The law isn’t a free-for-all – but it is
fragmenting
Laura Harper cut through the noise: “The status quo at
the moment is uncertain and it depends on what country you’re
operating in.” In the UK, copyright can subsist in
computer-generated works; in the US, it can’t. EU rules require
commercial text-and-data miners to respect opt-outs; UK law
doesn’t – yet. Add divergent notions of “fair
use” and you get a regulatory patchwork that leaves creators
guessing and investors hesitating.
4. Transparency is the missing link
Phil Hughes nailed the practical blocker: “We can’t
build sensible licensing schemes until we know what data went into
each model.” Without a statutory duty to disclose
training sets, claims for compensation – or even informed
consent – stall. Isabelle Doran backed him up, pointing to
Baroness Kidron’s amendment that would force openness via the
UK’s Data Act. The Lords have now sent that proposal back to
the Commons five times; every extra week, more unlicensed works are
scraped.
5. Collective licensing could spread the load
Individual artists can’t negotiate with OpenAI on equal terms,
but Benjamin Woollams sees hope in a pooled approach. “Any
sort of compensation is probably where we should start,”
he said, arguing for collective rights management to mirror how
music collecting societies handle radio play. At True Rights
he’s developing pricing tools to help influencers understand
usage clauses before they sign them – a practical step
towards standardisation in a market famous for anything but.
6. Personality rights may be the next frontier
Copyright guards expression; it doesn’t stop a model cloning
your voice, gait or mannerisms. “We need to strengthen
personality rights,” Isabelle Doran urged, echoing calls
from SAG-AFTRA and beyond. Think passing off on steroids: a legal
shield for the look, sound and biometric data that make a performer
unique. Laura Harper agreed – but reminded us that
recognition is only half the battle. Enforcement mechanisms,
cross-border by default, must follow fast.
Where does that leave us?
AI is not marching creators to the cliff edge, but it is forcing
a reckoning. The panel’s consensus was clear:
- We can’t uninvent generative tools – nor should we.
- Creators deserve both transparency and a cut of the value chain.
- Government must move quickly, or the UK risks watching leverage, investment and talent drift overseas.
As Phil Hughes put it in closing:
“We all know artificial intelligence has unlocked
extraordinary possibilities across the creative industries. The
question is whether we’re bold enough and organised enough to
make sure those possibilities pay off for the people whose
imagination feeds the machine.”
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.