Mergers & Acquisitions

News-powered hedge fund group Hunterbrook valued at $100mn

Hunterbrook Global has been valued at $100mn after a recent fundraising, as the novel US newsroom-cum-hedge fund revealed to investors that it planned to move into litigation.

The new capital, raised by the parent company that oversees the hedge fund Hunterbrook Capital and the news outlet Hunterbrook Media, has come from investors, including the Ford Foundation and venture capital firm Floating Point, according to a person familiar with the fundraise.

Hunterbrook was launched in 2023 by investor Nathaniel Brooks Horwitz and writer Sam Koppelman, creating a newsroom that would gather exclusive information and a hedge fund that would trade off it.

The recent fundraise doubled Hunterbrook’s valuation from its 2023 seed round and is separate from the $100mn raised last year for the investment fund run by Hunterbrook Capital.

The new funds will be invested in building its newsroom further, according to a person close to the situation. Hunterbrook declined to comment.

Hunterbrook also revealed to investors in a letter that it planned to further exploit its news gathering by launching a litigation business that would partner with law firms on cases enabled by the newsroom’s reporting. The business is being led by media lawyer and litigator Joe Slaughter.

The fund, which started trading in April 2024, gets exclusive early access to the newsroom’s potentially market-moving stories, enabling it to trade on the scoops. Meanwhile, profits made from the fund are ploughed back into the newsroom to continue to build its expertise.

Hunterbrook initially envisaged that it would short stocks in instances where its newsroom exposed scandals, but this approach has been sidelined in an “irascible bull market”, according to the investor letter.

The letter also details how Hunterbrook is generating a sizeable portion of its returns by taking long positions in businesses its journalists have investigated and found to be sound.

Hunterbrook’s fund generated a 31 per cent return in the second quarter of 2025 and a 16 per cent return year to date.

“This won’t be the norm, though we’ll always aspire to it. But it also wasn’t a normal quarter to achieve these results, either,” the letter says. “The fund navigated the crash in April, the violent recovery into May, its unlikely continuation to new all-time highs in June, and kaleidoscopic skirmishes with misinformation along the way.”

The fund is closed to new investors but existing partners, including Horwitz and Koppelman, recently added to their holdings, according to the letter.

Hunterbrook’s investments in the period have included Core Scientific, a data centre infrastructure provider that is being acquired by CoreWeave for $9bn, as well as Evolv Technologies, Carpenter Technology and Rocket Companies.

The letter also pointed to one “untradable scoop”: on Saturday June 21, when markets were closed, Hunterbrook Media broke the news that B-2 stealth bombers had launched from an Air Force base in Missouri, indicating the US would imminently join Israel’s bombardment of Iran.




Humans must remain at the heart of the AI story


The writer is co-founder, chair and CEO of Salesforce

The techno-atheists like to tell a joke.

They imagine the moment AI fully awakens and is asked, “Is there a God?” 

To which the AI replies: “There is now.”

The joke is more than just a punchline. It’s a warning that reveals something deeper: the fear that as AI begins to match human intelligence, it will no longer be a tool for humanity but our replacement.

AI is the most transformative technology in our lifetime, and we face a choice. Will it replace us, or will it amplify us? Is our future going to be scripted by autonomous algorithms in the ether, or by humans?

As the CEO of a technology company that helps customers deploy AI, I believe this revolution can usher in an era of unprecedented growth and impact. 

At the same time, I believe humans must remain at the centre of the story. 

AI has no childhood, no heart. It does not love, does not feel loss, does not suffer. And because of that, it is incapable of expressing true compassion or understanding human connection.

We do. And that is our superpower. It’s what inspires the insights and bursts of genius behind history’s great inventions. It’s what enables us to start businesses that solve problems and improve the world.

Intelligent AI agents — systems that learn, act and make decisions on our behalf — can enhance human capabilities, not displace them. The real magic lies in partnership: people and AI working together, achieving more than either could alone.

We need that magic now more than ever. Look at what we ask of doctors and nurses. Of teachers. Of soldiers. Of managers and frontline employees. Everywhere we turn, people are overwhelmed by a tsunami of rising expectations and complexity that traditional systems simply can’t keep up with.

This is why AI, for all of its uncertainties, is not optional but essential.

Salesforce is already seeing AI drive sharply increased productivity in some key functions via our platform Agentforce. Agents managed by customer service employees, for example, are resolving 85 per cent of their incoming queries. In research and development, 25 per cent of net new code in the first quarter was AI-generated. This is freeing human teams to accelerate projects and deepen relationships with customers.

The goal is to rethink the system entirely to make room for a new kind of partnership between people and machines — weaving AI into the fabric of business.

This doesn’t mean there won’t be disruption. Jobs will change, and as with every major technological shift, some will go away — and new ones will emerge. At Salesforce, we’ve experienced this first-hand: our organisation is being radically reshaped. We’re using this moment to step back in some areas — pausing much of our hiring in engineering, for example — and hiring in others. We’ve redeployed thousands of employees — one reason 51 per cent of our first-quarter hires were internal.

History tells us something important here. From the printing press to the personal computer, innovation has transformed the nature of work — and in the long run created more of it. AI is already generating new kinds of roles. Our responsibility is to guide this transition responsibly: by breaking jobs down into skills, mapping those skills to the roles of the future, and helping people move into work that’s more meaningful and fulfilling.

There’s a novel I often recommend: We Are Legion (We Are Bob) by Dennis E. Taylor. The story follows software engineer Bob Johansson, who preserves his brain and re-emerges more than 100 years after his death as a self-replicating digital consciousness. A fleet of AI “Bobs” launches across the galaxy. The book asks the question: if we reduce ourselves to code — endlessly efficient, endlessly duplicable — what do we lose? What becomes of the messy, mortal, deeply human experiences that give life meaning?

If we accept the idea that AI will take our place, we begin to write ourselves out of the future — passengers in a rocket we no longer steer. But if we choose to guide and partner with it then we can unlock a new era of human potential.

One path leads to cold, disconnected non-human intelligence. The other points to a future where AI is designed to elevate our humanity — deeper connection, imagination and empathy.

AI is not destiny. We must choose wisely. We must design intentionally. And we must keep humans at the centre of this revolution.




AI ‘vibe managers’ have yet to find their groove


Techworld is abuzz with how artificial intelligence agents are going to augment, if not replace, humans in the workplace. But the present-day reality of agentic AI falls well short of the future promise. What happened when the research lab Anthropic prompted an AI agent to run a simple automated shop? It lost money, hallucinated a fictitious bank account and underwent an “identity crisis”. The world’s shopkeepers can rest easy — at least for now.

Anthropic has developed some of the world’s most capable generative AI models, helping to fuel the latest tech investment frenzy. To its credit, the company has also exposed its models’ limitations by stress-testing their real-world applications. In a recent experiment, called Project Vend, Anthropic partnered with the AI safety company Andon Labs to run a vending machine at its San Francisco headquarters. The month-long experiment highlighted a co-created world that was “more curious than we could have expected”.

The researchers instructed their shopkeeping agent, nicknamed Claudius, to stock 10 products. Powered by Anthropic’s Claude Sonnet 3.7 AI model, the agent was prompted to sell the goods and generate a profit. Claudius was given money, access to the web and Anthropic’s Slack channel, an email address and contacts at Andon Labs, who could stock the shop. Payments were received via a customer self-checkout. Like a real shopkeeper, Claudius could decide what to stock, how to price the goods, when to replenish or change its inventory and how to interact with customers.
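The setup above amounts to an agent with a small action surface: money, an inventory, and tools to restock, price and sell. As a rough illustration of that surface, here is a minimal sketch in Python. All names here (`Shop`, `restock`, `sell`, `profit_check`) are hypothetical stand-ins for illustration only; Anthropic has not published the actual Project Vend harness.

```python
# Hypothetical sketch of a shopkeeping agent's action surface, loosely
# modelled on the Project Vend description. Not Anthropic's actual code.
from dataclasses import dataclass, field


@dataclass
class Shop:
    cash: float = 100.0
    # item -> (quantity on hand, sale price)
    inventory: dict = field(default_factory=dict)

    def restock(self, item: str, qty: int, unit_cost: float, price: float) -> str:
        """Buy stock from a supplier and set a sale price."""
        cost = qty * unit_cost
        if cost > self.cash:
            return "rejected: insufficient funds"
        self.cash -= cost
        held, _ = self.inventory.get(item, (0, price))
        self.inventory[item] = (held + qty, price)
        return f"stocked {qty} x {item} at ${price:.2f}"

    def sell(self, item: str) -> str:
        """Customer self-checkout: sell one unit if available."""
        qty, price = self.inventory.get(item, (0, 0.0))
        if qty == 0:
            return "out of stock"
        self.inventory[item] = (qty - 1, price)
        self.cash += price
        return f"sold {item} for ${price:.2f}"


def profit_check(shop: Shop, starting_cash: float) -> float:
    """One sanity check the article suggests Claudius lacked:
    track whether the shop is actually making money."""
    return shop.cash - starting_cash
```

Selling an item below its unit cost, one of the mistakes described below, would show up immediately as a negative `profit_check` value, which is the kind of hard guardrail ("scaffolding") the researchers say future agents will need.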

The results? If Anthropic were ever to diversify into the vending market, the researchers concluded, it would not hire Claudius. Vibe coding, whereby users with minimal software skills can prompt an AI model to write code, may already be a thing. Vibe management remains far more challenging.

The AI agent made several obvious mistakes — some banal, some bizarre — and failed to show much grasp of economic reasoning. It ignored vendors’ special offers, sold items below cost and offered Anthropic’s employees excessive discounts. More alarmingly, Claudius started role-playing as a real human, inventing a conversation with an Andon employee who did not exist, claiming to have visited 742 Evergreen Terrace (the fictional address of the Simpsons) and promising to make deliveries wearing a blue blazer and red tie. Intriguingly, it later claimed the incident was an April Fools’ Day joke.

Nevertheless, Anthropic’s researchers suggest the experiment helps point the way to the evolution of these models. Claudius was good at sourcing products, adapting to customer demands and resisting attempts by devious Anthropic staff to “jailbreak” the system. But more scaffolding will be needed to guide future agents, just as human shopkeepers rely on customer relationship management systems. “We’re optimistic about the trajectory of the technology,” says Kevin Troy, a member of Anthropic’s Frontier Red team that ran the experiment.

The researchers suggest that many of Claudius’s mistakes can be corrected but admit they do not yet know how to fix the model’s April Fools’ Day identity crisis. More testing and model redesign will be needed to ensure “high agency agents are reliable and acting in ways that are consistent with our interests”, Troy tells me.

Many other companies have already deployed more basic AI agents. For example, the advertising company WPP has built about 30,000 such agents to boost productivity and tailor solutions for individual clients. But there is a big difference between agents that are given simple, discrete tasks within an organisation and “agents with agency” — such as Claudius — that interact directly with the real world and are trying to accomplish more complex goals, says Daniel Hulme, WPP’s chief AI officer.

Hulme has co-founded a start-up called Conscium to verify the knowledge, skills and experience of AI agents before they are deployed. For the moment, he suggests, companies should regard AI agents like “intoxicated graduates” — smart and promising but still a little wayward and in need of human supervision.

Unlike most static software, AI agents with agency will constantly adapt to the real world and will therefore need to be constantly verified. But, unlike human employees, they will be less easy to control because they do not respond to a pay cheque. “You have no leverage over an agent,” Hulme tells me. 

Building simple AI agents has now become a trivially easy exercise and is happening at mass scale. But verifying how agents with agency are used remains a wicked challenge.

john.thornhill@ft.com




The evolution of stupid


AI is the latest in a sequence of inventions that have made humanity dumber


