
Amazon weighs further investment in Anthropic to deepen AI alliance

Amazon is weighing another multibillion-dollar investment in Anthropic to deepen a strategic alliance that the tech companies believe will provide an edge in the global competition to profit from artificial intelligence.

The Seattle-based cloud and ecommerce group has discussed plans to extend beyond the $8bn it has already ploughed into the San Francisco-based AI model builder, according to multiple people with knowledge of the talks.

A new deal would further a relationship that — according to interviews with more than a dozen Amazon and Anthropic executives, board members and investors — has become vital to both their futures.

The investment would ensure Amazon remains one of Anthropic’s largest shareholders, positioning it ahead of Google, which has also invested more than $3bn, and providing a bulwark against the similar multibillion-dollar partnership between Microsoft and OpenAI.

It would also deepen ties as the pair collaborate on one of the world’s largest data centre projects and team up on sales of Anthropic’s technology to Amazon’s cloud computing customers.

“We quickly realised that we had many shared goals that were fundamentally critical,” said Dan Grossman, vice-president of worldwide corporate development at Amazon. “The size of the [existing investment] represents our ambition.”

The strategy of close alignment comes with risks. Microsoft’s $14bn investment in OpenAI helped the duo take an early lead in the race to commercialise AI products, but that alliance is under strain because of the ChatGPT maker’s desire to move to a for-profit model.

Anthropic was founded in 2021 by seven former OpenAI staff, including siblings Daniela and Dario Amodei, who left over ethical and safety concerns. It was initially a cloud computing customer before Amazon made a $1.25bn investment in September 2023.

The Amazon deal ensured Anthropic had a “reliable source of compute and investment” at a time when Microsoft was locked into an agreement with OpenAI that would have precluded it from acting as a partner, according to one of the Seattle-based group’s executives.

In June, Amazon outlined the scale of its first site for “Project Rainier”, a large-scale data centre programme that will help meet Anthropic’s computing demands. Filled with the cloud provider’s Trainium2 chips, the facilities in New Carlisle, Indiana, will draw 2.2 gigawatts of power when completed, far surpassing the scale of Oracle’s ambitious 1.2GW campus for OpenAI in Abilene, Texas.

Amazon detailed at least $11bn in investment for a cluster of 16 data centres in Indiana last year, but plans for the site have since doubled.

Mike Krieger, Anthropic’s chief product officer, said it had worked “really closely” with Amazon to ensure that the Big Tech group’s Trainium2 chips were suitable for its models. “The ability to have Amazon, who is developing their own chips and has the knowhow and expertise, open to our requirements, is massive,” he said.

The two companies are already discussing plans for future sites attached to Project Rainier. “The goal is to always be way ahead of what your customers are going to need,” said David Brown, vice-president of compute at Amazon Web Services. “I call it the illusion of infinite capacity.”

While Amazon is developing its own in-house foundation models, it has sought closer ties to Anthropic than Google has; the search group is focused on building its own powerful family of AI models, Gemini.

The “fair value” of Amazon’s investment in Anthropic is about $13.8bn, according to regulatory filings. Its backing came in the form of convertible notes, with only a portion turned into equity so far.

Both tech giants’ stakes are capped well below a third of Anthropic. Neither has voting rights, board seats or board observer seats. Google owns roughly 14 per cent, according to legal filings.

Anthropic’s most recent equity valuation is $61.5bn, set by investors in March, according to PitchBook.

Amazon has made other investments in AI companies, including Hugging Face and Scale AI, but Anthropic is its third-largest investment to date behind MGM Studios and Whole Foods Market.

Executives at the Seattle-based group are confident that the partnership with Anthropic will prove more robust than Microsoft’s with OpenAI, as the start-up was structured as a public benefit corporation rather than a non-profit. Investors hold equity, unlike at OpenAI, where they are beholden to a complex profit-share agreement.

Anthropic has previously said that it is “not owned or dominated by a single tech giant” and has chosen to remain independent.

Yet Amazon has manoeuvred itself to be named Anthropic’s primary cloud and training partner.

The model builder counts on Amazon’s data centres and its specialised Trainium semiconductor chips to develop and deploy large language models. However, Anthropic also uses Google’s custom AI accelerator chip — a Tensor Processing Unit (TPU) called Trillium.

Claude, meanwhile, is embedded in Amazon products such as its improved digital voice assistant Alexa+ and streaming service Prime Video.

One Anthropic investor said Amazon’s salespeople promoted the start-up’s Claude series of models to cloud computing customers far more actively than Google did.

“Google pushes Gemini in every interaction, despite backing Anthropic. They will sell Gemini at every opportunity,” added the start-up’s investor. “Amazon’s default is to sell Claude.”

Google has previously said that more than 4,000 customers used Anthropic’s models on its cloud platform. The search giant declined to comment.

Atul Deo, director of Amazon Bedrock, the company’s AI app development platform, said that the company was cautious about preferring a single AI partner. “Forcing something on customers is not a good strategy,” he said, noting that an alternative provider’s models could soon be in demand.

But Kate Jensen, Anthropic’s head of revenue, said that the two companies pitched to potential customers together. “We sit down and say, you’ve already trusted Amazon with your data,” she said. “You need the world’s best model.”

Anthropic has an annual revenue run rate of more than $4bn, according to people familiar with the matter, a sliver of the $107bn AWS generated in the 2024 fiscal year.

Amazon’s decision to invest in training its own AI models, however, remains a risk for Anthropic, which relies on the tech giant to provide a robust pipeline of corporate customers, its main source of revenue.

David Luan, a former OpenAI executive, is leading the cloud provider’s pursuit of artificial general intelligence — systems that surpass human abilities — and his team has built what the company describes as “dependable AI agents” that have benchmarked better than Anthropic’s equivalent.

“There are benefits and some drawbacks to the way the relationship is structured but at the end of the day Anthropic look to us to solve a lot of their problems,” added one Amazon executive.




Childproofing the internet is a bad idea


The writer is senior fellow in technology policy at the Cato Institute and adjunct professor at George Mason University’s Antonin Scalia Law School

Last month, the US Supreme Court upheld a Texas law that requires verification of a user’s age when visiting websites with pornographic content. It joins the UK’s Online Safety Act and Australia’s ban on social media use by under-16s as the latest measure aimed at keeping young people safe online.

While protecting children is the well-intentioned motivation for these laws, they are a blunt instrument applied to a nuanced problem. Instead of simply safeguarding minors, they are creating new privacy risks. 

The only way to prove that someone is not underage is to prove that they are over a certain age. This means that Texas’s requirement for verification applies not only to children and teenagers but to adult internet users too.

While the Supreme Court decision tries to limit its application to specific types of content and compares this to offline verification methods, it ignores some key differences.

First, uploading data such as a driving licence to verify age on a website is a far more involved and lasting interaction than quickly showing the same ID to an assistant when purchasing alcohol or other age-restricted products in a store.

In some cases, laws require websites and apps to keep user information for a certain amount of time. Such a trove of data can be lucrative to nefarious hackers. It can also put individuals at risk of having sensitive information about their online behaviour exposed.

Second, adults who do not have government-issued ID will be prevented from looking at internet content that they have a constitutional right to access. This is not the same as restricting offline purchases. Lack of an ID to buy alcohol does not prevent anyone from accessing information.

Advocates for verification proposals often point to alternatives that can estimate a person’s age without official ID. Biometrics can be used to assess age via a photo uploaded online. Financial or internet histories can be checked. But these alternatives are also invasive. And age estimates via photographs tend to be less accurate for certain groups of people, including those with darker skin tones.

Despite these trade-offs, age-verification proposals keep popping up around the world. And the problems they are trying to solve span an extremely wide range: the concerns of policymakers and parents run from the amount of time young people spend online to their exposure to certain types of content, including pornography, depictions of eating disorders, bullying and self-harm.

Today’s young people do have access to more information than any generation before them. And while this can provide many benefits, it can also cause worries about the ease with which they can access harmful content.

But age verification requirements risk blocking content beyond pornography. They can unintentionally restrict access to important information about sexual health and sexuality too. Additionally, the requirements for ID could make young people less safe online by requiring more detailed information — laying them open to exploitation. As with information taken from adults, this could create a honeypot of data about their online presence. They would face new risks caused by the very provisions intended to make them safer.

While age verification laws appear well intentioned, they will create new privacy pitfalls for all internet users.

Keeping children and teenagers safe online is a problem that is best solved by parents, not policymakers.

Empowering young people to have difficult conversations and make smart choices online will provide a wider range of options to solve the problem without sacrificing privacy in the process.




EU pushes ahead with AI code of practice


The EU has unveiled its code of practice for general purpose artificial intelligence, pushing ahead with its landmark regulation despite fierce lobbying from the US government and Big Tech groups.

The final version of the code, which helps explain rules that are due to come into effect next month for powerful AI models such as OpenAI’s GPT-4 and Google’s Gemini, includes copyright protections for creators and potential independent risk assessments for the most advanced systems.

The EU’s decision to push forward with its rules comes amid intense pressure from US technology groups as well as European companies over its AI act, considered the world’s strictest regime for regulating the fast-developing technology.

This month the chief executives of large European companies including Airbus, BNP Paribas and Mistral urged Brussels to introduce a two-year pause, warning that unclear and overlapping regulations were threatening the bloc’s competitiveness in the global AI race.

Brussels has also come under fire from the European parliament and a wide range of privacy and civil society groups over moves to water down the rules from previous draft versions, following pressure from Washington and Big Tech groups. The EU had already delayed publishing the code, which was due in May.

Henna Virkkunen, the EU’s tech chief, said the code was important “in making the most advanced AI models available in Europe not only innovative, but also safe and transparent”.

Tech groups will now have to decide whether to sign the code, and it still needs to be formally approved by the European Commission and member states.

The Computer & Communications Industry Association, whose members include many Big Tech companies, said the “code still imposes a disproportionate burden on AI providers”.

“Without meaningful improvements, signatories remain at a disadvantage compared to non-signatories, thereby undermining the commission’s competitiveness and simplification agenda,” it said.

As part of the code, companies will have to commit to putting in place technical measures that prevent their models from generating content that reproduces copyrighted material.

Signatories also commit to testing their models for risks laid out in the AI act. Companies that provide the most advanced AI models will agree to monitor their models after they have been released, including giving external evaluators access to their most capable models. But the code does give them some leeway in identifying risks their models might pose.

Officials within the European Commission and in different European countries have been privately discussing streamlining the complicated timeline of the AI act. While the legislation entered into force in August last year, many of its provisions will only come into effect in the years to come. 

European and US companies are putting pressure on the bloc to delay upcoming rules on high-risk AI systems, such as those that include biometrics and facial recognition, which are set to come into effect in August next year.




Humans must remain at the heart of the AI story


The writer is co-founder, chair and CEO of Salesforce

The techno-atheists like to tell a joke.

They imagine the moment AI fully awakens and is asked, “Is there a God?” 

To which the AI replies: “There is now.”

The joke is more than just a punchline. It’s a warning that reveals something deeper: the fear that as AI begins to match human intelligence, it will no longer be a tool for humanity but our replacement.

AI is the most transformative technology in our lifetime, and we face a choice. Will it replace us, or will it amplify us? Is our future going to be scripted by autonomous algorithms in the ether, or by humans?

As the CEO of a technology company that helps customers deploy AI, I believe this revolution can usher in an era of unprecedented growth and impact. 

At the same time, I believe humans must remain at the centre of the story. 

AI has no childhood, no heart. It does not love, does not feel loss, does not suffer. And because of that, it is incapable of expressing true compassion or understanding human connection.

We do. And that is our superpower. It’s what inspires the insights and bursts of genius behind history’s great inventions. It’s what enables us to start businesses that solve problems and improve the world.

Intelligent AI agents — systems that learn, act and make decisions on our behalf — can enhance human capabilities, not displace them. The real magic lies in partnership: people and AI working together, achieving more than either could alone.

We need that magic now more than ever. Look at what we ask of doctors and nurses. Of teachers. Of soldiers. Of managers and frontline employees. Everywhere we turn, people are overwhelmed by a tsunami of rising expectations and complexity that traditional systems simply can’t keep up with.

This is why AI, for all of its uncertainties, is not optional but essential.

Salesforce is already seeing AI drive sharply increased productivity in some key functions via our platform Agentforce. Agents managed by customer service employees, for example, are resolving 85 per cent of their incoming queries. In research and development, 25 per cent of net new code in the first quarter was AI-generated. This is freeing human teams to accelerate projects and deepen relationships with customers.

The goal is to rethink the system entirely to make room for a new kind of partnership between people and machines — weaving AI into the fabric of business.

This doesn’t mean there won’t be disruption. Jobs will change, and as with every major technological shift, some will go away — and new ones will emerge. At Salesforce, we’ve experienced this first-hand: our organisation is being radically reshaped. We’re using this moment to step back in some areas — pausing much of our hiring in engineering, for example — and hiring in others. We’ve redeployed thousands of employees — one reason 51 per cent of our first-quarter hires were internal.

History tells us something important here. From the printing press to the personal computer, innovation has transformed the nature of work — and in the long run created more of it. AI is already generating new kinds of roles. Our responsibility is to guide this transition responsibly: by breaking jobs down into skills, mapping those skills to the roles of the future, and helping people move into work that’s more meaningful and fulfilling.

There’s a novel I often recommend: We Are Legion (We Are Bob) by Dennis E. Taylor. The story follows software engineer Bob Johansson, who preserves his brain and re-emerges more than 100 years after his death as a self-replicating digital consciousness. A fleet of AI “Bobs” launches across the galaxy. The book asks the question: if we reduce ourselves to code — endlessly efficient, endlessly duplicable — what do we lose? What becomes of the messy, mortal, deeply human experiences that give life meaning?

If we accept the idea that AI will take our place, we begin to write ourselves out of the future — passengers in a rocket we no longer steer. But if we choose to guide and partner with it, we can unlock a new era of human potential.

One path leads to cold, disconnected non-human intelligence. The other points to a future where AI is designed to elevate our humanity — deeper connection, imagination and empathy.

AI is not destiny. We must choose wisely. We must design intentionally. And we must keep humans at the centre of this revolution.


