

Amazon weighs further investment in Anthropic to deepen AI alliance



Amazon is weighing another multibillion-dollar investment in Anthropic to deepen a strategic alliance that the tech companies believe will provide an edge in the global competition to profit from artificial intelligence.

The Seattle-based cloud and ecommerce group has discussed plans to extend beyond the $8bn it has already ploughed into the San Francisco-based AI model builder, according to multiple people with knowledge of the talks.

A new deal would further a relationship that — according to interviews with more than a dozen Amazon and Anthropic executives, board members and investors — has become vital to both their futures.

The investment would ensure Amazon remains one of Anthropic’s largest shareholders as it seeks to position itself ahead of Google, which has also invested more than $3bn, while providing a bulwark against a similar multibillion-dollar partnership between Microsoft and OpenAI.

It would also deepen ties as the pair collaborate on one of the world’s largest data centre projects and team up on sales of Anthropic’s technology to Amazon’s cloud computing customers.

“We quickly realised that we had many shared goals that were fundamentally critical,” said Dan Grossman, vice-president of worldwide corporate development at Amazon. “The size of the [existing investment] represents our ambition.”

The strategy of close alignment comes with risks. Microsoft’s $14bn investment in OpenAI helped the duo take an early lead in the race to commercialise AI products, but that alliance is under strain because of the ChatGPT maker’s desire to move to a for-profit model.

Anthropic was founded in 2021 by seven former OpenAI staff, including siblings Daniela and Dario Amodei, who left over ethical and safety concerns. It was initially a cloud computing customer before Amazon made a $1.25bn investment in September 2023.

The Amazon deal ensured Anthropic had a “reliable source of compute and investment” at a time when Microsoft was locked into an agreement with OpenAI that would have precluded it from acting as a partner, according to one of the Seattle-based group’s executives.

In June, Amazon outlined the scale of its first site for “Project Rainier”, a large-scale data centre programme that will help meet Anthropic’s computing demands. Filled with the cloud provider’s Trainium2 chips, the facilities in New Carlisle, Indiana will draw 2.2 gigawatts of power when completed, far surpassing the scale of Oracle’s ambitious 1.2GW campus for OpenAI in Abilene, Texas.

Amazon detailed at least $11bn in investment for a cluster of 16 data centres in Indiana last year, but plans for the site have since doubled.

Mike Krieger, Anthropic’s chief product officer, said it had worked “really closely” with Amazon to ensure that the Big Tech group’s Trainium2 chips were suitable for its models. “The ability to have Amazon, who is developing their own chips and has the knowhow and expertise, open to our requirements, is massive,” he said.

The two companies are already discussing plans for future sites attached to Project Rainier. “The goal is to always be way ahead of what your customers are going to need,” said David Brown, vice-president of compute at Amazon Web Services. “I call it the illusion of infinite capacity.”

While Amazon is developing its own in-house foundation models, it has sought closer ties to Anthropic than Google has; the search group remains focused on building its own powerful AI models, called Gemini.

The “fair value” of Amazon’s investment in Anthropic is about $13.8bn, according to regulatory filings. Its backing came in the form of convertible notes, with only a portion turned into equity so far.

Both tech giants’ stakes are capped to keep them well below a third of Anthropic, and neither has voting rights, board seats or board observer seats. Google owns roughly 14 per cent, according to legal filings.

Anthropic’s most recent equity valuation is $61.5bn, set by investors in March, according to PitchBook.

Amazon has made other investments in AI companies, including Hugging Face and Scale AI, but Anthropic is its third-largest investment to date behind MGM Studios and Whole Foods Market.

Executives at the Seattle-based group are confident that the partnership with Anthropic will prove more robust than Microsoft’s with OpenAI, as the start-up was structured as a public benefit corporation rather than a non-profit. Its investors hold equity, unlike with OpenAI, where they are beholden to a complex profit-share agreement.

Anthropic has previously said that it is “not owned or dominated by a single tech giant” and has chosen to remain independent.

Yet, Amazon has manoeuvred itself to be named Anthropic’s primary cloud and training partner.

The model builder counts on Amazon’s data centres and its specialised Trainium semiconductor chips to develop and deploy large language models. However, Anthropic also uses Google’s custom AI accelerator chip — a Tensor Processing Unit (TPU) called Trillium.

Claude, meanwhile, is embedded in Amazon products such as its improved digital voice assistant Alexa+ and streaming service Prime Video.

One Anthropic investor said Amazon’s salespeople promoted the start-up’s Claude series of models to cloud computing customers more actively than Google’s did.

“Google pushes Gemini in every interaction, despite backing Anthropic. They will sell Gemini at every opportunity,” added the start-up’s investor. “Amazon’s default is to sell Claude.”

Google has previously said that more than 4,000 customers used Anthropic’s models on its cloud platform. The search giant declined to comment.

Atul Deo, director of Amazon Bedrock, the company’s AI app development platform, said that the company was cautious about preferring a single AI partner. “Forcing something on customers is not a good strategy,” he said, noting that an alternative provider’s models could soon be in demand.

But Kate Jensen, Anthropic’s head of revenue, said that the two companies pitched to potential customers together. “We sit down and say, you’ve already trusted Amazon with your data,” she said. “You need the world’s best model.”

Anthropic has an annual revenue run rate of more than $4bn, according to people familiar with the matter, a sliver of the $107bn AWS generated in the 2024 fiscal year.

Amazon’s decision to invest in training its own AI models, however, remains a risk for Anthropic, which relies on the tech giant for a robust pipeline of corporate customers, its main source of revenue.

David Luan, a former OpenAI executive, is leading the cloud provider’s pursuit of artificial general intelligence — systems that surpass human abilities — and his team has built what the company describes as “dependable AI agents” that have benchmarked better than Anthropic’s equivalent.

“There are benefits and some drawbacks to the way the relationship is structured but at the end of the day Anthropic look to us to solve a lot of their problems,” added one Amazon executive.





Elon Musk is still the Tesla wild card




Here we go again. That must have been the first thought on the minds of many Tesla shareholders this week as Elon Musk waded back into the political fray, declaring his intention to launch a third party to rival the Republicans and Democrats.

It is less than two months since Musk’s moonlighting for Donald Trump’s administration led a group of Tesla shareholders to call for their chief executive to devote at least 40 hours a week to his day job, and the latest distraction wiped 7 per cent from the stock price on Monday. Musk was unmoved. When one analyst suggested the board should tie his pay to the time he spends at work, Musk told him to “shut up”.

But at a time when Tesla is facing sagging sales and mounting competition, anxiety is on the rise and activists are again urging the company’s board to hold its CEO to account. The financial squeeze has raised a question over the carmaker’s heavy investments: despite a severe cut to capital spending in the latest quarter, free cash flow still amounted to only about half its quarterly average over the previous three years.

Viewed through the lens of the company’s stock price, however, Tesla’s shareholders would seem to have little reason to feel blue. True, much of the euphoria that pumped up the shares following Trump’s re-election has leaked away. But they are still up 15 per cent since the election, handily outperforming the wider market. Tesla’s market cap still dwarfs the rest of the car industry, even though it only accounts for about 2 per cent of global auto sales.

The Musk effect still underpins Tesla’s market cap. The shareholders who have pumped up its stock price are fixated on the technology future that he has conjured up, not the electric car business that is the company’s bread and butter today.

Morgan Stanley, for instance, estimated Tesla’s auto business accounts for less than a fifth of the company’s potential value. Most of the rest depends on its cars achieving full autonomy: after that, it can start to rake in fees from running a network of robotaxis, while also cashing in on the software and services the company’s customers will use once they no longer need to keep their attention on the road.

Full autonomy has been a long time coming. It is nine years since Musk first laid out his robotaxi plans. But he knows how to keep the futuristic vision alive — and make it one that only he can deliver. This week, for instance, he promised that Grok, the large language model from another of his companies, xAI, would soon be embedded in Tesla vehicles — a taste of things to come, when artificial intelligence transforms the experience in robot cars.

Could anyone else persuade investors to suspend their scepticism for so long? The huge Musk premium in Tesla’s shares is an extreme version of Silicon Valley founder syndrome, the belief that only a company’s founder has the vision, and the authority, to pursue truly groundbreaking new ideas (Musk wasn’t around at Tesla’s actual founding, though he was an early investor and became a member of the board soon after). 

Rubbing more salt into the wounds of shareholder activists this week was the revelation that Tesla had failed to meet a legal requirement to hold its annual shareholder meeting on time. The event will now take place in November, nearly four months late.

For boardroom experts such as Nell Minow, who have long complained about Musk’s approach to governance and the response of Tesla’s board, this amounted to open contempt for normal corporate transparency: “This is one where he’s really backed himself into a corner. The requirements are very clear.”

Before news of his plans for a third party broke, Musk told Tesla shareholders that he would give the company much more of his attention. But there are other things that Tesla’s directors could be doing to assuage investors’ worries. One would be to work with him to rebuild Tesla’s executive ranks, which were depleted by another senior departure last week, as well as laying out a long-term succession plan.

Another would be to solve the mess caused by a Delaware court’s rejection of Musk’s $56bn stock compensation plan. Musk has warned he might lose interest in Tesla if he is not given a larger ownership stake.

Who knows, maybe Tesla’s directors could manage to organise annual meetings on time in future. The one thing they will probably never do, though, is prevent their CEO from blindsiding his own shareholders the next time he gets carried away with an idea that has nothing to do with electric cars.

richard.waters@ft.com





Childproofing the internet is a bad idea




The writer is senior fellow in technology policy at the Cato Institute and adjunct professor at George Mason University’s Antonin Scalia Law School

Last month, the US Supreme Court upheld a Texas law that requires verification of a user’s age when visiting websites with pornographic content. It joins the UK’s Online Safety Act and Australia’s ban on social media use by under-16s as the latest in a series of measures aimed at keeping young people safe online.

While protecting children is the well-intentioned motivation for these laws, they are a blunt instrument applied to a nuanced problem. Instead of simply safeguarding minors, they are creating new privacy risks. 

The only way to prove that someone is not underage is to prove that they are over a certain age. This means that Texas’s requirement for verification applies not only to children and teenagers but to adult internet users too.

While the Supreme Court decision tries to limit its application to specific types of content and compares this to offline verification methods, it ignores some key differences.

First, uploading data such as a driving licence to verify age on a website is a far more involved and lasting interaction than quickly showing the same ID to an assistant when purchasing alcohol or other age-restricted products in a store.

In some cases, laws require websites and apps to keep user information for a certain amount of time. Such a trove of data can be lucrative to nefarious hackers. It can also put individuals at risk of having sensitive information about their online behaviour exposed.

Second, adults who do not have government-issued ID will be prevented from looking at internet content that they have a constitutional right to access. This is not the same as restricting offline purchases. Lack of an ID to buy alcohol does not prevent anyone from accessing information.

Advocates for verification proposals often point to alternatives that can estimate a person’s age without official ID. Biometrics can be used to assess age via a photo uploaded online. Financial or internet histories can be checked. But these alternatives are also invasive. And age estimates via photographs tend to be less accurate for certain groups of people, including those with darker skin tones.

Despite these trade-offs, age-verification proposals keep popping up around the world, and the problems they are trying to solve span an extremely wide range. The concerns of policymakers and parents run from the amount of time young people spend online to their exposure to certain types of content, including pornography, depictions of eating disorders, bullying and self-harm.

Today’s young people do have access to more information than any generation before them. And while this can provide many benefits, it can also cause worries about the ease with which they can access harmful content.

But age verification requirements risk blocking content beyond pornography. They can unintentionally restrict access to important information about sexual health and sexuality too. Additionally, the requirements for ID could make young people less safe online by requiring more detailed information — laying them open to exploitation. As with information taken from adults, this could create a honeypot of data about their online presence. They would face new risks caused by the very provisions intended to make them safer.

While age verification laws appear well intentioned, they will create new privacy pitfalls for all internet users.

Keeping children and teenagers safe online is a problem that is best solved by parents, not policymakers.

Empowering young people to have difficult conversations and make smart choices online will provide a wider range of options to solve the problem without sacrificing privacy in the process.





EU pushes ahead with AI code of practice




The EU has unveiled its code of practice for general purpose artificial intelligence, pushing ahead with its landmark regulation despite fierce lobbying from the US government and Big Tech groups.

The final version of the code, which helps explain rules that are due to come into effect next month for powerful AI models such as OpenAI’s GPT-4 and Google’s Gemini, includes copyright protections for creators and potential independent risk assessments for the most advanced systems.

The EU’s decision to push forward with its rules comes amid intense pressure from US technology groups as well as European companies over its AI act, considered the world’s strictest regime for regulating the fast-developing technology.

This month the chief executives of large European companies including Airbus, BNP Paribas and Mistral urged Brussels to introduce a two-year pause, warning that unclear and overlapping regulations were threatening the bloc’s competitiveness in the global AI race.

Brussels has also come under fire from the European parliament and a wide range of privacy and civil society groups over moves to water down the rules from previous draft versions, following pressure from Washington and Big Tech groups. The EU had already delayed publishing the code, which was due in May.

Henna Virkkunen, the EU’s tech chief, said the code was important “in making the most advanced AI models available in Europe not only innovative, but also safe and transparent”.

Tech groups will now have to decide whether to sign the code, and it still needs to be formally approved by the European Commission and member states.

The Computer & Communications Industry Association, whose members include many Big Tech companies, said the “code still imposes a disproportionate burden on AI providers”.

“Without meaningful improvements, signatories remain at a disadvantage compared to non-signatories, thereby undermining the commission’s competitiveness and simplification agenda,” it said.

As part of the code, companies will have to commit to putting in place technical measures that prevent their models from generating output that reproduces copyrighted material.

Signatories also commit to testing their models for risks laid out in the AI act. Companies that provide the most advanced AI models will agree to monitor their models after they have been released, including giving external evaluators access to their most capable models. But the code does give them some leeway in identifying risks their models might pose.

Officials within the European Commission and in different European countries have been privately discussing streamlining the complicated timeline of the AI act. While the legislation entered into force in August last year, many of its provisions will only come into effect in the years to come. 

European and US companies are putting pressure on the bloc to delay upcoming rules on high-risk AI systems, such as those that include biometrics and facial recognition, which are set to come into effect in August next year.


