
Tools & Platforms

Tech Giants Push Policy Power



A group of tech leaders and artificial intelligence companies announced the creation of Leading the Future (LTF), a new organization designed to, in its words, “ensure the United States remains the global leader in AI by advancing a clear, high-level policy agenda at the federal and state levels and serving as the political and policy center of gravity for the AI industry.” The industry is no longer content to shape policy through think tanks, white papers, and voluntary commitments. It is building a political influence infrastructure.

Who Is Behind LTF

The coalition includes powerful venture capital firms like Andreessen Horowitz and investors such as Ron Conway (one of Silicon Valley’s super angels, with early investments in Facebook, Google, Airbnb and Reddit), Joe Lonsdale (Palantir cofounder and an early executive at Clarium Capital, Peter Thiel’s hedge fund), Greg Brockman (OpenAI cofounder and current president) and his wife Anna Brockman. Even though the announcement is short on specific names, it indicates participation from leading firms, including Perplexity.

Their motivations are clear: promote policies that advance the economic benefits of the technology, and oppose efforts seen as limiting or delaying its development in the US. They frame the stakes in AI as not only commercial but also geopolitical. With Washington and Beijing locked in a struggle over compute power, export controls, and data supply chains, tech leaders want a direct line into state capitals and the halls of Congress.

Earlier lobbying by the internet sector focused on shaping policy through public campaigns, with companies portraying themselves as defenders of users, internet freedom, or innovation. They often leaned on trade associations. By contrast, LTF brands itself as an independent political entity. The initiative is a well-funded, centralized advocacy effort positioned to shape the future direction of tech policy in the country. It resembles historical efforts in business, food, tobacco, pharma, and other sectors, where well-coordinated lobbying and electioneering secured favorable outcomes.

Lessons from Web 2.0

This is not the first time Silicon Valley has built influence in Washington. In the late 2000s, as regulators debated privacy, antitrust, and liability protections, internet companies expanded their lobbying spend. Google went from negligible activity in the early 2000s to being among the top corporate lobbyists by the early 2010s. Facebook followed suit, building networks of state and federal lobbyists while fighting attempts to tighten rules on data collection.

Those efforts were defensive, aimed at forestalling oversight that might slow growth. Silicon Valley’s attitude toward Washington during the Web 2.0 era was generally one of avoidance, reflecting tech leaders’ preference for minimal governance and free-market growth. Most companies neglected formal lobbying until faced with scrutiny, potential regulation, or a crisis. The relationship was characterized by mutual unfamiliarity: many in DC underestimated the tech sector’s potential impact on policy, while tech companies believed they could bypass government oversight by focusing solely on innovation.

By contrast, LTF presents itself as offensive: it wants to shape an affirmative agenda and frame the policy debate itself.

Regulatory Capture and AI

Economists and legal scholars have long warned about the dangers of industries capturing the agencies tasked with regulating them. George Stigler, in his seminal 1971 essay The Theory of Economic Regulation, argued that “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit”. He introduced the concept of regulatory capture and shifted the understanding of regulation from the public interest model to a rational business choice. One of his insights was that companies often prefer regulatory control over subsidies. Rules that restrict entry, shape market structure, or favor complements can create more lasting advantage than direct government handouts.

Stephen Breyer, writing in Regulation and Its Reform (1982), documented the recurring pattern of regulatory failure in America: high costs, low returns, procedural gridlock, and unpredictability. Cass Sunstein added a twist in his 1990 essay Paradoxes of the Regulatory State: sometimes well-intentioned regulation backfires, producing the opposite of its intended effect.

Silicon Valley Bank’s 2023 collapse, the second-largest bank failure in U.S. history, resulted from risky management, overinvestment in long-term bonds that lost value as rates rose, and a rapid $42 billion bank run. The crisis is an example of how regulatory capture and policy changes, like the post-2018 rollback of Dodd-Frank provisions, can backfire. The regulatory failures on display included delayed and insufficient oversight under the weakened post-2018 rules, procedural gridlock, and unpredictability.

These perspectives suggest that as AI evolves, the risk is not just over- or under-regulation, but that industry itself will be the architect of the rules. AI offers fertile ground for capture. The technology is complex, opaque, and evolving quickly. Regulators often lack the expertise or resources to challenge the claims of leading labs. This creates an asymmetry: the firms that dominate model training are also those most capable of defining the safety benchmarks, compliance metrics, and standards of responsible AI.

Money in Politics Today

The timing of LTF’s launch is no accident. The Supreme Court’s Citizens United decision in 2010 opened the door to unlimited corporate spending on political speech through Super PACs and 501(c)(4) “social welfare” groups. These entities can raise and spend vast sums, often with limited transparency. Tech leaders are familiar with these vehicles, and crypto companies used them aggressively in the 2024 election cycle.

By creating LTF as a political hub, the sector signals it intends to play at the same level as defense contractors, pharmaceutical giants, and oil companies. The group can funnel money into congressional races, shape ballot initiatives, and build permanent influence networks. And because AI touches multiple policy domains—national security, labor, education, healthcare—the scope of lobbying is potentially broader than any prior technology sector campaign.

The sums at stake are enormous. Training frontier models requires billions of dollars in chips and energy. Securing government contracts for AI in defense, intelligence, and healthcare could yield recurring revenue streams. In this context, spending hundreds of millions on political influence is rational, and perhaps necessary, for firms seeking to entrench their market position.

Possible Futures for AI Policy

The creation of LTF raises the question: Is AI governance going to follow a pattern of capture, or can policymakers create structures to resist it?

On one path, industry sets the rules. Companies use their clout to define the pathways that align with their business models. They shape federal preemption laws that limit state experimentation. They fund think tanks and university programs that validate their frameworks. This would mirror what Stigler described as the normal course of regulation: industries acquiring and shaping the state’s coercive power for their own benefit.

On another path, policymakers build more resilient institutions. Breyer’s framework suggests starting with clear objectives, examining alternative methods, and choosing the least intrusive regulatory form. Sunstein warns against paradoxes, where well-meaning but rigid rules lead to enforcement paralysis. Applied to AI, this means balancing innovation with safeguards, ensuring that agencies have the expertise to evaluate claims, and creating accountability mechanisms that cannot be dominated by a handful of firms.

Will AI policy become another case study in capture or a demonstration that democratic institutions can adapt to a general-purpose technology? From railroads to telecoms to energy, industries with concentrated wealth and technical expertise have usually succeeded in bending rules to their favor. But AI also raises existential concerns, from misinformation to labor disruption to military use, that broaden the coalition demanding oversight.

The launch of Leading the Future formalizes what had been implicit: AI is not just a technological race but also a contest over policy and influence. The outcome will depend on whether policymakers heed the lessons of Breyer, Stigler, and Sunstein or repeat the familiar cycle of regulation designed by and for the regulated.

Money will play a decisive role, as it always has in American politics. But the stakes in AI are larger than market share.





Anthropic’s Claude restrictions put overseas AI tools backed by China in limbo



An abrupt decision by American artificial intelligence firm Anthropic to restrict service to Chinese-owned entities anywhere in the world has cast uncertainty over some Claude-dependent overseas tools backed by China’s tech giants.

After Anthropic’s notice on Friday that it would upgrade access restrictions to entities “more than 50 per cent owned … by companies headquartered in unsupported regions” such as China, regardless of where they are, Chinese users have fretted over whether they could still access the San Francisco-based firm’s industry-leading AI models.

While it remains unknown how many entities could be affected and how the restrictions would be implemented, anxiety has started to spread among some users.


Singapore-based Trae, an AI-powered code editor launched by Chinese tech giant ByteDance for overseas users, is a known user of OpenAI’s GPT and Anthropic’s Claude models. A number of users of Trae have raised the issue of refunds to Trae staff on developer platforms over concerns that their access to Claude would no longer be available.

Dario Amodei, CEO and cofounder of Anthropic, speaks at the International Network of AI Safety Institutes in San Francisco, November 20, 2024. Photo: AP

A Trae manager responded by saying that Claude was still available, urging users not to consider refunds “for the time being”. The company had just announced a premium “Max Mode” on September 2, which boasted access to significantly more powerful coding abilities “fully supported” by Anthropic’s Claude models.

Other Chinese tech giants offer Claude on their coding agents marketed to international users, including Alibaba Group Holding’s Qoder and Tencent Holdings’ CodeBuddy, which is still being beta tested. Alibaba owns the South China Morning Post.

ByteDance and Trae did not respond to requests for comment.

Amid the confusion, some Chinese AI companies have taken the opportunity to woo disgruntled users. Start-up Z.ai, formerly known as Zhipu AI, said in a statement on Friday that it was offering special offers to Claude application programming interface users to move over to its models.

Anthropic’s decision to restrict access to China-owned entities is the latest evidence of an increasingly divided AI landscape.

In China, AI applications and tools for the domestic market are almost exclusively based on local models, as the government has not approved any foreign large language model for Chinese users.

Anthropic faced pressure to take action as a number of Chinese companies have established subsidiaries in Singapore to access US technology, according to a report by The Financial Times on Friday.

Anthropic’s flagship Claude AI models are best known for their strong coding capabilities. The company’s CEO Dario Amodei has repeatedly called for stronger controls on exports of advanced US semiconductor technology to China.

Anthropic completed a US$13 billion funding round in the past week that tripled its valuation to US$183 billion. On Wednesday, the company said its software development tool Claude Code, launched in May, was generating more than US$500 million in run-rate revenue, with usage increasing more than tenfold in three months.

The firm’s latest Claude Opus 4.1 coding model achieved an industry-leading score of 74.5 per cent on SWE-bench Verified – a human-validated subset of the large language model benchmark, SWE-bench, that is supposed to more reliably evaluate AI models’ capabilities.

This article originally appeared in the South China Morning Post (SCMP). Copyright © 2025 South China Morning Post Publishers Ltd. All rights reserved.









‘Please join the Tesla silicon team if you want to…’: Elon Musk offers job as he announces ‘epic’ AI chip



Elon Musk has announced a major step forward for Tesla‘s chip development, confirming a ‘great design review’ for the company’s AI5 chip. The CEO made the announcement on X, signaling Tesla’s intensified push into custom semiconductors amid fierce global competition, and also offered jobs to engineers willing to join Tesla’s silicon team.

According to Musk, the AI5 chip is set to be ‘epic,’ and the upcoming AI6 has a shot at being the best AI chip by far. “Just had a great design review today with the Tesla AI5 chip design team! This is going to be an epic chip. And AI6 to follow has a shot at being the best by AI chip by far,” Musk said in a post on X.

Musk revealed that Tesla’s silicon strategy has been streamlined: the company is moving from developing two separate chip architectures to focusing all of its talent on just one. “Switching from doing 2 chip architectures to 1 means all our silicon talent is focused on making 1 incredible chip. No-brainer in retrospect,” he wrote.

Job at Tesla chipmaking team

In a call for new talent, Musk invited engineers to join the Tesla silicon team, emphasising the critical nature of their work. He noted that they would be working on chips that “save lives” where “milliseconds matter.”

Earlier this year, Tesla signed a major chip supply agreement with Samsung Electronics, reportedly valued at $16.5 billion. The deal is set to run through the end of 2033. Musk confirmed the partnership, stating that Samsung has agreed to allow “full customisation of Tesla-designed chips.” He also revealed that Samsung’s newest fabrication plant in Texas will be dedicated to producing Tesla’s next-generation AI6 chipset.

This contract is a significant win for Samsung, which has reportedly been facing financial struggles and stiff competition in the chip manufacturing market.









“Our technology enables the creation of the digital leaders of the future”



“Our cloud enables us to create the leaders of the future,” said Kevin Cochrane, Chief Marketing Officer at Vultr, at the Calcalist AI Conference in collaboration with Vultr.

Vultr provides companies with cloud infrastructure that gives them access to the computing power needed for artificial intelligence, including Nvidia graphics processing units (GPUs), the most sought-after processors in the world for training and running AI models. These processors are expensive and in short supply, making them difficult for startups, particularly early-stage companies, to acquire. Vultr’s platform allows companies to use these processors without purchasing them outright.

“We have a commitment to the entire ecosystem,” said Cochrane. “We launched our platform for developers so they can work locally but reach the whole world. We enable the creation of digital leaders, the building of a new future, and an AI infrastructure that is unparalleled, giving companies a significant advantage. Enterprises are adopting AI at a remarkable pace. All Fortune 500 companies are emphasizing AI implementation. Our research shows a huge demand for AI applications at scale. Any entrepreneur can launch new initiatives, and we provide cloud infrastructure with full support for an open ecosystem without restrictions.”

Cochrane added, “New AI models will be central to the future world, and we are here to help build it. Our cloud can manage all needs locally in Tel Aviv while distributing globally. It must be simple, accessible to every developer, and affordable for startups so that resources can go to innovation. We believe in flexible freedom of choice for selecting your ecosystem.”

“Today, all AI processors are dominated by Snowflake,” he said. “The world must be open to every developer. We offer a pricing structure that won’t break the bank, allowing money to go into building new solutions. Our prices are significantly lower than any other hyperscale cloud. As a global NVIDIA partner, we provide flexibility in choosing the GPU that best suits your performance needs.”

“A free and open ecosystem is essential,” concluded Cochrane. “We are here to make that possible. Through us, developers can experiment and find what works best for them. The journey is just beginning.”




