
Tools & Platforms

Chinese Tech Giant Leverages Open-Source AI to Boost Overseas Cloud Growth

TLDRs:

  • Tencent accelerates global cloud expansion, leveraging open-source AI models to meet rising overseas demand for localized AI solutions.
  • The company’s domestic cloud success provides scale and expertise for international growth, following years of rapid market share gains in China.
  • Competition remains fierce, with Tencent facing Alibaba, AWS, Microsoft, and new AI startups, but it banks on AI specialization.
  • Despite trading below its all-time high, Tencent’s diversified businesses and strong earnings forecasts are fueling investor confidence.

Tencent Holdings, one of China’s leading technology giants, is intensifying its push into global cloud computing by leaning heavily on its artificial intelligence capabilities.

According to Dowson Tong, head of Tencent’s cloud and smart industries group, the company’s international cloud business has become the fastest-growing segment of its operations, posting double-digit growth in recent years.

The strategy centers on exporting Chinese open-source AI models to overseas clients, where demand for localized solutions is rising. By providing infrastructure that allows markets to host large language models (LLMs) domestically, Tencent is positioning itself as a flexible and adaptive provider in a sector increasingly defined by AI-driven applications.

Building on Strong Domestic Foundation

Tencent’s international ambitions are rooted in a robust record of growth at home. The company has long been a dominant player in China’s cloud market, achieving rapid gains throughout the late 2010s.

For instance, in Q2 2019, Tencent’s cloud services achieved an impressive 88% year-on-year growth, securing its spot as China’s second-largest cloud provider, just behind Alibaba.

This domestic success has given Tencent both the technical expertise and the operational scale needed to compete on the global stage. China’s wider cloud industry, which grew 58% year-on-year to $2.3 billion in Q2 2019, provided Tencent with the ideal testing ground to refine its services before scaling internationally.

Competing in a Crowded Market

Tencent’s ambitions abroad come with steep competition. The company faces rivals not only from Chinese peers like Alibaba, Baidu, and ByteDance but also from global giants such as Amazon Web Services and Microsoft Azure. Emerging startups like DeepSeek are also entering the fray, adding new pressure to an already crowded market.



However, Tencent believes its edge lies in specialization. By exporting open-source AI models developed through years of experience in China’s highly competitive AI landscape, Tencent aims to offer something different from Western providers.

This positioning as an AI-first cloud provider may help the company bypass some of the infrastructure advantages enjoyed by its longer-established global rivals.

Investor Optimism Despite Valuation Discount

Beyond its operational strategy, Tencent’s market performance is drawing attention. In 2024, the company added more than US$150 billion in market value, though its share price remains 26% below its all-time high.

Analysts note that Tencent trades at a lower forward price-to-earnings ratio (17.6x) compared to global peers like Meta (22x) and Nintendo (40x).

Meanwhile, investors remain optimistic, especially as Tencent diversifies revenue streams. Upcoming earnings reports are expected to show an 11% rise in revenue, while new gaming titles such as Valorant Mobile and Delta Force could fuel growth in 2026.





Babylon Bee 1, California 0: Court Strikes Down Law Regulating Election-Related AI Content | American Enterprise Institute

By reducing traditional barriers of content creation, the AI revolution holds the potential to unleash an explosion in creative expression. It also increases the societal risks associated with the spread of misinformation. This tension is the subject of a recent landmark judicial decision, Babylon Bee v Bonta (hat tip to Ajit Pai, whose social media account remains an outstanding follow). The eponymous satirical news site and others challenged California’s AB 2839, which prohibited the dissemination of “materially deceptive” AI-generated audio or video content related to elections. Although the court recognized that the case presented a novel question about “synthetically edited or digitally altered” content, it struck down the law, concluding that the rise of AI does not justify a departure from long-standing First Amendment principles.

AB 2839 was California’s attempt to regulate the use of AI and other digital tools in election-related media. The law defined “materially deceptive” content as audio or visual material that has been intentionally created or altered so that a reasonable viewer would believe it to be an authentic recording. It applied specifically to depictions of candidates, elected officials, election officials, and even voting machines or ballots, where the altered content was “reasonably likely” to harm a candidate’s electoral prospects or undermine public confidence in an election. While the statute carved out exceptions for candidates making deepfakes of themselves and for satire or parody, those exceptions required prominent disclaimers stating that the content had been manipulated.


The court recognized that the electoral context raises the stakes for both parties. Because the law regulated speech on the basis of content, the court applied strict scrutiny: The law is constitutional only if it serves a compelling governmental interest and is the least restrictive means of protecting that interest. On the one hand, the court recognized that the state has a compelling interest in preserving the integrity of its election process. California noted how AI-generated robocalls purporting to be from President Biden encouraged New Hampshire voters not to go to the polls during the 2024 primary. But on the other hand, the Supreme Court has recognized that political speech occupies the “highest rung” of First Amendment protection. That tension is the opinion’s throughline: While elections justify significant regulation, they also demand the most protection for individual speech.

But it ultimately held that California struck the wrong balance. The state argued that the bill was a logical extension of traditional harmful-speech regulations such as defamation or fraud. But the court ruled that the law reached much further. It did not limit liability to instances of actual harm, extending instead to any content “reasonably likely” to cause material harm. And importantly, it did not limit recovery to actual candidate victims, but instead allowed any recipient of allegedly deceptive content to sue for damages. This private right of action deputized roving “censorship czars” across the state, whose malicious or politically motivated suits risk chilling a broad swath of political expression.

Given this breadth, the court found the law’s safe harbor was insufficient. The law exempted satirical content (such as that produced by the Bee) as long as it carried a disclaimer that the content was digitally manipulated, in accordance with the act’s formatting requirements. But the court found that this compelled disclosure was itself unconstitutional, as it drowned out the plaintiff’s message: “Put simply, a mandatory disclaimer for parody or satire would kill the joke.” This was especially true in contexts such as mobile devices, where the formatting requirements meant the disclaimer would take up the entire screen—a problem that I have discussed elsewhere in the context of Federal Trade Commission disclaimer rules.

Perhaps most importantly, the court recognized the importance of counter-speech and market solutions as alternative remedies to disinformation. It credited crowd-sourced fact-checking such as X’s community notes, and AI tools such as Grok, as scalable solutions already being adopted in the marketplace. And it noted that California could fund AI educational campaigns to raise awareness of the issue or form committees to combat disinformation via counter-speech.

The court’s emphasis on private, speech-based solutions points the way forward for other policymakers wrestling with deepfakes and other AI-generated disinformation concerns. Private, market-driven solutions offer a more constitutionally sound path than empowering the state to police truth and risk chilling protected expression. The AI revolution is likely to disrupt traditional patterns of content creation and dissemination in society. But fundamental First Amendment principles are sufficiently flexible to adapt to these changes, just as they have when presented with earlier disruptive technologies. When presented with problematic speech, the first line of defense is more speech—not censorship. Efforts at the state or federal level to regulate AI-generated content should respect this principle if they are to survive judicial scrutiny.





AI And Creativity: Hero, Villain – Or Something Far More Nuanced? – New Technology

As part of SXSW’s first-ever UK edition, Lewis Silkin brought together a packed room to hear five sharp minds – photographer-advocate Isabelle Doran, tech founder Guy Gadney, licensing entrepreneur Benjamin Woollams, and Commercial partners Laura Harper and Phil Hughes – wrestle with one deceptively simple question: is AI a hero or a villain in the creative world?

Spoiler: it’s neither. Over sixty fast-paced minutes, the panel dug into the real-world impact of generative models, the gaps in current law and the uneasy economics facing everyone from freelancers to broadcasters. We’ve distilled the conversation into six take-aways that matter to anyone who creates, commissions or monetises content.

1. Generative AI is already taking work – fast

“Generative AI is competing with creators in their place of work,” warned Isabelle Doran, citing her Association of Photographers’ latest survey. In September 2024, 30% of respondents had lost a commission to AI; five months later that figure hit 58%. The fallout runs wider than photographers. When a shoot is cancelled, stylists, assistants and post-production teams stand idle too – a ripple effect the panel believes policy-makers ignore at their peril.

2. Yet the tech also unlocks new forms of storytelling

Guy Gadney was quick to balance the gloom: “It’s a proper tsunami in the sense of the breadth and volume that’s changing,” he said, “but it also lets us ask what stories we can tell now that we couldn’t before.” His company, Charismatic AI, is building tools that let writers craft interactive narratives at a speed and scale unheard of two years ago. The opportunity, he argued, lies in marrying that capability with fair economic models rather than trying to “block the tide”.

3. The law isn’t a free-for-all – but it is fragmenting

Laura Harper cut through the noise: “The status quo at the moment is uncertain and it depends on what country you’re operating in.” In the UK, copyright can subsist in computer-generated works; in the US, it can’t. EU rules require commercial text-and-data miners to respect opt-outs; UK law doesn’t – yet. Add divergent notions of “fair use” and you get a regulatory patchwork that leaves creators guessing and investors hesitating.

4. Transparency is the missing link

Phil Hughes nailed the practical blocker: “We can’t build sensible licensing schemes until we know what data went into each model.” Without a statutory duty to disclose training sets, claims for compensation – or even informed consent – stall. Isabelle Doran backed him up, pointing to Baroness Kidron’s amendment that would force openness via the UK’s Data Act. The Lords have now sent that proposal back to the Commons five times; every extra week, more unlicensed works are scraped.

5. Collective licensing could spread the load

Individual artists can’t negotiate with OpenAI on equal terms, but Benjamin Woollams sees hope in a pooled approach. “Any sort of compensation is probably where we should start,” he said, arguing for collective rights management to mirror how music collecting societies handle radio play. At True Rights he’s developing pricing tools to help influencers understand usage clauses before they sign them – a practical step towards standardisation in a market famous for anything but.

6. Personality rights may be the next frontier

Copyright guards expression; it doesn’t stop a model cloning your voice, gait or mannerisms. “We need to strengthen personality rights,” Isabelle Doran urged, echoing calls from SAG-AFTRA and beyond. Think passing off on steroids: a legal shield for the look, sound and biometric data that make a performer unique. Laura Harper agreed – but reminded us that recognition is only half the battle. Enforcement mechanisms, cross-border by default, must follow fast.

Where does that leave us?

AI is not marching creators to the cliff edge, but it is forcing a reckoning. The panel’s consensus was clear:

  • We can’t uninvent generative tools – nor should we.

  • Creators deserve both transparency and a cut of the value chain.

  • Government must move quickly, or the UK risks watching leverage, investment and talent drift overseas.

As Phil Hughes put it in closing:

“We all know artificial intelligence has unlocked extraordinary possibilities across the creative industries. The question is whether we’re bold enough and organised enough to make sure those possibilities pay off for the people whose imagination feeds the machine.”

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.





Tampere University GPT-Lab launches AI project with City of Pori

GPT-Lab, part of Tampere University’s Faculty of Information Technology and Communication Sciences in Finland, has begun collaborating with the City of Pori Unemployment Services on the Generative Artificial Intelligence in Business Support (GENT) project.

The initiative will test how AI-driven automation can improve efficiency and reliability in public sector services.

According to a LinkedIn post from GPT-Lab, the kickoff meeting on 3 September brought together project researchers and city representatives to align objectives and set the project roadmap. The work will focus on automating routine inquiries and case handling to reduce staff workload, speed up responses to citizens, and free up time for tasks that require human expertise.

The project is designed to improve the efficiency, accessibility, and reliability of unemployment services. It will also provide a framework for the responsible use of AI in the public sector.

The GENT project, funded by the Satakuntaliitto Regional Council of Satakunta and led by Tampere University, runs until September 2026. Its broader aim is to bring generative AI expertise to companies and organizations in the Satakunta region. Researchers will work directly with businesses to co-create AI-assisted experiments that enhance productivity, investment efficiency, and competitiveness.

Solutions and materials developed through these experiments will be shared with all companies in the region and can be adapted to individual needs. The project also highlights cooperation between SMEs, public services, and research institutions in Finland.



