Tools & Platforms
Never mind the botlickers, ‘AI’ is just normal technology – The Mail & Guardian

Demystify: Artificial intelligence has its uses, but it is the harms that should concern us. Photo: Flickr
Most of us know at least one slopper. They’re the people who use ChatGPT to reply to Tinder matches, choose items from the restaurant menu and write creepily generic replies to office emails. Then there’s the undergraduate slopfest that’s wreaking havoc at universities, to say nothing of the barrage of suspiciously em-dash-laden papers polluting the inboxes of academic journal editors.
Not content to merely participate in the ongoing game of slop roulette, the botlicker is a more proactive creature who is usually to be found confidently holding forth like some subpar regional TED Talk speaker about how “this changes everything”. Confidence notwithstanding, in most cases Synergy Greg from marketing and his fellow botlickers are dangerously ignorant about their subject matter — contemporary machine learning technologies — and are thus prone to cycling rapidly between awe and terror.
Indeed, for the botlicker, who possibly also has strong views on crypto, “AI” is simultaneously the worst and the best thing we’ve ever invented. It’s destroying the labour market and threatening us all with techno-fascism, but it’s also delivering us to a fully automated leisure society free of what David Graeber once rightly called “bullshit jobs”.
You’ll notice that I’m using scare quotes around the term “AI”. That’s because, as computational linguist Emily Bender and former Google research scientist Alex Hanna argue in their excellent recent book, The AI Con, there is nothing inherently intelligent about these technologies, which they describe with the more accurate term “synthetic text extrusion machines”. The acronym STEM is already taken, alas, but there’s another equally apt acronym we can use: Salami, or systematic approaches to learning algorithms and machine inferences.
The image of machine learning as a ground-up pile of random bits and pieces that is later squashed into a sausage-shaped receptacle to be consumed by people who haven’t read the health warnings is probably vastly more apposite than the notion that doing some clever — and highly computationally and ecologically expensive — maths on some big datasets somehow constitutes “intelligence”.
That said, perhaps we shouldn’t be so hard on those who, when confronted with the misleading vividness of ChatGPT and Co’s language-shaped outputs, resort to imputing all sorts of cognitive properties to what Bender and Hanna, also the hosts of the essential Mystery AI Hype Theater 3000 podcast, cheekily described as “mathy maths”. After all, as sci-fi author Arthur C Clarke reminded us, “any sufficiently advanced technology is indistinguishable from magic”, and in our disenchanted age some of us are desperate for a little more magic in our lives.
Slopholm Syndrome notwithstanding, if we are to have useful conversations about machine learning then it’s crucial that instead of succumbing to the cheap parlour tricks of Silicon Valley marketing strategies — which are, tellingly, constructed around the exact-same mix of infinite promise and terrifying existential risk their pro-bono shills the botlickers always invoke — we pay attention to the men behind the curtain and expose “AI” for what it is: normal technology.
This, of course, means steering away both from hyperbolic claims about the imminent emergence of “AGI” (artificial general intelligence) that will solve all of humanity’s most pressing problems as well as from the crude Terminator-style dystopian sci-fi scenarios that populate the fever dreams of the irrational rationalists (beware, traveller, for this way lie Roko’s Basilisk and the Zizians).
More fundamentally, it also means taking a step back to examine some of the underlying social drivers that have caused such widespread apophenia (the tendency to perceive meaningful patterns where none exist; it's not just the "AI" that hallucinates, it's causing us to see things too).
Most obviously in this regard, when confronted with the seemingly intractable and compounded social and ecological crises of the current moment, deferring to techno-solutionism is a reasonable strategy to ward off the inevitable existential dread of life in the Anthropocene. For many people, things are as the philosopher of technology Heidegger once said at the end of his late-life interview: “Only a God can save us.” Albeit in this case a bizarre techno-theological object built from maths, server farms full of expensive graphics cards and other people’s dubiously obtained data.
Beyond this, we should acknowledge that the increasing social, political, technological and ethical complexity of the world can leave us all scrambling for ways to stabilise our meaning-making practices. As the rug of certainty is pulled from under our feet at an ever-accelerating pace, it’s no wonder that we tend to experience an increased need for some sense of certainty, whether grounded in fascist demagoguery, phobic responses to the leakiness and fluidity of socially constructed categories or the synthetic dulcet tones of chatbots that have, here in the Eremocene (the Age of Loneliness), become our friends, partners, therapists and infallible tomes of wisdom.
From the Pythia who served as the Oracles of Delphi, allaying the fears of ancient Greeks during times of unrest, to the Python code that allows us to interface with our new oracles, the desire for existential certainty is far from new. In a time where a sense of agency and sufficient grasp on the world has been wrested from most of us, however — where our feeds are a never-ending barrage of wars, genocides and ecological collapses we feel powerless to stop — the desire for some source of stable knowledge, some all-knowing benevolent force that grants us a sense of vicarious power if we can learn to master it (just prompt better) — has possibly never been stronger.
ChatGPT, Grok, Claude, Gemini and Co, however, are not oracles. They are mathematically sophisticated games played with giant statistical databases. Recall in this regard that very few people assume any kind of intelligence, reasoning or sensory experience when using Midjourney and other early image generators that are built using the same contemporary machine learning paradigm as LLMs. We know they are just clever code.
But if we don’t regard Midjourney as some kind of sentient algorithmic overlord simply because it produces outputs that cluster pixels together in interesting ways, why would we regard LLMs as more than maths and datasets just because they produce outputs that cluster syntax together in interesting ways? Just as a picture of a bird cannot fly no matter how realistically it is drawn, so too is a picture of the language-using faculties of human beings not language and thus not reflective of anything deeper than next token prediction, hence Bender and Hanna’s delightful term “language-shaped”.
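The "maths and datasets" point can be made concrete with a toy example. A bigram "language model" is nothing more than a frequency table of which word follows which, yet it already "predicts the next token". (This is an illustrative sketch, many orders of magnitude simpler than an LLM, not a description of how any particular product is built.)

```python
from collections import defaultdict, Counter

# "Train" a toy bigram model: count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_token(word):
    """Pick the most frequent continuation: pure counting, no understanding."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(next_token("the"))  # -> "cat" (seen twice, vs "mat" and "fish" once each)
```

Scale the table up by trillions of tokens, swap the counts for learned weights, and the output becomes fluent enough to feel oracular, but the underlying operation remains statistical continuation, not comprehension.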
In light of the above, I’d like to suggest that we approach these novel technologies from at least two angles. On the one hand, it’s urgent that we demystify them. The more we succumb to a contemporary narcissism-fuelled variation of the Barnum effect, the less we’ll be able to reach informed decisions about regulating “AI” and the more we’ll be stochastically parroting the good-cop, bad-cop variants of Silicon Valley boosterism to further line the pockets of the billionaire tech oligarchs riding the current speculative bubble while they bankroll neofascism.
On the other hand, we should start paying less attention to the TESCREALists (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism — you know the type) and their “AI” shock doctrine and focus more on the current real-world harms being caused by the zealous adoption of commercial “AI” products by every middle manager, university bureaucrat or confused citizen who doesn’t want to be left behind (which, if left unchecked, tends to lead to what critical “AI” theorist Dan McQuillan terms algorithmic Thatcherism).
These two tasks need to be approached together. It’s no use trying to mitigate actual ethical harms — the violence caused by algorithmic bias, for instance — if we do not have at least a rudimentary grasp of what synthetic text extrusion machines do, and vice versa. In approaching these tasks, we should also challenge the rhetoric of inevitability. No technology, whether laser discs, blockchain, VR or LLMs, necessarily ends up being adopted by society in the form intended by its most enthusiastic proselytes, and the history of technology is also a history of failures and resistance.
Finally, and perhaps most importantly, we should take great care not to fall into the trap of believing that critical thought, whether at universities, in the workplace or in the halls of power, is something that can or should be algorithmically optimised. Despite the increasing neoliberalisation of these sectors, which itself encourages the logic of automation and quantifiable outputs, critical thought — real, difficult thought grounded in uncertainty, finitude and everything else that makes us human — has perhaps never been so important.
Tools & Platforms
Anthropic’s Claude restrictions put overseas AI tools backed by China in limbo

An abrupt decision by American artificial intelligence firm Anthropic to restrict service to Chinese-owned entities anywhere in the world has cast uncertainty over some Claude-dependent overseas tools backed by China’s tech giants.
After Anthropic’s notice on Friday that it would upgrade access restrictions to entities “more than 50 per cent owned … by companies headquartered in unsupported regions” such as China, regardless of where they are, Chinese users have fretted over whether they could still access the San Francisco-based firm’s industry-leading AI models.
While it remains unknown how many entities could be affected and how the restrictions would be implemented, anxiety has started to spread among some users.
Singapore-based Trae, an AI-powered code editor launched by Chinese tech giant ByteDance for overseas users, is a known user of OpenAI’s GPT and Anthropic’s Claude models. A number of Trae users have raised the issue of refunds with Trae staff on developer platforms, concerned that their access to Claude would no longer be available.
Dario Amodei, CEO and cofounder of Anthropic, speaks at the International Network of AI Safety Institutes in San Francisco, November 20, 2024. Photo: AP
A Trae manager responded by saying that Claude was still available, urging users not to consider refunds “for the time being”. The company had just announced a premium “Max Mode” on September 2, which boasted access to significantly more powerful coding abilities “fully supported” by Anthropic’s Claude models.
Other Chinese tech giants offer Claude on their coding agents marketed to international users, including Alibaba Group Holding’s Qoder and Tencent Holdings’ CodeBuddy, which is still being beta tested. Alibaba owns the South China Morning Post.
ByteDance and Trae did not respond to requests for comment.
Amid the confusion, some Chinese AI companies have taken the opportunity to woo disgruntled users. Start-up Z.ai, formerly known as Zhipu AI, said in a statement on Friday that it was offering special deals to entice Claude application programming interface users to move over to its models.
Anthropic’s decision to restrict access to China-owned entities is the latest evidence of an increasingly divided AI landscape.
In China, AI applications and tools for the domestic market are almost exclusively based on local models, as the government has not approved any foreign large language model for Chinese users.
Anthropic faced pressure to take action as a number of Chinese companies have established subsidiaries in Singapore to access US technology, according to a report by The Financial Times on Friday.
Anthropic’s flagship Claude AI models are best known for their strong coding capabilities. The company’s CEO Dario Amodei has repeatedly called for stronger controls on exports of advanced US semiconductor technology to China.
Anthropic completed a US$13 billion funding round in the past week that tripled its valuation to US$183 billion. On Wednesday, the company said its software development tool Claude Code, launched in May, was generating more than US$500 million in run-rate revenue, with usage increasing more than tenfold in three months.
The firm’s latest Claude Opus 4.1 coding model achieved an industry-leading score of 74.5 per cent on SWE-bench Verified – a human-validated subset of the large language model benchmark, SWE-bench, that is supposed to more reliably evaluate AI models’ capabilities.
This article originally appeared in the South China Morning Post (SCMP), the most authoritative voice reporting on China and Asia for more than a century. For more SCMP stories, please explore the SCMP app or visit the SCMP’s Facebook and Twitter pages. Copyright © 2025 South China Morning Post Publishers Ltd. All rights reserved.
Tools & Platforms
‘Please join the Tesla silicon team if you want to…’: Elon Musk offers job as he announces ‘epic’ AI chip

Elon Musk has announced a major step forward for Tesla‘s chip development, confirming a “great design review” for the company’s AI5 chip. The CEO made the announcement on X, signalling Tesla’s intensified push into custom semiconductors amid fierce global competition, and also offered jobs to engineers willing to join Tesla’s silicon team.

According to Musk, the AI5 chip is set to be “epic”, while the upcoming AI6 could be the best AI chip by far.

“Just had a great design review today with the Tesla AI5 chip design team! This is going to be an epic chip. And AI6 to follow has a shot at being the best by AI chip by far,” Musk said in a post on X.

Musk revealed that Tesla’s silicon strategy has been streamlined: the company is moving from developing two separate chip architectures to focusing all of its talent on just one. “Switching from doing 2 chip architectures to 1 means all our silicon talent is focused on making 1 incredible chip. No-brainer in retrospect,” he wrote.
Job at Tesla chipmaking team
In a call for new talent, Musk invited engineers to join the Tesla silicon team, emphasising the critical nature of their work. He noted that they would be working on chips that “save lives” where “milliseconds matter”.

Earlier this year, Tesla signed a major chip supply agreement with Samsung Electronics, reportedly valued at $16.5 billion. The deal is set to run through the end of 2033.

Musk confirmed the partnership, stating that Samsung has agreed to allow “full customisation of Tesla-designed chips”. He also revealed that Samsung’s newest fabrication plant in Texas will be dedicated to producing Tesla’s next-generation AI6 chipset.

This contract is a significant win for Samsung, which has reportedly been facing financial struggles and stiff competition in the chip manufacturing market.
Tools & Platforms
“Our technology enables the creation of the digital leaders of the future”

“Our cloud enables us to create the leaders of the future,” said Kevin Cochrane, Chief Marketing Officer at Vultr, at the Calcalist AI Conference in collaboration with Vultr.
Vultr provides companies with cloud infrastructure that gives them access to the computing power needed for artificial intelligence, including Nvidia graphics processors (GPUs) – the most sought-after processors in the world for training and running AI models. These processors are expensive and in short supply, making them difficult for startups, particularly early-stage companies, to acquire. Vultr’s platform allows companies to use these processors without purchasing them outright.
“We have a commitment to the entire ecosystem,” said Cochrane. “We launched our platform for developers so they can work locally but reach the whole world. We enable the creation of digital leaders, the building of a new future, and an AI infrastructure that is unparalleled, giving companies a significant advantage. Enterprises are adopting AI at a remarkable pace. All Fortune 500 companies are emphasizing AI implementation. Our research shows a huge demand for AI applications at scale. Any entrepreneur can launch new initiatives, and we provide cloud infrastructure with full support for an open ecosystem without restrictions.”
Cochrane added, “New AI models will be central to the future world, and we are here to help build it. Our cloud can manage all needs locally in Tel Aviv while distributing globally. It must be simple, accessible to every developer, and affordable for startups so that resources can go to innovation. We believe in flexible freedom of choice for selecting your ecosystem.”
“Today, the AI processor market is dominated by Nvidia,” he said. “The world must be open to every developer. We offer a pricing structure that won’t break the bank, allowing money to go into building new solutions. Our prices are significantly lower than any other hyperscale cloud. As a global NVIDIA partner, we provide flexibility in choosing the GPU that best suits your performance needs.”
“A free and open ecosystem is essential,” concluded Cochrane. “We are here to make that possible. Through us, developers can experiment and find what works best for them. The journey is just beginning.”