AI Insights
China urges global consensus on balancing AI development, security

China’s Premier Li Qiang warned Saturday that artificial intelligence development must be weighed against the security risks, saying global consensus was urgently needed even as the tech race between Beijing and Washington shows no sign of abating.
His remarks came just days after US President Donald Trump unveiled an aggressive low-regulation strategy aimed at cementing US dominance in the fast-moving field, promising to “remove red tape and onerous regulation” that could hinder private sector AI development.
Opening the World AI Conference (WAIC) in Shanghai on Saturday, Li emphasised the need for governance and open-source development, announcing the establishment of a Chinese-led body for international AI cooperation.
“The risks and challenges brought by artificial intelligence have drawn widespread attention… How to find a balance between development and security urgently requires further consensus from the entire society,” the premier said.
He gave no further details about the newly announced organisation, though state media later reported “the preliminary consideration” was that it would be headquartered in Shanghai.
The organisation would “promote global governance featuring extensive consultation, joint contribution and shared benefits”, state news agency Xinhua reported, without elaborating on its set-up or mechanisms.
At a time when AI is being integrated across virtually all industries, its uses have raised major questions, including about the spread of misinformation, its impact on employment and the potential loss of technological control.
In a speech at WAIC on Saturday, Nobel Prize-winning physicist Geoffrey Hinton compared the situation to keeping “a very cute tiger cub as a pet”.
To survive, he said, you need to ensure you can train it not to kill you when it grows up.
– Pledge to share AI advances –
The enormous strides AI technology has made in recent years have seen it move to the forefront of the US-China rivalry.
Premier Li said China would “actively promote” the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones.
“If we engage in technological monopolies, controls and blockage, artificial intelligence will become the preserve of a few countries and a few enterprises,” he said.
Vice Foreign Minister Ma Zhaoxu warned against “unilateralism and protectionism” at a later meeting.
Washington has expanded its efforts in recent years to curb exports of state-of-the-art chips to China, concerned that they can be used to advance Beijing’s military systems and erode US tech dominance.
Li, in his speech, highlighted “insufficient supply of computing power and chips” as a bottleneck to AI progress.
China has made AI a pillar of its plans for technological self-reliance, with the government pledging a raft of measures to boost the sector.
In January, Chinese startup DeepSeek unveiled an AI model that performed as well as top US systems despite using less powerful chips.
– ‘Defining test’ –
In a video message played at the WAIC opening ceremony, UN Secretary-General Antonio Guterres said AI governance would be “a defining test of international cooperation”.
The ceremony saw the French president’s AI envoy, Anne Bouverot, underscore “an urgent need” for global action and for the United Nations to play a “leading role”.
Bouverot called for a framework “that is open, transparent and effective, giving each and everyone an opportunity to have their views taken into account”.
Li’s speech “posed a clear contrast to the Trump administration’s ‘America First’ view on AI” and the US measures announced this week, said WAIC attendee George Chen, a partner at Washington-based policy consultancy The Asia Group.
“The world is now clearly divided into at least three camps: the United States and its allies, China (and perhaps many Belt and Road or Global South countries), and the EU — which prefers regulating AI through legislation, like the EU AI Act,” Chen told AFP.
At an AI summit in Paris in February, 58 countries including China, France and India — as well as the European Union and African Union Commission — called for enhanced coordination on AI governance.
But the United States warned against “excessive regulation”, and alongside the United Kingdom, refused to sign the summit’s appeal for an “open”, “inclusive” and “ethical” AI.
No, AI Is Not Better Than a Good Doctor

Search the internet and you will find countless testimonials of individuals using AI to get diagnoses their doctors missed. And while it is important for individuals to take ownership of their healthcare and use all available resources, it is just as important to understand the process behind an AI diagnosis.
If you ask AI to figure out what ails you based on inputting a series of symptoms, the AI will use mathematical probability to calculate the appropriate sequence of words that would generate the most valuable output given the specific prompt. The AI has no intrinsic or learned understanding of what “body,” “illness,” “pain,” or “disease” mean. Such practically meaningful concepts to humans are, to the bot, just letters encountered in the training set frequently paired with other letters.
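This frequency-matching behaviour can be illustrated with a deliberately tiny sketch. The code below is a toy illustration, not how modern AI systems are actually built: it predicts the next word purely by counting which words followed which in a scrap of "training" text, with no notion of what any word means.

```python
from collections import Counter, defaultdict

# Toy "model": it only knows which words followed which in its
# training text; it has no concept of what the words mean.
training_text = (
    "chest pain may signal heart disease . "
    "chest pain may signal muscle strain . "
    "chest pain may signal anxiety ."
).split()

# Count how often each word follows the previous one (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("pain"))  # "may" — the most frequent pairing wins
```

Real systems use vastly richer statistics over far more text, but the underlying move is the same: pick the continuation the training data makes most probable, whatever it happens to mean.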
New research on AI’s lack of medical reasoning
Recently, a team of researchers set out to investigate whether AIs that achieved near-perfect accuracy on medical benchmarks like MedQA actually reasoned through medical problems or simply exploited statistical patterns in their training data. If doctors and patients more widely rely on AI tools for diagnosis, it becomes critical to understand the capability of AI when faced with novel clinical scenarios.
The researchers took 100 questions from MedQA, a standard dataset of multiple-choice medical questions collected from professional medical board exams, and replaced the original correct answer choice with “None of the other answers.” If the AI was simply pattern-matching to its training data, the change should prove devastating to its accuracy. On the other hand, if there was reasoning behind its answers the negative effect should be minimal.
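The manipulation itself is simple enough to sketch. The code below is a hypothetical illustration of the procedure described above, not the researchers' actual code: the question format and the `model_answer` callable are illustrative assumptions.

```python
def perturb(question):
    """Replace the correct choice with a 'none of the above' option.

    The replacement option becomes the new correct answer, so the
    answer index is unchanged while the familiar wording disappears.
    """
    choices = list(question["choices"])
    choices[question["answer_index"]] = "None of the other answers"
    return {"choices": choices, "answer_index": question["answer_index"]}

def accuracy(questions, model_answer):
    """Fraction of questions where the model picks the correct index."""
    hits = sum(model_answer(q) == q["answer_index"] for q in questions)
    return hits / len(questions)
```

A pure pattern-matcher keeps gravitating toward the wording it saw during training, so its score collapses on the perturbed set; a system that genuinely reasoned about the medicine would barely notice the swap.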
Sure enough, they found that when an AI faced a question deviating from the familiar answer patterns it was trained on, its accuracy declined substantially, from 80% to 42%. This is because today's AIs are still just probability calculators, not artful thinkers.
Artful medical practitioners see, hear, feel, and recognize medical conditions in ways they are often not consciously aware of. While an AI would be thrown off by an unfamiliar description of symptoms, good doctors listen to the specific word choices of patients and try to understand. They appreciate how societal factors can impact health, trusting both their own intuitions and those of the patient. They pay close attention to all the presenting symptoms in an open-minded manner, as opposed to algorithmically placing the patient in a generic diagnostic box.
Healing is more than a single task
And yet, algorithmic supremacists are as confident as ever in their belief that human healthcare providers will be replaced by machines. In 2016, at the Machine Learning and Market for Intelligence Conference in my hometown of Toronto, Geoffrey Hinton took the mic to confidently assert: “If you work as a radiologist, you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down … People should stop training radiologists now. It’s just completely obvious that in five years deep learning is going to do better than radiologists.”
Seven years later, well past the five-year deadline, Kevin Fischer, CEO of Open Souls, attacked Hinton’s erroneous AI prediction, explaining how tech boosters home in on a single behavior against some task and then extrapolate broader implications based on that single task alone. The reality is that reducing any job, especially a wildly complex job that requires a decade of training, to a handful of tasks is absurd.
As Fischer explains, radiologists have a 3D world model of the brain and its physical dynamics in their head, which they use when interpreting the results of a scan. An AI tasked with analysis is simply performing 2D pattern recognition. Furthermore, radiologists have a host of grounded models they use to make determinations, and, when they think artfully, one of the most important is whether something “feels” off. A large part of their job is communicating their findings with fellow human physicians. Further, human radiologists need to see only a single example of a rare and obscure condition to both remember it and identify it in the future, unlike algorithms, which struggle with what to do with statistical outliers.
So, by all means, use whatever tools you can access to help your wellness. But be mindful of the difference between a medical calculator and an artful thinker.
Escape from Tarkov is finally coming to Steam ‘soon,’ developer says

Following news that Escape from Tarkov is escaping its perpetual beta, the pioneering extraction shooter is also about to make its debut on Steam. Nikita Buyanov, head of the Battlestate Games studio that developed Escape from Tarkov, confirmed on X that the game’s Steam page “will be available soon,” only teasing that the full details will come later.
Buyanov’s confirmation comes less than a day after the developer posted a GIF on X of a man spraying steam from an iron. Earlier this month, Buyanov revealed on X that the looter shooter will get its 1.0 release on November 15, 2025, more than eight years after the beta opened up to players in July 2017, and that the studio has plans to port it to consoles. The Steam page for Escape from Tarkov isn’t live yet, and with only vague details to go off of, longtime fans already have burning questions. Most importantly, existing players are eager to know if they will have to buy the game again on Steam and how this change will affect the ongoing cheating problem.
While we don’t have any answers yet, Battlestate Games recently went into damage control mode when it revealed the Unheard Edition of the game that costs $250 and includes a new PvE mode. This move irked longstanding players who previously purchased another premium edition of the game, called the Edge of Darkness, which promised access to all future DLCs. The controversy boiled down to owners of the Edge of Darkness edition claiming they should have access to the new content, but the studio argued that it isn’t classified as DLC. In the end, Buyanov apologized for the debacle and promised the PvE mode would be available for anyone who purchased the Edge of Darkness package.
Soft skills to survival skills: How to prepare for the ‘job apocalypse’ due to AI

The rise of artificial intelligence is already reshaping the global workforce, with experts warning that the ability to build skills such as judgment, empathy, adaptability and digital literacy will be essential to avoid being left behind.
As the technology evolves in waves, from automation to generative AI, agentic systems and eventually artificial general intelligence, millions risk losing their income and also their sense of purpose and identity.
Maha Hosain Aziz, professor at New York University and a member of the World Economic Forum’s Global Foresight Network, warned that the world rarely considers the broader social consequences of this disruption.
“We rarely connect the dots to what happens next – when millions lose not just income, but the anchor that work provides,” she wrote on the World Economic Forum’s platform.
“What happens when our education or years of work experience don’t matter as much any more? Many may face a grim choice: scramble to ‘learn AI’ to stay relevant – or drift into a new class, uncertain where they can fit in the AI economy.”
Ms Aziz outlined four waves of disruption, including traditional automation replacing routine jobs and generative AI transforming content creation and knowledge work.
Agentic AI is taking on multi-step tasks in areas such as HR, market research and IT, with the potential to replace midlevel managers.
By 2030, the world could see the rise of artificial general intelligence capable of most cognitive tasks.
“Each wave will displace another segment of the global working population,” Ms Aziz said.
“The challenge isn’t just how to re-employ people, but how to help them adapt to a future where their previous skills or identities may no longer be relevant. In a way, we’ve seen this before.”
She proposed two ideas: precariat labs, which are cross-sector hubs where governments, companies and civil society can test interventions for those at risk of AI-driven job loss, including retraining and mental health support.
She said there could also be a reimagined universal basic income focused on purpose, designed to restore belonging and meaning through civic projects and skill-sharing networks.
“The AI precariat may not make headlines like billion-dollar chip deals or breakthrough models,” she said.
“But it will shape the political, social, economic and security terrain of the next decade. If we want AI to be remembered as a tool for human flourishing, rather than mass alienation, we must start planning, not just for the jobs AI will create – but for the dreams it might erase.”
The UAE’s Human-Centered Approach
Nevin Lewis, chief executive at Black & Grey HR, told The National that while AI can automate many technical processes, it cannot replicate the interpersonal and cultural skills that are essential in markets such as the UAE.
“The UAE has always been a market where business is personal. Deals are not closed by contracts alone, but by trust built in meetings, majlis and boardrooms. AI can automate reporting, forecasting and approvals, but it cannot replace the human skills that build credibility in the UAE,” he said.
Mr Lewis cited sales managers, client relationship leaders, hospital administrators, school principals and project directors as examples of roles that rely heavily on empathy and cultural awareness.
“These are not just ‘soft skills,’ they are survival skills in a multicultural economy,” he said.
Mr Lewis said that there should be a focus on developing employees’ AI fluency and data literacy.
“In retail, that might look like an e-commerce manager who can use AI to predict customer demand while still shaping a human-centred shopping experience. In banking, it could be a compliance officer who uses AI fraud detection alerts but still applies judgment before escalating,” he said.
“The future is not for those who only know how to code, but for those who can apply AI in business-critical ways.
“Technology can deliver the data, but it takes a leader to align teams, calm resistance, and keep people motivated in times of change.”
Invisible Job Losses
Bronwyn Williams, an economist and future trends analyst, told The National that the “job apocalypse” is already well under way, especially among entry level jobs.
“This creates a situation where economists and business leaders do not count the ‘invisible’ losses – the jobs that failed to materialise and absorb talent,” said Ms Williams, author of Survive the AI Apocalypse: A Guide for Solutionists, released this year.
“They are also undercounting the impact of underemployment, where people are accepting jobs below their skill sets and experience level to survive – or who are taking on multiple jobs to make ‘one’ living.”
She said many traditional salaried jobs are likely to disappear, but a new “value economy” is emerging where people are rewarded for the unique contributions they can offer.
Those who keep improving their skills and provide services that others are willing to pay for will continue to find work.
She said middle-class workers in developed countries are most at risk, while professionals in less wealthy regions could gain new opportunities.
She added that in today’s global economy, education or social status alone is not enough to protect a job, because technology allows similar work to be done elsewhere at a lower cost.
Preparing the next generation
Karuna Agarwal from Future Tense HR, based in the UAE, echoed Ms Williams’s thoughts and said that several categories of jobs are already under threat.
She said the responsibility to prepare for the AI age starts with education.
“Our educational systems have a focus on building key skills, which are a combination of soft skills and digital skills, in students who are the future professionals,” she said.
“The skills that cannot be replaced by AI are things like networking, critical thinking, agility, analytical thinking to name a few. It is very important for young professionals to update themselves in digital skills like AI and data literacy to be relevant and prepare themselves for the AI Age.”