
AI Insights

China urges global consensus on balancing AI development, security

China’s Premier Li Qiang warned Saturday that artificial intelligence development must be weighed against the security risks, saying global consensus was urgently needed even as the tech race between Beijing and Washington shows no sign of abating.

His remarks came just days after US President Donald Trump unveiled an aggressive low-regulation strategy aimed at cementing US dominance in the fast-moving field, promising to “remove red tape and onerous regulation” that could hinder private sector AI development.

Opening the World AI Conference (WAIC) in Shanghai on Saturday, Li emphasised the need for governance and open-source development, announcing the establishment of a Chinese-led body for international AI cooperation.

“The risks and challenges brought by artificial intelligence have drawn widespread attention… How to find a balance between development and security urgently requires further consensus from the entire society,” the premier said.

He gave no further details about the newly announced organisation, though state media later reported “the preliminary consideration” was that it would be headquartered in Shanghai.

The organisation would “promote global governance featuring extensive consultation, joint contribution and shared benefits”, state news agency Xinhua reported, without elaborating on its set-up or mechanisms.

At a time when AI is being integrated across virtually all industries, its uses have raised major questions, including about the spread of misinformation, its impact on employment and the potential loss of technological control.

In a speech at WAIC on Saturday, Nobel Prize-winning physicist Geoffrey Hinton compared the situation to keeping “a very cute tiger cub as a pet”.

To survive, he said, you need to ensure you can train it not to kill you when it grows up.

– Pledge to share AI advances –

The enormous strides AI technology has made in recent years have seen it move to the forefront of the US-China rivalry.

Premier Li said China would “actively promote” the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones.

“If we engage in technological monopolies, controls and blockage, artificial intelligence will become the preserve of a few countries and a few enterprises,” he said.

Vice Foreign Minister Ma Zhaoxu warned against “unilateralism and protectionism” at a later meeting.

Washington has expanded its efforts in recent years to curb exports of state-of-the-art chips to China, concerned that they can be used to advance Beijing’s military systems and erode US tech dominance.

Li, in his speech, highlighted “insufficient supply of computing power and chips” as a bottleneck to AI progress.

China has made AI a pillar of its plans for technological self-reliance, with the government pledging a raft of measures to boost the sector.

In January, Chinese startup DeepSeek unveiled an AI model that performed as well as top US systems despite using less powerful chips.

– ‘Defining test’ –

In a video message played at the WAIC opening ceremony, UN Secretary-General Antonio Guterres said AI governance would be “a defining test of international cooperation”.

The ceremony saw the French president’s AI envoy, Anne Bouverot, underscore “an urgent need” for global action and for the United Nations to play a “leading role”.

Bouverot called for a framework “that is open, transparent and effective, giving each and everyone an opportunity to have their views taken into account”.

Li’s speech “posed a clear contrast to the Trump administration’s ‘America First’ view on AI” and the US measures announced this week, said WAIC attendee George Chen, a partner at Washington-based policy consultancy The Asia Group.

“The world is now clearly divided into at least three camps: the United States and its allies, China (and perhaps many Belt and Road or Global South countries), and the EU — which prefers regulating AI through legislation, like the EU AI Act,” Chen told AFP.

At an AI summit in Paris in February, 58 countries including China, France and India — as well as the European Union and African Union Commission — called for enhanced coordination on AI governance.

But the United States warned against “excessive regulation”, and alongside the United Kingdom, refused to sign the summit’s appeal for an “open”, “inclusive” and “ethical” AI.



No, AI Is Not Better Than a Good Doctor

Search the internet and you will find countless testimonials of individuals using AI to get diagnoses their doctors missed. And while it is important for individuals to take ownership of their healthcare and use all available resources, it is just as important to understand the process behind an AI diagnosis.

If you ask AI to figure out what ails you based on inputting a series of symptoms, the AI will use mathematical probability to calculate the appropriate sequence of words that would generate the most valuable output given the specific prompt. The AI has no intrinsic or learned understanding of what “body,” “illness,” “pain,” or “disease” mean. Such practically meaningful concepts to humans are, to the bot, just letters encountered in the training set frequently paired with other letters.
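The word-probability mechanism described above can be illustrated with a toy calculation. Everything here is invented for illustration: the three-word "vocabulary" and its scores are made up, but the softmax step is the standard way language models convert raw scores into next-word probabilities.

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up candidate next words after a prompt like
# "The patient reports chest ..." with invented scores.
vocab = ["pain", "tightness", "banana"]
scores = [4.2, 3.1, -2.0]
probs = softmax(scores)

# "pain" gets the highest probability purely because that word
# co-occurred with similar text most often during training;
# the model has no concept of what chest pain actually is.
```

The model then emits whichever word the probabilities favor, with no understanding attached to any of them.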

New research on AI’s lack of medical reasoning

Recently, a team of researchers set out to investigate whether AIs that achieved near-perfect accuracy on medical benchmarks like MedQA actually reasoned through medical problems or simply exploited statistical patterns in their training data. As doctors and patients come to rely more widely on AI tools for diagnosis, it becomes critical to understand how AI behaves when faced with novel clinical scenarios.

The researchers took 100 questions from MedQA, a standard dataset of multiple-choice medical questions collected from professional medical board exams, and replaced the original correct answer choice with "None of the other answers." If the AI were simply pattern-matching against its training data, the change should prove devastating to its accuracy; if it were genuinely reasoning through the questions, the effect should be minimal.
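The perturbation the researchers describe can be sketched as a simple evaluation harness. This is illustrative only: the sample question is invented rather than drawn from MedQA, and `pattern_matcher` is a toy stand-in for a real model that answers by matching memorized answer text.

```python
import random

def perturb(question):
    """Replace the correct answer's text with 'None of the other answers'.

    The correct letter stays the same, but the answer text a model could
    pattern-match against is gone: to score, it must recognise that no
    listed option fits and pick the 'None' choice.
    """
    q = dict(question)
    q["choices"] = dict(q["choices"])
    q["choices"][q["answer"]] = "None of the other answers"
    return q

def accuracy(questions, ask_model):
    correct = sum(1 for q in questions if ask_model(q) == q["answer"])
    return correct / len(questions)

# Toy stand-in for an LLM that only pattern-matches on familiar answer
# text, and guesses randomly once that text is removed.
def pattern_matcher(q):
    for letter, text in q["choices"].items():
        if text == q["memorised"]:
            return letter
    return random.choice(list(q["choices"]))

questions = [
    {
        "choices": {"A": "Aspirin", "B": "Heparin",
                    "C": "Warfarin", "D": "Insulin"},
        "answer": "B",
        "memorised": "Heparin",  # answer text "seen" during training
    },
]

print(accuracy(questions, pattern_matcher))  # prints 1.0 on the original set
perturbed = [perturb(q) for q in questions]
# On the perturbed set the matcher finds nothing familiar and must guess.
```

A genuinely reasoning system would rule out the remaining options and select the "None" choice; a pattern-matcher's accuracy collapses, which is the signature the researchers observed.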

Sure enough, they found that when the AI faced questions deviating from the familiar answer patterns it was trained on, its accuracy fell substantially, from 80% to 42%. This is because today's AIs are still just probability calculators, not artful thinkers.

Artful medical practitioners see, hear, feel, and recognize medical conditions in ways they are often not consciously aware of. While an AI would be thrown off by an unfamiliar description of symptoms, good doctors listen to the specific word choices of patients and try to understand. They appreciate how societal factors can impact health, trusting both their own intuitions and those of the patient. They pay close attention to all the presenting symptoms in an open-minded manner, as opposed to algorithmically placing the patient in a generic diagnostic box.

Healing is more than a single task

And yet, algorithmic supremacists are as confident as ever in their belief that human healthcare providers will be replaced by machines. In 2016, at the Machine Learning and Market for Intelligence Conference in my hometown of Toronto, Geoffrey Hinton took the mic to confidently assert: “If you work as a radiologist, you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down … People should stop training radiologists now. It’s just completely obvious that in five years deep learning is going to do better than radiologists.”

Seven years later, well past the five-year deadline, Kevin Fischer, CEO of Open Souls, took aim at Hinton's erroneous AI prediction, explaining how tech boosters fixate on a model's performance at a single task and then extrapolate broader implications from that task alone. The reality is that reducing any job, especially a wildly complex one requiring a decade of training, to a handful of tasks is absurd.

As Fischer explains, radiologists carry a 3D model of the brain and its physical dynamics in their heads, which they use when interpreting the results of a scan; an AI tasked with the analysis is simply performing 2D pattern recognition. Radiologists also draw on a host of grounded models when making determinations, and, when they think artfully, one of the most important is whether something "feels" off. A large part of their job is communicating their findings to fellow human physicians. Finally, a human radiologist needs to see only a single example of a rare and obscure condition to both remember it and identify it in the future, unlike algorithms, which struggle with statistical outliers.

So, by all means, use whatever tools you can access to help your wellness. But be mindful of the difference between a medical calculator and an artful thinker.







Escape from Tarkov is finally coming to Steam ‘soon,’ developer says


Following news that Escape from Tarkov is escaping its perpetual beta, the pioneering extraction shooter is also about to make its debut on Steam. Nikita Buyanov, head of the Battlestate Games studio that developed Escape from Tarkov, confirmed on X that the game’s Steam page “will be available soon,” only teasing that the full details will come later.

Buyanov’s confirmation comes less than a day after the developer posted a GIF on X of a man spraying steam from an iron. Earlier this month, Buyanov revealed on X that the looter shooter will get its 1.0 release on November 15, 2025, more than eight years after the beta opened up to players in July 2017, and that the studio has plans to port it to consoles. The Steam page for Escape from Tarkov isn’t live yet, and with only vague details to go off of, longtime fans already have burning questions. Most importantly, existing players are eager to know if they will have to buy the game again on Steam and how this change will affect the ongoing cheating problem.

While we don’t have any answers yet, Battlestate Games recently went into damage control mode when it revealed the Unheard Edition of the game that costs $250 and includes a new PvE mode. This move irked longstanding players who previously purchased another premium edition of the game, called the Edge of Darkness, which promised access to all future DLCs. The controversy boiled down to owners of the Edge of Darkness edition claiming they should have access to the new content, but the studio argued that it isn’t classified as DLC. In the end, Buyanov apologized for the debacle and promised the PvE mode would be available for anyone who purchased the Edge of Darkness package.





Soft skills to survival skills: How to prepare for the ‘job apocalypse’ due to AI


The rise of artificial intelligence is already reshaping the global workforce, with experts warning that the ability to build skills such as judgment, empathy, adaptability and digital literacy will be essential to avoid being left behind.

As the technology evolves in waves, from automation to generative AI, agentic systems and eventually artificial general intelligence, millions risk losing not just their income but also their sense of purpose and identity.

Maha Hosain Aziz, professor at New York University and a member of the World Economic Forum’s Global Foresight Network, warned that the world rarely considers the broader social consequences of this disruption.

“We rarely connect the dots to what happens next – when millions lose not just income, but the anchor that work provides,” she wrote on the World Economic Forum’s platform.

“What happens when our education or years of work experience don’t matter as much any more? Many may face a grim choice: scramble to ‘learn AI’ to stay relevant – or drift into a new class, uncertain where they can fit in the AI economy.”

Ms Aziz outlined four waves of disruption, including traditional automation replacing routine jobs and generative AI transforming content creation and knowledge work.

Agentic AI is taking on multi-step tasks in areas such as HR, market research and IT, with the potential to replace midlevel managers.

By 2030, the world could see the rise of artificial general intelligence capable of most cognitive tasks.

“Each wave will displace another segment of the global working population,” Ms Aziz said.

“The challenge isn’t just how to re-employ people, but how to help them adapt to a future where their previous skills or identities may no longer be relevant. In a way, we’ve seen this before.”