AI Research

Report shows China outpacing the US and EU in AI research


Governments now face the reality that falling behind in AI capability could have serious geopolitical consequences, warns a new research report.

AI is increasingly viewed as a strategic asset rather than a technological development, and new research suggests China is now leading the global AI race.

A report titled ‘DeepSeek and the New Geopolitics of AI: China’s ascent to research pre-eminence in AI’, authored by Daniel Hook, CEO of Digital Science, highlights how China’s AI research output has grown to surpass that of the US, the EU and the UK combined.

According to data from Dimensions, a large global research database, China now accounts for more than 40% of worldwide citation attention in AI-related studies. Beyond academic output, the report also points to China’s dominance in AI-related patents.

In some indicators, China is outpacing the US tenfold in patent filings and company-affiliated research, signalling its capacity to convert academic work into tangible innovation.

Hook’s analysis covers AI research trends from 2000 to 2024, showing global AI publication volumes rising from just under 10,000 papers in 2000 to 60,000 in 2024.

However, China’s influence has steadily expanded since 2018, while the EU and the US have seen relative declines. The UK has largely maintained its position.

Clarivate, another analytics firm, reported similar findings, noting nearly 900,000 AI research papers produced in China in 2024, triple the figure from 2015.

Hook notes that governments increasingly view AI alongside energy or military power as a matter of national security. Instead of treating AI as a neutral technology, there is growing awareness that a lack of AI capability could have serious economic, political and social consequences.

The report suggests that understanding AI’s geopolitical implications has become essential for national policy.


AI Research

Are AI existential risks real—and what should we do about them?


In March 2023, the Future of Life Institute issued an open letter asking artificial intelligence (AI) labs to “pause giant AI experiments.” The animating concern was: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Two months later, hundreds of prominent people signed onto a one-sentence statement on AI risk asserting that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

This concern about existential risk (“x-risk”) from highly capable AI systems is not new. In 2014, famed physicist Stephen Hawking, alongside leading AI researchers Max Tegmark and Stuart Russell, warned about superintelligent AI systems “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” 

Policymakers are inclined to dismiss these concerns as overblown and speculative. Despite the attention paid to AI safety at international AI summits in 2023 and 2024, policymakers moved away from existential risks at this year’s AI Action Summit in Paris. For the time being, and in the face of increasingly limited resources, this is all to the good. Policymakers and AI researchers should devote the bulk of their time and energy to addressing more urgent AI risks.

But it is crucial for policymakers to understand the nature of the existential threat and to recognize that, as we move toward generally intelligent AI systems that match or surpass human intelligence, measures to protect human safety will become necessary. While not the pressing problem alarmists think it is, the challenge of existential risk from highly capable AI systems must eventually be faced and mitigated if AI labs want to develop generally intelligent systems and, eventually, superintelligent ones.


How close are we to developing AI models with general intelligence? 

AI firms are not very close to developing an AI system with capabilities that could threaten us. This assertion runs against a consensus in the AI industry that we are just years away from developing powerful, transformative systems capable of a wide variety of cognitive tasks. In a recent article, New Yorker staff writer Joshua Rothman sums up this industry consensus that scaling will produce artificial general intelligence (AGI) “by 2030, or sooner.” 

The standard argument prevalent in industry circles was laid out clearly in a June 2024 essay by AI researcher Leopold Aschenbrenner. He argues that AI capabilities increase with scale: the size of the training data, the number of parameters in the model, and the amount of compute used to train it. He also draws attention to increasing algorithmic efficiency. Finally, he notes that increased capabilities can be “unhobbled” through various techniques such as chain-of-thought reasoning, reinforcement learning from human feedback, and inserting AI models into larger, useful systems.
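To make the scaling argument concrete, the relationship is usually expressed as a power law in model size and training data. The snippet below is a purely illustrative sketch of that kind of curve; the constants and exponents are invented for illustration, not fitted values from any published study or real model.

```python
# Illustrative power-law scaling curve: loss falls as model size and data grow.
# All constants and exponents here are made up for illustration only.

def predicted_loss(n_params: float, n_tokens: float,
                   a: float = 400.0, alpha: float = 0.34,
                   b: float = 400.0, beta: float = 0.28,
                   irreducible: float = 1.7) -> float:
    """Toy loss estimate: loss falls as a power law in model size and data."""
    return irreducible + a / (n_params ** alpha) + b / (n_tokens ** beta)

for n_params in (1e9, 1e10, 1e11, 1e12):      # model sizes in parameters
    n_tokens = 20 * n_params                  # training data scaled alongside the model
    print(f"{n_params:.0e} params -> predicted loss {predicted_loss(n_params, n_tokens):.3f}")
```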

Part of the reason for this confidence is that AI improvements seemed to exhibit exponential growth over the last few years. This past growth suggests that transformational capabilities could emerge unexpectedly and quite suddenly, in line with some well-known illustrations of the surprising effects of exponential growth. In “The Age of Spiritual Machines,” futurist Ray Kurzweil tells the story of doubling the number of grains of rice on successive chessboard squares, starting with one grain. After 63 doublings, the final square alone holds more than nine quintillion grains, and the board as a whole more than 18 quintillion. The hypothetical example of filling Lake Michigan by doubling (every 18 months) the number of ounces of water added to the lakebed makes the same point. After 60 years there is almost nothing, but by 80 years there is 40 feet of water. In five more years, the lake is filled.
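The chessboard arithmetic is easy to check directly (the rice example only; the lake figures depend on volume assumptions the essay does not spell out):

```python
# Checking the chessboard arithmetic from Kurzweil's example: one grain on the
# first square, doubling on each of the remaining 63 squares.
grains_on_last_square = 2 ** 63      # 63 doublings from a single grain
total_grains = 2 ** 64 - 1           # sum over all 64 squares

print(f"last square: {grains_on_last_square:,}")   # 9,223,372,036,854,775,808 (~9.2 quintillion)
print(f"whole board: {total_grains:,}")            # 18,446,744,073,709,551,615 (~18.4 quintillion)
```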

These examples suggest to many that exponential quantitative growth in AI achievements can create imperceptible change that suddenly blossoms into transformative qualitative improvement in AI capabilities.  

But these analogies are misleading. Exponential growth in a finite system cannot go on forever, and there is no guarantee that it will continue in AI development even into the near future. One of the key developments of 2024 was the apparent recognition by industry that training-time scaling has hit a wall and that further increases in data, parameters, and compute produce diminishing returns in capability. The industry now hopes that exponential growth in capabilities will come from increases in inference-time compute. So far, though, those improvements have been smaller than earlier gains and limited to science, math, logic, and coding, areas where reinforcement learning can drive progress because the answers are clear and knowable in advance.

Today’s large language models (LLMs) show no signs of the exponential improvements characteristic of 2022 and 2023. OpenAI’s GPT-5 project ran into performance troubles and had to be downgraded to GPT-4.5, which represented only a “modest” improvement when it was released earlier this year. It made up answers about 37% of the time, an improvement on the company’s faster, less expensive GPT-4o model released last year, which hallucinated nearly 60% of the time. But OpenAI’s latest reasoning systems hallucinate at a higher rate than the company’s previous systems.

Many in the AI research community think AGI will not emerge from the currently dominant machine learning approach that relies on predicting the next word in a sentence. In a report issued in March 2025, the Association for the Advancement of Artificial Intelligence (AAAI), a professional association of AI researchers established in 1979, reported that 76% of the 475 AI researchers surveyed thought that “scaling up current AI approaches” would be “unlikely” or “very unlikely” to produce general intelligence.  

These doubts about whether current machine learning paradigms are sufficient to reach general intelligence rest on widely understood limitations in current AI models that the report outlines. These limitations include difficulties in long-term planning and reasoning, generalization beyond training data, continual learning, memory and recall, causal and counterfactual reasoning, and embodiment and real-world interaction.  

These researchers think that the current machine learning paradigm has to be supplemented with other approaches. Some AI researchers such as cognitive scientist Gary Marcus think a return to symbolic reasoning systems will be needed, a view that AAAI also suggests.  

Others think the roadblock is the focus on language. In a 2023 paper, computer scientist Jacob Browning and Meta’s Chief AI Scientist Yann LeCun reject the linguistic approach to general intelligence. They argue, “A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.” They recommend approaching general intelligence through machine interaction directly with the environment—“to focus on the world being talked about, not the words themselves.”  

Philosopher Shannon Vallor also rejects the linguistic approach, arguing that general intelligence presupposes sentience and that the internal structures of LLMs contain no mechanisms capable of supporting experience, as opposed to elaborate calculations that mimic human linguistic behavior. Conscious entities at the human level, she points out, desire, suffer, love, grieve, hope, care, and doubt. But there is nothing in LLMs designed to register these experiences, or others like them, such as pain or pleasure, or “what it is like” to taste something or to remember a deceased loved one. They lack even the simplest physical sensations; they have, for instance, no pain receptors to generate the feeling of pain. Being able to talk fluently about pain is not the same as having the capacity to feel pain. The fact that humans can occasionally experience pain without their pain receptors being triggered, as with phantom limbs, in no way supports the idea that a system with no pain receptors at all could nevertheless feel real, excruciating pain. All LLMs can do is talk about experiences they are quite plainly incapable of feeling for themselves.

In a forthcoming book chapter, DeepMind researcher David Silver and Turing Award winner Richard S. Sutton endorse this focus on real-world experience as the way forward. They argue that AI researchers will make significant progress toward developing a generally intelligent agent only with “data that is generated by the agent interacting with its environment.” The generation of these real-world “experiential” datasets that can be used for AI training is just beginning. 

A recent paper from Apple researchers suggests that today’s “reasoning” models do not really reason, and that both reasoning and traditional generative AI models collapse completely when confronted with complicated versions of puzzles like the Tower of Hanoi.

LeCun probably has the best summary of the prospects for the development of general intelligence. In 2024, he remarked that it “is not going to be an event… It is going to take years, maybe decades… The history of AI is this obsession of people being overly optimistic and then realising that what they were trying to do was more difficult than they thought.”


From general intelligence to superintelligence

Philosopher Nick Bostrom defines superintelligence as a computer system “that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” Once AI developers have improved the capabilities of AI models so that it makes sense to call them generally intelligent, how do developers make these systems more capable than humans? 

The key step is to instruct generally intelligent models to improve themselves. Once so instructed, AI models would use their superior learning capabilities to improve themselves much faster than humans can, and would soon far surpass human capacities through a process of recursive self-improvement.
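A purely illustrative toy model, and not a claim about how real systems behave, shows why this feedback loop is expected to accelerate: if each improvement cycle yields gains proportional to the system’s current capability, growth compounds faster and faster.

```python
# Toy model of recursive self-improvement: each cycle's gain is proportional to
# the system's current capability, so progress compounds faster and faster.
# The starting point and rate are invented purely for illustration.

capability = 1.0          # 1.0 = roughly human-level research ability
improvement_rate = 0.05   # fraction of capability converted into improvement each cycle

for cycle in range(1, 21):
    capability *= 1 + improvement_rate * capability   # more capable systems improve faster
    print(f"cycle {cycle:2d}: capability {capability:5.2f}x human level")
```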

AI 2027, a recent forecast that has received much attention in the AI community and beyond, relies crucially on this idea of recursive self-improvement. Its key premise is that by the end of 2025, AI agents have become “good at many things but great at helping with AI research.” Once involved in AI research, AI systems recursively improve themselves at an ever-increasing pace and are soon far more capable than humans are.  

Computer scientist I.J. Good noticed this possibility back in 1965, saying of an “ultraintelligent machine” that it “could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” In 1993, computer scientist and science fiction writer Vernor Vinge described this possibility as a coming “technological singularity” and predicted that “Within thirty years, we will have the technological means to create superhuman intelligence.”


What’s the problem with a superintelligent AI model? 

Generally intelligent AI models, then, might quickly become superintelligent. Why would this be a problem rather than a welcome development?  

AI models, even superintelligent ones, do not do anything unless they are told to by humans. They are tools, not autonomous beings with their own goals and purposes. Developers must build purposes and goals into them to make them function at all, and this can make it seem to users as if they have generated these purposes all by themselves. But this is an illusion. They will do what human developers and deployers tell them to do.  

So, it would seem that creating superintelligent tools that could do our bidding is all upside and without risk. When AI systems become far more capable than humans are, they will be even better at performing tasks that allow humans to flourish. 

But this benign perspective ignores a major unsolved problem in AI research—the alignment problem. Developers have to be very careful what tasks they give to a generally intelligent or superintelligent system, even if it lacks genuine free will and autonomy. If developers specify the tasks in the wrong way, things could go seriously wrong. 

Developers of narrow AI systems are already struggling with the problems of task misspecification and unwanted subgoals. When they ask a narrow system to do something, they sometimes specify the task in a way that lets the system do what it has been told to do, but not what the developers actually want it to do. The example of using reinforcement learning to teach an agent to compete in a computer-based race makes the point. If the developers train the agent to accumulate as many game points as possible, they might think they have programmed the system to win the race, which is the apparent objective of the game. Instead, the agent learned to accumulate points without winning the race, going in circles rather than rushing to the end as fast as possible.
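A minimal sketch of this kind of reward misspecification, with invented numbers rather than details from the actual experiment, shows how circling for points can dominate the reward the developers actually specified:

```python
# Toy illustration of reward misspecification: the developers intend "win the
# race," but the reward only counts points, so circling forever scores higher.
# All numbers are invented for illustration.

TIME_BUDGET = 100        # simulation steps available in an episode
POINTS_PER_LAP = 10      # points collected on each loop through a point-rich area
STEPS_PER_LAP = 5
FINISH_BONUS = 50        # one-time points for crossing the finish line

def reward_for_circling() -> int:
    laps = TIME_BUDGET // STEPS_PER_LAP
    return laps * POINTS_PER_LAP          # 20 laps x 10 points = 200

def reward_for_finishing() -> int:
    return FINISH_BONUS                   # the episode ends once the race is won

print("circling: ", reward_for_circling())   # 200 -- maximizes the specified reward
print("finishing:", reward_for_finishing())  # 50  -- what the developers actually wanted
```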

Another example illustrates that AI models can use strategic deception to achieve a goal in ways that researchers did not anticipate. Researchers instructed GPT-4 to log onto a system protected by a CAPTCHA test by hiring a human to do it, without giving it any guidance on how to do this. The AI model accomplished the task by pretending to be a human with vision impairment and tricking a TaskRabbit worker into signing on for it. The researchers did not want the model to lie, but it learned to do this in order to complete the task it was assigned.  

Anthropic’s recent system card for its Claude Sonnet 4 and Claude Opus 4 models reveals further misalignment issues: in testing, the model sometimes threatened to reveal a researcher’s extramarital affair if he shut the system down before it had completed its assigned tasks.

Because these are narrow systems, dangerous outcomes are limited to particular domains if developers fail to resolve alignment problems. Even when the consequences are dire, they are limited in scope.  

The situation is vastly different for generally intelligent and superintelligent systems. This is the point of the well-known paper clip problem described in philosopher Nick Bostrom’s 2014 book, “Superintelligence.” Suppose the goal given to a superintelligent AI model is to produce paper clips. What could go wrong? The result, as described by professor Joshua Gans, is that the model will appropriate resources from all other activities and soon the world will be inundated with paper clips. But it gets worse. People would want to stop this AI, but it is single-minded and would realize that this would subvert its goal. Consequently, the AI would become focused on its own survival. It starts off competing with humans for resources, but now it will want to fight humans because they are a threat. This AI is much smarter than humans, so it is likely to win that battle. 

Yoshua Bengio echoes this crucial concern about dangerous subgoals. Once developers set goals and rewards, a generally intelligent system would “figure out how to achieve these given goals and rewards, which amounts to forming its own subgoals.” The “ability to understand and control its environment” is one such dangerous instrumental goal, while the subgoal of survival creates “the most dangerous scenario.”

Until some progress is made in addressing misalignment problems, developing generally intelligent or superintelligent systems seems to be extremely risky. The good news is that the potential for developing general intelligence and superintelligence in AI models seems remote. While the possibility of recursive self-improvement leading to superintelligence reflects the hope of many frontier AI companies, there is not a shred of evidence that today’s glitchy AI agents are close to conducting AI research even at the level of a normal human technician. This means there is still plenty of time to address the problem of aligning superintelligence with values that make it safe for humans. 

It is not today’s most urgent AI research priority. As AI researcher Andrew Ng is reputed to have said back in 2015, worrying about existential risk from AI might be like worrying about overpopulation on Mars.

Nevertheless, the general problem of AI model misalignment is real and the object of important research that can and should continue. This more mundane work of seeking to mitigate today’s risks of model misalignment might provide valuable clues to dealing with the more distant existential risks that could arise someday in the future as researchers continue down the path of developing highly capable AI systems with the potential to surpass current human limitations.   


AI Research

The forgotten 80-year-old machine that shaped the internet – and could help us survive AI


Many years ago, long before the internet or artificial intelligence, an American engineer called Vannevar Bush was trying to solve a problem. He could see how difficult it had become for professionals to research anything, and saw the potential for a better way.

This was in the 1940s, when anyone looking for articles, books or other scientific records had to go to a library and search through an index. This meant drawers upon drawers filled with index cards, typically sorted by author, title or subject.

When you had found what you were looking for, creating copies or excerpts was a tedious, manual task. You would have to be very organised in keeping your own records. And woe betide anyone who was working across more than one discipline. Since every book could physically only be in one place, they all had to be filed solely under a primary subject. So an article on cave art couldn’t be in both art and archaeology, and researchers would often waste extra time trying to find the right location.


This had always been a challenge, but an explosion in research publications in that era had made it far worse than before. As Bush wrote in an influential essay, As We May Think, in The Atlantic in July 1945:

There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialisation extends. The investigator is staggered by the findings and conclusions of thousands of other workers – conclusions which he cannot find time to grasp, much less to remember, as they appear.

Bush was dean of the school of engineering at MIT (the Massachusetts Institute of Technology) and president of the Carnegie Institution. During the second world war, he had been the director of the Office of Scientific Research and Development, coordinating the activities of some 6,000 scientists working relentlessly to give their country a technological advantage. He could see that science was being drastically slowed down by the research process, and proposed a solution that he called the “memex”.

The memex was to be a personal device built into a desk that required little physical space. It would rely heavily on microfilm for data storage, a new technology at the time. The memex would use this to store large numbers of documents in a greatly compressed format that could be projected onto translucent screens.

Most importantly, Bush’s memex was to include a form of associative indexing for tying two items together. The user would be able to use a keyboard to click on a code number alongside a document to jump to an associated document or view them simultaneously – without needing to sift through an index.

Bush acknowledged in his essay that this kind of keyboard click-through wasn’t yet technologically feasible. Yet he believed it would be soon, pointing to existing systems for handling data such as punched cards as potential forerunners.

[Image: a woman operating a punched card machine. Caption: Punched cards were an early way of storing digital information. Credit: Wikimedia, CC BY-SA]

He envisaged that a user would create the connections between items as they developed their personal research library, creating chains of microfilm frames in which the same document or extract could be part of multiple trails at the same time.

New additions could be inserted either by photographing them on to microfilm or by purchasing a microfilm of an existing document. Indeed, a user would be able to augment their memex with vast reference texts. “New forms of encyclopedias will appear,” said Bush, “ready-made with a mesh of associative trails running through them, ready to be dropped into the memex”. Fascinatingly, this isn’t far from today’s Wikipedia.
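In modern terms, Bush’s associative trails amount to a simple many-to-many linking structure. The sketch below is a hypothetical rendering of the idea in Python, with invented document titles and trail names; it is not a description of anything Bush built.

```python
# A toy model of memex-style associative trails: each trail is an ordered chain
# of documents, and the same document can sit on many trails at once.
# Document titles and trail names are invented for illustration.
from collections import defaultdict

trails: dict[str, list[str]] = defaultdict(list)

def add_to_trail(trail_name: str, document: str) -> None:
    """Append a document to a named trail (Bush's associative indexing)."""
    trails[trail_name].append(document)

add_to_trail("cave art", "Lascaux field notes")
add_to_trail("cave art", "Pigment chemistry survey")
add_to_trail("archaeology methods", "Lascaux field notes")   # same document, second trail

def trails_containing(document: str) -> list[str]:
    """List every trail that can reach a document, unlike single-subject filing."""
    return [name for name, docs in trails.items() if document in docs]

print(trails_containing("Lascaux field notes"))   # ['cave art', 'archaeology methods']
```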

Where it led

Bush thought the memex would help researchers to think in a more natural, associative way that would be reflected in their records. He is thought to have inspired the American inventors Ted Nelson and Douglas Engelbart, who in the 1960s independently developed hypertext systems, in which documents contained hyperlinks that could directly access other documents. These became the foundation of the world wide web as we know it.

Beyond the practicalities of having easy access to so much information, Bush believed that the added value in the memex lay in making it easier for users to manipulate ideas and spark new ones. His essay drew a distinction between repetitive and creative thought, and foresaw that there would soon be new “powerful mechanical aids” to help with the repetitive variety.

He was perhaps mostly thinking about mathematics, but he left the door open to other thought processes. And 80 years later, with AI in our pockets, we’re automating far more thinking than was ever possible with a calculator.

If this sounds like a happy ending, Bush did not sound overly optimistic when he revisited his own vision in his 1970 book Pieces of the Action. In the intervening 25 years, he had witnessed technological advances in areas like computing that were bringing the memex closer to reality.

Yet Bush felt that the technology had largely missed the philosophical intent of his vision – to enhance human reasoning and creativity:

In 1945, I dreamed of machines that would think with us. Now, I see machines that think for us – or worse, control us.

Bush would die just four years later at the age of 84, but these concerns still feel strikingly relevant today. While it’s great that we do not need to search for a book by flipping through index cards in chests of drawers, we might feel more uneasy about machines doing most of the thinking for us.

[Image: a phone screen with AI apps. Caption: Just 80 years after Bush proposed the memex, AIs on smartphones are an everyday thing. Credit: jackpress]

Is this technology enhancing and sharpening our skills, or is it making us lazy? No doubt everyone is different, but the danger is that whatever skills we leave to the machines, we eventually lose, and younger generations may not even get the opportunity to learn them in the first place.

The lesson from As We May Think is that a purely technical solution like the memex is not enough. Technology still needs to be human-centred, underpinned by a philosophical vision. As we contemplate a great automation in human thinking in the years ahead, the challenge is to somehow protect our creativity and reasoning at the same time.




AI Research

China’s Moonshot AI releases open-source model to reclaim market position


BEIJING (Reuters) -Chinese artificial intelligence startup Moonshot AI released a new open-source AI model on Friday, joining a wave of similar releases from local rivals, as it seeks to reclaim its position in the competitive domestic market.

The model, called Kimi K2, features enhanced coding capabilities and excels at general agent tasks and tool integration, allowing it to break down complex tasks more effectively, the company said in a statement.

Moonshot claimed the model outperforms mainstream open-source models, including DeepSeek’s V3, in some areas, and rivals the capabilities of leading U.S. models, such as those from Anthropic, in certain functions such as coding.

The release follows a trend among Chinese companies toward open-sourcing AI models, contrasting with many U.S. tech giants like OpenAI and Google that keep their most advanced AI models proprietary. Some American firms, including Meta Platforms, have also released open-source models.

Open-sourcing allows companies to showcase their technological capabilities, expand their developer communities and extend their global influence, a strategy likely to help China counter U.S. efforts to limit Beijing’s tech progress.

Other Chinese companies that have released open-source models include DeepSeek, Alibaba, Tencent and Baidu.

Founded in 2023 by Tsinghua University graduate Yang Zhilin, Moonshot is among China’s prominent AI startups and is backed by internet giants including Alibaba.

The company gained prominence in 2024 when users flocked to its platform for its long-text analysis capabilities and AI search functions.

However, its standing has declined this year following DeepSeek’s release of low-cost models, including the R1 model launched in January that disrupted the global AI industry.

Moonshot’s Kimi application ranked third in monthly active users last August but dropped to seventh place by June, according to aicpb.com, a Chinese website that tracks AI products.

(Reporting by Liam Mo and Brenda Goh, Editing by Louise Heavens)


