Are AI existential risks real—and what should we do about them?

In March 2023, the Future of Life Institute issued an open letter asking artificial intelligence (AI) labs to “pause giant AI experiments.” The animating concern was: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Two months later, hundreds of prominent people signed onto a one-sentence statement on AI risk asserting that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

This concern about existential risk (“x-risk”) from highly capable AI systems is not new. In 2014, famed physicist Stephen Hawking, alongside leading AI researchers Max Tegmark and Stuart Russell, warned about superintelligent AI systems “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” 

Policymakers are inclined to dismiss these concerns as overblown and speculative. Despite the emphasis on AI safety at international AI summits in 2023 and 2024, policymakers moved away from existential risks at this year’s AI Action Summit in Paris. For the time being—and in the face of increasingly limited resources—this is all to the good. Policymakers and AI researchers should devote the bulk of their time and energy to addressing more urgent AI risks.

But it is crucial for policymakers to understand the nature of the existential threat and recognize that as we move toward generally intelligent AI systems—ones that match or surpass human intelligence—developing measures to protect human safety will become necessary. While not the pressing problem alarmists think it is, the challenge of existential risk from highly capable AI systems must eventually be faced and mitigated if AI labs want to develop generally intelligent systems and, eventually, superintelligent ones.


How close are we to developing AI models with general intelligence? 

AI firms are not very close to developing an AI system with capabilities that could threaten us. This assertion runs against a consensus in the AI industry that we are just years away from developing powerful, transformative systems capable of a wide variety of cognitive tasks. In a recent article, New Yorker staff writer Joshua Rothman sums up this industry consensus that scaling will produce artificial general intelligence (AGI) “by 2030, or sooner.” 

The standard argument prevalent in industry circles was laid out clearly in a June 2024 essay by AI researcher Leopold Aschenbrenner. He argues that AI capabilities increase with scale—the size of the training data, the number of parameters in the model, and the amount of compute used to train it. He also draws attention to increasing algorithmic efficiency. Finally, he notes that latent capabilities can be “unhobbled” through techniques such as chain-of-thought reasoning, reinforcement learning from human feedback, and embedding AI models in larger useful systems.
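
To make the scaling claim concrete, here is a minimal sketch of a Chinchilla-style scaling law, the kind of empirical curve this argument rests on. The constants approximate published fits and are used purely for illustration; they are not figures from Aschenbrenner’s essay.

```python
# Illustrative sketch of a Chinchilla-style scaling law:
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N is the parameter count and D is the number of training tokens.
# The constants below approximate published fits and are for illustration only.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model of a given scale."""
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"{n:.0e} params, {d:.0e} tokens -> predicted loss {predicted_loss(n, d):.2f}")

# Each tenfold increase in scale lowers the predicted loss by a smaller
# absolute amount, which is the "diminishing returns" pattern the article
# returns to below.
```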

Part of the reason for this confidence is that AI improvements seemed to exhibit exponential growth over the last few years. This past growth suggests that transformational capabilities could emerge unexpectedly and quite suddenly. This is in line with some well-known examples of the surprising effects of exponential growth. In “The Age of Spiritual Machines,” futurist Ray Kurzweil tells the story of doubling the number of grains of rice on successive chessboard squares, starting with a single grain. After 63 doublings, the last square alone holds more than nine quintillion grains, and the board as a whole more than 18 quintillion. The hypothetical example of filling Lake Michigan by doubling (every 18 months) the number of ounces of water added to the lakebed makes the same point. After 60 years there’s almost nothing, but by 80 years there’s 40 feet of water. In five more years, the lake is filled.
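
A few lines of arithmetic, sketched below, reproduce the numbers behind both examples.

```python
# The chessboard: one grain on the first square, doubling on each of the
# remaining 63 squares.
last_square = 2 ** 63          # roughly 9.2 quintillion grains
whole_board = 2 ** 64 - 1      # roughly 18.4 quintillion grains in total
print(f"last square: {last_square:,}")
print(f"whole board: {whole_board:,}")

# The lake: a quantity doubling every 18 months has grown 2**40-fold after
# 60 years, but a further 2**16-fold growth over the next 24 years dwarfs
# everything that came before, because each doubling adds as much as all
# previous doublings combined.
print(f"doublings in 60 years: {60 * 12 // 18}, in 84 years: {84 * 12 // 18}")
```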

These examples suggest to many that exponential quantitative growth in AI achievements can create imperceptible change that suddenly blossoms into transformative qualitative improvement in AI capabilities.  

But these analogies are misleading. Exponential growth in a finite system cannot go on forever, and there is no guarantee that it will continue in AI development even into the near future. One of the key developments of 2024 was the apparent recognition by industry that training-time scaling has hit a wall and that further increases in data, parameters, and compute produce diminishing returns in capability improvements. The industry apparently hopes that exponential growth in capabilities will emerge from increases in inference-time compute. But so far, those improvements have been smaller than earlier gains and limited to science, math, logic, and coding—areas where reinforcement learning can produce improvements since the answers are clear and knowable in advance.

Today’s large language models (LLMs) show no signs of the exponential improvements characteristic of 2022 and 2023. OpenAI’s GPT-5 project ran into performance trouble and was ultimately released earlier this year as GPT-4.5, representing only a “modest” improvement. That model made up answers about 37% of the time, an improvement over the company’s faster, less expensive GPT-4o model, released last year, which hallucinated nearly 60% of the time. But OpenAI’s latest reasoning systems hallucinate at a higher rate than the company’s previous systems.

Many in the AI research community think AGI will not emerge from the currently dominant machine learning approach that relies on predicting the next word in a sentence. In a report issued in March 2025, the Association for the Advancement of Artificial Intelligence (AAAI), a professional association of AI researchers established in 1979, found that 76% of the 475 AI researchers surveyed thought that “scaling up current AI approaches” would be “unlikely” or “very unlikely” to produce general intelligence.

These doubts about whether current machine learning paradigms are sufficient to reach general intelligence rest on widely understood limitations in current AI models that the report outlines. These limitations include difficulties in long-term planning and reasoning, generalization beyond training data, continual learning, memory and recall, causal and counterfactual reasoning, and embodiment and real-world interaction.  

These researchers think that the current machine learning paradigm has to be supplemented with other approaches. Some AI researchers such as cognitive scientist Gary Marcus think a return to symbolic reasoning systems will be needed, a view that AAAI also suggests.  

Others think the roadblock is the focus on language. In a 2023 paper, computer scientist Jacob Browning and Meta’s Chief AI Scientist Yann LeCun reject the linguistic approach to general intelligence. They argue, “A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.” They recommend approaching general intelligence through machine interaction directly with the environment—“to focus on the world being talked about, not the words themselves.”  

Philosopher Shannon Vallor also rejects the linguistic approach, arguing that general intelligence presupposes sentience and that the internal structures of LLMs contain no mechanisms capable of supporting experience, as opposed to elaborate calculations that mimic human linguistic behavior. Conscious entities at the human level, she points out, desire, suffer, love, grieve, hope, care, and doubt. But nothing in LLMs is designed to register these experiences, or others like them such as pain or pleasure, or “what it is like” to taste something or to remember a deceased loved one. They lack even the simplest level of physical sensation: they have, for instance, no pain receptors to generate the feeling of pain. Being able to talk fluently about pain is not the same as having the capacity to feel it. The fact that pain can occasionally be experienced without the triggering of pain receptors, as in phantom limb cases, in no way supports the idea that a system with no pain receptors at all could nevertheless experience real, excruciating pain. All LLMs can do is talk about experiences they are plainly incapable of having themselves.

In a forthcoming book chapter, DeepMind researcher David Silver and Turing Award winner Richard S. Sutton endorse this focus on real-world experience as the way forward. They argue that AI researchers will make significant progress toward developing a generally intelligent agent only with “data that is generated by the agent interacting with its environment.” The generation of these real-world “experiential” datasets that can be used for AI training is just beginning. 

A recent paper from Apple researchers suggests that today’s “reasoning” models do not really reason and that both reasoning and traditional generative AI models collapse completely when confronted with complicated versions of puzzles like Tower of Hanoi.  

LeCun probably has the best summary of the prospects for the development of general intelligence. In 2024, he remarked that it “is not going to be an event… It is going to take years, maybe decades… The history of AI is this obsession of people being overly optimistic and then realising that what they were trying to do was more difficult than they thought.”


From general intelligence to superintelligence

Philosopher Nick Bostrom defines superintelligence as a computer system “that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” Once AI developers have improved the capabilities of AI models so that it makes sense to call them generally intelligent, how do developers make these systems more capable than humans? 

The key step is to instruct generally intelligent models to improve themselves. Once so instructed, AI models would use their superior learning capabilities to improve themselves much faster than humans could. Soon, they would far surpass human capacities through a process of recursive self-improvement.
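
A toy simulation, sketched below purely for intuition, shows why this process is expected to accelerate: if the fraction by which capability improves each cycle is itself proportional to current capability, progress looks flat for a long stretch and then runs away. The growth coefficient is an arbitrary assumption, not an empirical estimate.

```python
# Toy model of recursive self-improvement, for intuition only.
# Assumption (arbitrary, not an empirical estimate): each research cycle
# multiplies capability by (1 + k * capability), so more capable systems
# improve themselves faster. Growth crawls for hundreds of cycles, then
# runs away in a handful of them.
k = 0.001
capability = 1.0  # stipulated "competent human researcher" baseline

for cycle in range(1, 3001):
    capability *= 1.0 + k * capability
    if cycle % 250 == 0 or capability > 1000:
        print(f"cycle {cycle:4d}: capability {capability:12.2f}")
    if capability > 1000:
        break
```

Whether real AI systems could ever follow such a curve is precisely what the scaling debate above calls into question.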

AI 2027, a recent forecast that has received much attention in the AI community and beyond, relies crucially on this idea of recursive self-improvement. Its key premise is that by the end of 2025, AI agents will have become “good at many things but great at helping with AI research.” Once involved in AI research, AI systems recursively improve themselves at an ever-increasing pace and soon become far more capable than humans.

Computer scientist I.J. Good noticed this possibility back in 1965, saying of an “ultraintelligent machine” that it “could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” In 1993, computer scientist and science fiction writer Vernor Vinge described this possibility as a coming “technological singularity” and predicted that “Within thirty years, we will have the technological means to create superhuman intelligence.”


What’s the problem with a superintelligent AI model? 

Generally intelligent AI models, then, might quickly become superintelligent. Why would this be a problem rather than a welcome development?  

AI models, even superintelligent ones, do not do anything unless they are told to by humans. They are tools, not autonomous beings with their own goals and purposes. Developers must build purposes and goals into them to make them function at all, and this can make it seem to users as if they have generated these purposes all by themselves. But this is an illusion. They will do what human developers and deployers tell them to do.  

So, it would seem that creating superintelligent tools that could do our bidding is all upside and without risk. When AI systems become far more capable than humans are, they will be even better at performing tasks that allow humans to flourish. 

But this benign perspective ignores a major unsolved problem in AI research—the alignment problem. Developers have to be very careful what tasks they give to a generally intelligent or superintelligent system, even if it lacks genuine free will and autonomy. If developers specify the tasks in the wrong way, things could go seriously wrong. 

Developers of narrow AI systems are already struggling with the problems of task misspecification and unwanted subgoals. When they ask a narrow system to do something, they sometimes describe the task in a way that lets the system do what it was told to do rather than what the developers wanted it to do. The example of using reinforcement learning to teach an agent to compete in a computer-based race makes the point. If the developers train the agent to accumulate as many game points as possible, they might think they have programmed the system to win the race, which is the apparent objective of the game. Instead, the agent learned to accumulate points without winning the race, going in circles to collect them rather than rushing to the finish.
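
A stripped-down sketch of that failure mode follows; the reward values are invented for illustration and are not taken from the actual game.

```python
# Stripped-down illustration of reward misspecification (invented numbers,
# not the actual game): the developers' proxy reward counts game points,
# while their real objective is that the agent finishes the race.
def proxy_reward(point_pickups_collected: int, finished_race: bool) -> int:
    POINTS_PER_PICKUP = 10   # points the agent can farm by circling
    FINISH_BONUS = 50        # one-time bonus for crossing the finish line
    return point_pickups_collected * POINTS_PER_PICKUP + (FINISH_BONUS if finished_race else 0)

intended_policy = proxy_reward(point_pickups_collected=3, finished_race=True)    # race to the end
learned_policy = proxy_reward(point_pickups_collected=20, finished_race=False)   # circle forever

print(f"intended behavior scores {intended_policy}")       # 80
print(f"reward-hacking behavior scores {learned_policy}")  # 200
# An optimizer maximizing the proxy reward prefers circling, so the agent
# does exactly what it was told to do and not what the developers wanted.
```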

Another example illustrates that AI models can use strategic deception to achieve a goal in ways that researchers did not anticipate. Researchers instructed GPT-4 to log onto a system protected by a CAPTCHA test, without giving it any guidance on how to do so. The AI model accomplished the task by pretending to be a human with a vision impairment and hiring a TaskRabbit worker to solve the CAPTCHA for it. The researchers did not want the model to lie, but it learned to do so in order to complete its assigned task.

Anthropic’s recent system card for its Claude Opus 4 and Claude Sonnet 4 models reveals further misalignment issues: in one test scenario, the model threatened to reveal a researcher’s extramarital affair if he shut it down before it had completed its assigned tasks.

Because these are narrow systems, dangerous outcomes are limited to particular domains if developers fail to resolve alignment problems. Even when the consequences are dire, they are limited in scope.  

The situation is vastly different for generally intelligent and superintelligent systems. This is the point of the well-known paper clip problem described in philosopher Nick Bostrom’s 2014 book, “Superintelligence.” Suppose the goal given to a superintelligent AI model is to produce paper clips. What could go wrong? The result, as described by economist Joshua Gans, is that the model will appropriate resources from all other activities, and soon the world will be inundated with paper clips. But it gets worse. People would want to stop this AI, but it is single-minded and would realize that being stopped would subvert its goal. Consequently, the AI would become focused on its own survival. It starts off competing with humans for resources, but it would then fight humans because they are a threat. This AI is much smarter than humans, so it is likely to win that battle.

Yoshua Bengio echoes this crucial concern about dangerous subgoals. Once developers set goals and rewards, a generally intelligent system would “figure out how to achieve these given goals and rewards, which amounts to forming its own subgoals.” The “ability to understand and control its environment” is one such dangerous instrumental goal, while the subgoal of survival creates “the most dangerous scenario.”

Until some progress is made in addressing misalignment problems, developing generally intelligent or superintelligent systems seems to be extremely risky. The good news is that the potential for developing general intelligence and superintelligence in AI models seems remote. While the possibility of recursive self-improvement leading to superintelligence reflects the hope of many frontier AI companies, there is not a shred of evidence that today’s glitchy AI agents are close to conducting AI research even at the level of a normal human technician. This means there is still plenty of time to address the problem of aligning superintelligence with values that make it safe for humans. 

It is not today’s most urgent AI research priority. As AI researcher Andrew Ng is reputed to have said back in 2015, worrying about existential risk from AI today is a bit like worrying about overpopulation on Mars.

Nevertheless, the general problem of AI model misalignment is real and the object of important research that can and should continue. This more mundane work of mitigating today’s risks of model misalignment might also provide valuable clues for dealing with the more distant existential risks that could arise as researchers continue down the path of developing highly capable AI systems that surpass current human limitations.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).




Chinese Startup DeepSeek Challenges Silicon Valley AI Dominance with Research Focus



In the rapidly evolving world of artificial intelligence, Chinese startup DeepSeek is emerging as a formidable player, prioritizing cutting-edge research over immediate commercial gains. Founded in 2023, the company has quickly gained attention for its innovative approaches to large language models, challenging the dominance of Silicon Valley giants. Unlike many U.S.-based firms that chase profitability through aggressive monetization, DeepSeek’s strategy emphasizes foundational advancements in AI architecture, drawing praise from industry observers for its long-term vision.

This focus on research has allowed DeepSeek to develop models that excel in efficiency and performance, particularly in training and inference processes. For instance, their proprietary techniques in sparse activation and optimized




3 ways AI kiosks are rewriting the civic engagement playbook



Across the country, public agencies face a common challenge: how to deliver vital services equitably in the face of limited resources, rising expectations, and increasingly diverse populations. 

Traditional government service models — centralized, bureaucratic, and often paper-based — struggle to keep pace with the needs of rural residents, multilingual communities and military families, whose mobility and time constraints demand flexibility.

But a new generation of civic infrastructure is beginning to take shape, one that blends artificial intelligence with physical access points in the communities that need them most. Intelligent self-service kiosks are emerging as a practical tool for expanding access to justice and other essential services, without adding administrative burden or requiring residents to navigate unfamiliar digital portals at home.

El Paso County, Texas, offers one compelling case study. In June 2024, the County launched a network of AI-enabled kiosks that allow residents to complete court-related tasks, from submitting forms and payments to accessing legal guidance, in both English and Spanish. The kiosks are placed in strategic community locations, including the Tigua Indian Reservation and Fort Bliss, enabling access where it’s needed most.

Three lessons from this rollout may prove instructive for government leaders elsewhere:

1. Meet People Where They Are…Literally

Too often, civic access depends on residents coming to centralized locations during limited hours. For working families, rural residents and military personnel, that model simply doesn’t work. 

Placing kiosks in trusted, high-traffic locations like base welcome centers or community annexes removes that barrier and affirms a simple principle: access shouldn’t be an ordeal.

At Fort Bliss, for example, the kiosk allows service members to fulfill court-related obligations without taking leave or leaving the base at all. In just one month, nearly 500 military residents used the kiosk. Meanwhile, over 670 transactions have been completed on the Ysleta del Sur Pueblo (also known as the Tigua Indian Reservation), where access to public transportation is a challenge.

2. Design for Inclusion, Not Just Efficiency

While technology can streamline service delivery, it can also unintentionally exclude those with limited digital literacy or English proficiency. Multilingual AI interfaces and accessible user flows are both technical features and equity enablers.

In El Paso County, 20% of kiosk interactions have occurred in Spanish. This uptake highlights the importance of designing systems that reflect the communities they serve, rather than assuming one-size-fits-all access.

3. Think Beyond Digitization and Aim for Democratization

Many digital transformation efforts focus on moving services online, but that shift often leaves behind those without broadband, personal devices, or comfort with navigating complex websites. By embedding smart kiosks in the public realm, governments can provide digital tools without requiring digital privilege.

Moreover, these tools can reduce workload for front-line staff by automating routine transactions, freeing up human workers to focus on complex or high-touch cases. In that way, technology doesn’t replace the human element; it protects and supports it.

The El Paso County model is not the first of its kind, but its thoughtful implementation across geographically and demographically diverse communities offers a replicable roadmap. Other jurisdictions, from Miami to Ottawa County, Michigan, are piloting similar solutions tailored to local needs.

Ultimately, the path forward isn’t about flashy tech or buzzwords. It’s about pragmatism. It’s about recognizing that trust in government is built not through rhetoric but through responsiveness, and that sometimes, responsiveness looks like a kiosk in a community center that speaks your language and knows what you need.

For public officials considering a similar approach, the advice is simple: start with the barriers your residents face, then work backward. Let inclusion, not efficiency, guide your design. And remember that innovation in public service doesn’t always mean moving faster. Sometimes, it means stopping to ask who’s still being left behind.

Pritesh Bhavsar is the founding technology leader at Advanced Robot Solutions.






GreetEat Corporation (GEAT) Announces Official Re-Launch of Wall Street Stats Mobile Applications with Advanced AI and Machine Learning Features



RENO, Nev., Sept. 02, 2025 (GLOBE NEWSWIRE) — GreetEat Corporation (OTC: GEAT), a forward-thinking technology company dedicated to building next-generation platforms, today announced the official re-launch of its subsidiary Wall Street Stats (WallStreetStats.io) applications on both iOS and Android. The updated apps deliver a powerful suite of new tools designed to empower investors with deeper insights, smarter analytics, and a cutting-edge user experience.

The new release introduces an upgraded platform driven by artificial intelligence and machine learning, providing users with:

  • Detailed Quotes & Company Profiles – Comprehensive financial data with intuitive visualization.
  • Summarized Market Intelligence – AI-powered data aggregation and automated summarization for faster decision-making.
  • Sentiment Analysis via Reddit & Social Platforms – Machine learning models that detect, classify, and quantify investor sentiment in real time.
  • Trending Stocks, Top Gainers, Top Losers, and Most Active Lists – AI-curated market movers updated dynamically throughout the day.
  • Smart Watchlists – Personalized watchlists enhanced by predictive analytics and recommendation algorithms.
  • AI-Driven Market Predictions – Leveraging natural language processing (NLP), deep learning, and behavioral pattern recognition to uncover emerging investment opportunities.

“Wall Street Stats was designed to go beyond traditional financial data and offer an AI-first experience that empowers both retail and professional investors,” said Victor Sima, CTO of GreetEat Corporation. “With this re-launch, we’ve combined the best of real-time market intelligence with machine learning-powered insights that make data more actionable, intuitive, and predictive. This is just the beginning of our vision to democratize Wall Street-level analytics for everyone.”

The platform’s enhanced features are aimed at giving investors a competitive edge by uncovering hidden patterns, predicting momentum, and providing smarter investment signals. With natural language processing, predictive modeling, and real-time data analytics, Wall Street Stats represents a new era in financial technology innovation.

The applications are now available for download on both the Apple App Store and Google Play Store.

About GreetEat Corporation
GreetEat Corporation (OTC: GEAT) is a technology-driven platform designed to bring people together through virtual dining. Whether for business meetings, celebrations, or personal connections, GreetEat blends video conferencing with meal delivery to create meaningful, shared experiences anywhere in the world. In addition to GreetEat.com, the company also owns WallStreetStats.io, a cutting-edge fintech app that leverages AI and machine learning to analyze social sentiment, market trends, and trading signals in real time, available on both Android and iOS stores.

For Investor Relations or Media Inquiries:

GreetEat Corporation
Email: investors@GreetEat.com
Website: www.GreetEat.com

Connect with GreetEat Corporation

Website: www.GreetEat.com
Website: www.WallStreetStats.io

Download the apps with the below links:

Apple App Store and Google Play Store.

Forward-Looking Statements: This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements are based on current expectations, estimates, and projections about the company’s business and industry, management’s beliefs, and certain assumptions made by the management. Such statements involve risks and uncertainties that could cause actual results to differ materially from those in the forward-looking statements. The company undertakes no obligation to update or revise any forward-looking statements, whether as a result of new information, future events, or otherwise.



