

New Empirical Research Report on AI-Driven Drug Repurposing



AI-Driven Drug Repurposing Solutions Market

The worldwide “AI-Driven Drug Repurposing Solutions Market” 2025 research report presents a professional and complete analysis of the global AI-Driven Drug Repurposing Solutions market in the current situation. The report covers development plans and policies along with manufacturing processes and price structures, and offers an analytical view of the industry by studying factors such as market growth, consumption volume, market size, revenue, market share, market trends, and industry cost structures over the forecast period from 2025 to 2032. It includes in-depth research on the state of the market and the global competitive landscape, and analyzes the market’s present potential and future prospects from various angles in detail.

The global AI-Driven Drug Repurposing Solutions market report covers international markets as well as development trends, competitive landscape analysis, and the development status of key regions. Development policies and plans are discussed, and manufacturing processes and cost structures are analyzed. The report additionally states import/export consumption, supply and demand figures, cost, price, revenue, and gross margins, and provides a basic overview of the industry, including definitions, classifications, applications, and industry chain structure.

A sample report can be requested (use a corporate email ID for higher priority) at: https://www.worldwidemarketreports.com/sample/1032479

The report also provides in-depth profiles of the key business players:

BenevolentAI Limited

Insilico Medicine Inc.

Atomwise Inc.

Healx Ltd.

Cyclica Inc.

Recursion Pharmaceuticals Inc.

BioXcel Therapeutics Inc.

Iktos SAS

Standigm Inc.

Berg LLC

Valo Health Inc.

twoXAR Pharmaceuticals Inc.

Evaxion Biotech A/S

Lantern Pharma Inc.

Exscientia plc

Verge Genomics Inc.

Deep Genomics Inc.

Owkin Inc.

Segmentation by Type:

Machine learning platforms

Knowledge graphs

Deep learning models

Segmentation by Applications:

Rare diseases

Oncology

Infectious diseases

To stay ahead of your competitors, request a sample copy at: https://www.worldwidemarketreports.com/sample/1032479

Report Drivers & Trends Analysis:

The report also discusses the factors driving and restraining market growth, as well as their specific impact on demand over the forecast period. Also highlighted in this report are growth factors, developments, trends, challenges, limitations, and growth opportunities. This section highlights emerging AI-Driven Drug Repurposing Solutions Market trends and changing dynamics. Furthermore, the study provides a forward-looking perspective on various factors that are expected to boost the market’s overall growth.

Competitive Landscape Analysis:

Competition is a central part of any market research analysis. This section of the report provides a competitive scenario and portfolio of the AI-Driven Drug Repurposing Solutions Market’s key players. Major and emerging market players are closely examined in terms of market share, gross margin, product portfolio, production, revenue, sales growth, and other significant factors. This information will assist players in studying the critical strategies employed by market leaders and in planning counterstrategies to gain a competitive advantage.

Key Data Covered in the AI-Driven Drug Repurposing Solutions Market:

✤ CAGR of the Market during the forecast period.

✤ Detailed information on factors that will drive the growth of the Market between 2025 and 2032.

✤ Precise estimation of the market size and its contribution to the parent market.

✤ Accurate predictions about upcoming trends and changes in consumer behavior.

✤ Growth of the Market across APAC, North America, Europe, Middle East and Africa, and South America.

✤ A thorough analysis of the market’s competitive landscape and detailed information about vendors.

✤ Comprehensive analysis of factors that will challenge the growth of the AI-Driven Drug Repurposing Solutions Market vendors.

Here is an overview of the key points covered by the study:

➤ The study includes a section that breaks down strategic developments in current and upcoming R&D, new product launches, collaborations, regional expansion, and mergers & acquisitions.

➤ The research focuses on essential market metrics such as revenue, product cost, capacity and utilization rates, import/export rates, supply/demand figures, market share, and CAGR.

➤ The study is a collection of analyzed data and insights obtained through a combination of analytical tools and an in-house research process.

➤ The market can be divided into four regions in accordance with the regional breakdown: North American markets, European markets, Asian markets, and the Rest of the World.

Reasons to Buy:

✅ Save time on entry-level research by identifying the growth, size, leading players, and segments of the global AI-Driven Drug Repurposing Solutions Market.

✅ Highlights key business priorities to guide companies as they reform their business strategies and establish themselves across a wide geography.

✅ The key findings and recommendations highlight crucial progressive industry trends in the AI-Driven Drug Repurposing Solutions Market, allowing players to develop effective long-term strategies to grow their market revenue.

✅ Develop/modify business expansion plans by using substantial growth offerings in developed and emerging markets.

✅ Scrutinize in-depth global market trends and outlook coupled with the factors driving the market, as well as those restraining the growth to a certain extent.

✅ Enhance the decision-making process by understanding the strategies that underpin commercial interest with respect to products, segmentation, and industry verticals.

For in-depth competitive analysis, buy now and Get Up-to 70% Discount At: https://www.worldwidemarketreports.com/promobuy/1032479

FAQs:

[1] Who are the global manufacturers of AI-Driven Drug Repurposing Solutions, what are their share, price, volume, competitive landscape, SWOT analysis, and future growth plans?

[2] What are the key drivers, growth/restraining factors, and challenges of AI-Driven Drug Repurposing Solutions?

[3] How is the AI-Driven Drug Repurposing Solutions industry expected to grow in the projected period?

[4] How has COVID-19 affected the AI-Driven Drug Repurposing Solutions industry and is there any change in the regulatory policy framework?

[5] What are the key areas of applications and product types of the AI-Driven Drug Repurposing Solutions industry that can expect huge demand during the forecast period?

[6] What are the key offerings and new strategies adopted by AI-Driven Drug Repurposing Solutions players?

Author of this Marketing PR:

Priya Pandey is a dynamic and passionate PR writer with over three years of expertise in content writing and proofreading. Holding a bachelor’s degree in biotechnology, Priya has a knack for making content engaging. Her diverse portfolio includes writing content and documents across different industries, including food and beverages, information technology, healthcare, and chemicals and materials. Priya’s meticulous attention to detail and commitment to excellence make her an invaluable asset in the world of content creation and refinement.

☎ Contact Us:

Mr. Shah

Worldwide Market Reports,

Tel: U.S. +1-415-871-0703

U.K.: +44-203-289-4040

Australia: +61-2-4786-0457

India: +91-848-285-0837

Email: sales@worldwidemarketreports.com

Website: https://www.worldwidemarketreports.com/

About WMR:

Worldwide Market Reports is a global business intelligence firm offering market intelligence reports, databases, and competitive intelligence reports. We offer reports across various industry domains and an exhaustive list of sub-domains through consultants with more than 15 years of experience in each industry vertical. With more than 300 analysts and consultants on board, the company offers in-depth market analysis and helps clients make vital decisions impacting their revenues and growth roadmaps.

This release was published on openPR.





Artificial Intelligence (AI) in Radiology Market to Reach USD 4236 Million by 2031 | 9% CAGR Growth Driven by Cloud & On-Premise Solutions



Artificial Intelligence in Radiology Market is Segmented by Type (Cloud Based, On-Premise), by Application (Hospital, Biomedical Company, Academic Institution).

BANGALORE, India, July 11, 2025 /PRNewswire/ — The Global Market for Artificial Intelligence in Radiology was valued at USD 2334 Million in the year 2024 and is projected to reach a revised size of USD 4236 Million by 2031, growing at a CAGR of 9.0% during the forecast period.
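As a quick sanity check, the stated CAGR can be recomputed from the report’s own endpoints. Below is a minimal Python sketch using the standard CAGR formula (the figures are the report’s; its internal methodology may differ):

```python
# Recompute the CAGR implied by the report's 2024 and 2031 market sizes.
start_value = 2334.0   # USD millions, 2024
end_value = 4236.0     # USD millions, 2031
years = 2031 - 2024    # seven-year forecast window

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~8.9%, consistent with the stated 9.0%
```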

Claim Your Free Report: https://reports.valuates.com/request/sample/QYRE-Auto-35Z13861/Global_Artificial_Intelligence_in_Radiology_Market

Major Factors Driving the Growth of AI in Radiology Market:

The Artificial Intelligence in Radiology market is rapidly evolving into a cornerstone of modern diagnostic medicine. With its ability to improve accuracy, reduce turnaround time, and support clinical decision-making, AI is transforming radiological practices globally. The market is driven by both technology vendors and healthcare providers looking to optimize imaging workflows and outcomes. Continued innovation, clinical validation, and regulatory alignment are further solidifying AI’s role in the radiology ecosystem. As imaging demands increase and digital health ecosystems mature, AI in radiology is poised for robust growth across both developed and emerging healthcare markets.

Unlock Insights: View Full Report Now! https://reports.valuates.com/market-reports/QYRE-Auto-35Z13861/global-artificial-intelligence-in-radiology

TRENDS INFLUENCING THE GROWTH OF THE ARTIFICIAL INTELLIGENCE (AI) IN RADIOLOGY MARKET:

Cloud-based platforms are significantly accelerating the growth of the Artificial Intelligence (AI) in Radiology market by offering scalable, real-time, and cost-effective infrastructure for medical imaging analysis. These platforms allow radiologists to upload, process, and analyze large volumes of imaging data across locations without investing in expensive on-premise systems. Cloud computing supports collaborative diagnosis and second opinions, making it easier for specialists worldwide to access and interpret radiological findings. AI algorithms hosted on the cloud continuously learn from diverse datasets, improving diagnostic accuracy. Additionally, the cloud simplifies data integration from electronic health records (EHRs), enhancing context-based imaging interpretation. This flexibility and accessibility make cloud-based models ideal for hospitals and diagnostic centers aiming for high-efficiency imaging operations, thereby driving market expansion.

On-premise deployment continues to play a critical role in the growth of the AI in Radiology market, especially for institutions emphasizing strict data security, regulatory compliance, and control. Hospitals with high patient volumes and in-house IT infrastructure often prefer on-premise AI solutions to ensure that sensitive imaging data stays within their private network. These systems offer faster processing speeds due to localized computing, reducing latency in real-time diagnostic decisions. Furthermore, institutions with proprietary imaging protocols benefit from customizable on-premise AI models trained on institution-specific data, enhancing diagnostic relevance. Despite the popularity of cloud solutions, the need for secure, localized, and tailored AI applications sustains strong demand for on-premise setups in high-end academic hospitals and specialized radiology centers.

Biomedical companies are key drivers of growth in the AI in Radiology market by developing next-generation imaging tools that integrate AI to enhance diagnostic performance. These companies are focusing on innovating AI-powered image reconstruction, detection, and segmentation tools that assist radiologists in identifying subtle anomalies with greater precision. Their collaboration with software developers, radiology experts, and hospitals fuels R&D in algorithm refinement and clinical validation. Many biomedical firms are also embedding AI directly into diagnostic hardware, creating intelligent imaging systems capable of real-time interpretation. This vertical integration of hardware and AI enhances efficiency and diagnostic confidence. Their commitment to improving patient outcomes and reducing diagnostic errors ensures consistent market advancement across clinical applications.

One of the major drivers is the rising need for early diagnosis and personalized treatment plans. AI in radiology enables rapid detection of minute anomalies in imaging data, which may be missed by the human eye, especially in early disease stages. This helps clinicians begin treatment sooner, improving patient outcomes. AI systems can also link imaging findings with genomic and clinical data to support tailored therapies. The push for predictive medicine and minimally invasive procedures reinforces the adoption of AI in radiology, particularly in oncology and neurology. As the healthcare industry leans towards precision care, AI becomes indispensable in modern diagnostic workflows.

Radiology departments globally are under immense pressure due to the increasing volume of imaging studies and a shortage of skilled radiologists. AI serves as a supportive solution by automating repetitive tasks like image labeling, prioritizing critical cases, and pre-analyzing scans to reduce turnaround time. This alleviates the burden on radiologists and helps maintain diagnostic quality despite workforce constraints. AI also improves workflow efficiency by integrating with radiology information systems (RIS) and picture archiving and communication systems (PACS). With healthcare systems strained by aging populations and rising chronic diseases, AI tools offer scalable solutions to meet diagnostic demand without compromising accuracy.

Recent progress in deep learning, a subfield of AI, has significantly enhanced the performance of radiology applications. These algorithms can analyze complex imaging patterns with remarkable accuracy and continue to learn from new datasets. With access to large annotated datasets and computing power, deep learning models can now rival or even outperform human radiologists in specific diagnostic tasks like tumor detection or hemorrhage recognition. The continuous refinement of these models is enabling faster, more consistent, and reproducible imaging interpretation. As algorithm transparency and explainability improve, regulatory acceptance and clinical adoption are also growing, driving broader market penetration.

The seamless integration of AI tools into hospital IT infrastructure is driving adoption. Radiology AI applications are now compatible with EHRs, PACS, and RIS, enabling smooth data flow and contextual analysis. This allows AI systems to consider patient history, lab results, and prior imaging during interpretation, thereby increasing diagnostic precision. Automation of report generation and structured data extraction from scans enhances communication between departments and reduces administrative workloads. As healthcare institutions prioritize interoperability and digital transformation, AI tools that fit within existing ecosystems are being widely embraced, contributing to sustained market growth.

The rising incidence of chronic diseases such as cancer, cardiovascular disorders, and neurological conditions is increasing the demand for medical imaging. These diseases require continuous monitoring through modalities like MRI, CT, and ultrasound, which generate large volumes of data. AI helps extract meaningful insights quickly from this data, facilitating timely interventions and longitudinal tracking. For example, AI can compare current and historical scans to detect subtle changes, supporting disease progression analysis. The growing prevalence of these conditions is pushing both private and public healthcare sectors to adopt AI tools that can handle high-frequency imaging needs efficiently.

Claim Yours Now! https://reports.valuates.com/api/directpaytoken?rcode=QYRE-Auto-35Z13861&lic=single-user

AI IN RADIOLOGY MARKET SHARE:

Regionally, North America leads the market due to its advanced healthcare systems, early adoption of AI technologies, and strong presence of leading AI radiology vendors. The U.S. benefits from robust funding, regulatory clarity, and high imaging volumes that support AI deployment.

The Asia-Pacific region is emerging as a key growth hub due to increasing healthcare investments in China, India, and Japan. Additionally, governments in the Middle East and Africa are exploring AI-based solutions to overcome radiologist shortages, gradually contributing to market diversification.

Key Companies:

  • GE
  • IBM
  • Google Inc.
  • Philips
  • Amazon
  • Siemens AG
  • NVIDIA Corporation
  • Intel
  • Bayer (Blackford Analysis)
  • Fujifilm
  • Aidoc
  • Arterys
  • Lunit
  • ContextVision
  • deepcOS
  • Volpara Health Technologies Ltd
  • CureMetrix
  • Densitas
  • QView Medical
  • iCAD

Purchase Regional Report: https://reports.valuates.com/request/regional/QYRE-Auto-35Z13861/Global_Artificial_Intelligence_in_Radiology_Market

SUBSCRIPTION

We have introduced a tailor-made subscription for our customers. Please leave a note in the comment section to learn about our subscription plans.

DISCOVER MORE INSIGHTS: EXPLORE SIMILAR REPORTS!

–  AI Radiology Software Market

–  The Radiology AI Based Diagnostic Tools Market was valued at USD 2800 Million in the year 2024 and is projected to reach a revised size of USD 11200 Million by 2031, growing at a CAGR of 21.9% during the forecast period.

–  Artificial Intelligence in Medical Device Market

–  AI-Enabled X-Ray Imaging Solutions Market was valued at USD 423 Million in the year 2024 and is projected to reach a revised size of USD 600 Million by 2031, growing at a CAGR of 5.2% during the forecast period.

–  AI-Assisted Digital Radiography Equipment Market

–  Medical Imaging AI Platform Market was valued at USD 2334 Million in the year 2024 and is projected to reach a revised size of USD 4236 Million by 2031, growing at a CAGR of 9.0% during the forecast period.

–  AI-based Medical Diagnostic Tools Market

–  AI Radiology Tool Market

–  Visual Artificial Intelligence Market was valued at USD 13110 Million in the year 2024 and is projected to reach a revised size of USD 26140 Million by 2031, growing at a CAGR of 10.5% during the forecast period.

–  The global Radiology Software market is projected to grow from USD 150 Million in 2024 to USD 223.9 Million by 2030, at a Compound Annual Growth Rate (CAGR) of 6.9% during the forecast period.

–  The Teleradiology Solutions Market was valued at USD 7634 Million in the year 2024 and is projected to reach a revised size of USD 11190 Million by 2031, growing at a CAGR of 5.7% during the forecast period.

DISCOVER OUR VISION: VISIT ABOUT US!

Valuates offers in-depth market insights into various industries. Our extensive report repository is constantly updated to meet your changing industry analysis needs.

Our team of market analysts can help you select the best report covering your industry. We understand your niche, region-specific requirements, and that’s why we offer customization of reports. With our customization in place, you can request any particular information from a report that meets your market analysis needs.

To achieve a consistent view of the market, data is gathered from various primary and secondary sources; at each step, data triangulation methodologies are applied to reduce deviance and arrive at a consistent view of the market. Each sample we share contains the detailed research methodology employed to generate the report. Please also reach out to our sales team for the complete list of our data sources.

GET A FREE QUOTE

Valuates Reports
sales@valuates.com
For U.S. Toll-Free Call 1-(315)-215-3225
WhatsApp: +91-9945648335

Website: https://reports.valuates.com

Blog: https://valuatestrends.blogspot.com/ 

Pinterest: https://in.pinterest.com/valuatesreports/ 

Twitter: https://twitter.com/valuatesreports 

Facebook: https://www.facebook.com/valuatesreports/ 

YouTube: https://www.youtube.com/@valuatesreports6753 

https://www.facebook.com/valuateskorean 

https://www.facebook.com/valuatesspanish 

https://www.facebook.com/valuatesjapanese 

https://valuatesreportspanish.blogspot.com/ 

https://valuateskorean.blogspot.com/ 

https://valuatesgerman.blogspot.com/ 

Market Analysis

Logo: https://mma.prnewswire.com/media/1082232/Valuates_Reports_Logo.jpg








Are AI existential risks real—and what should we do about them?



In March 2023, the Future of Life Institute issued an open letter asking artificial intelligence (AI) labs to “pause giant AI experiments.” The animating concern was: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Two months later, hundreds of prominent people signed onto a one-sentence statement on AI risk asserting that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

This concern about existential risk (“x-risk”) from highly capable AI systems is not new. In 2014, famed physicist Stephen Hawking, alongside leading AI researchers Max Tegmark and Stuart Russell, warned about superintelligent AI systems “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” 

Policymakers are inclined to dismiss these concerns as overblown and speculative. Despite the attention paid to AI safety at international AI conferences in 2023 and 2024, policymakers moved away from existential risks at this year’s AI Action Summit in Paris. For the time being—and in the face of increasingly limited resources—this is all to the good. Policymakers and AI researchers should devote the bulk of their time and energy to addressing more urgent AI risks.

But it is crucial for policymakers to understand the nature of the existential threat and recognize that as we move toward generally intelligent AI systems—ones that match or surpass human intelligence—developing measures to protect human safety will become necessary. While not the pressing problem alarmists think it is, the challenges of existential risk from highly capable AI systems must eventually be faced and mitigated if AI labs want to develop generally intelligent systems and, eventually, superintelligent ones.  


How close are we to developing AI models with general intelligence? 

AI firms are not very close to developing an AI system with capabilities that could threaten us. This assertion runs against a consensus in the AI industry that we are just years away from developing powerful, transformative systems capable of a wide variety of cognitive tasks. In a recent article, New Yorker staff writer Joshua Rothman sums up this industry consensus that scaling will produce artificial general intelligence (AGI) “by 2030, or sooner.” 

The standard argument prevalent in industry circles was laid out clearly in a June 2024 essay by AI researcher Leopold Aschenbrenner. He argues that AI capabilities increase with scale—the size of training data, the number of parameters in the model, and the amount of compute used to train models. He also draws attention to increasing algorithmic efficiency. Finally, he notes that increased capacities can be “unhobbled” through various techniques such as chain-of-thought reasoning, reinforcement learning from human feedback, and inserting AI models into larger useful systems.

Part of the reason for this confidence is that AI improvements seemed to exhibit exponential growth over the last few years. This past growth suggests that transformational capabilities could emerge unexpectedly and quite suddenly. This is in line with some well-known examples of the surprising effects of exponential growth. In “The Age of Spiritual Machines,” futurist Ray Kurzweil tells the story of doubling the number of grains of rice on successive chessboard squares, starting with one grain. After 63 doublings, the last square alone holds more than nine quintillion grains, and the board as a whole more than 18 quintillion. The hypothetical example of filling Lake Michigan by doubling (every 18 months) the number of ounces of water added to the lakebed makes the same point. After 60 years there’s almost nothing, but by 80 years there’s 40 feet of water. In five more years, the lake is filled.
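The chessboard arithmetic is easy to verify directly; a minimal sketch:

```python
# Rice on the chessboard: one grain on the first square, doubled 63 times.
last_square = 2 ** 63        # grains on square 64
all_squares = 2 ** 64 - 1    # total across the board: 1 + 2 + 4 + ... + 2**63
print(f"{last_square:,}")    # 9,223,372,036,854,775,808 (~9.2 quintillion)
print(f"{all_squares:,}")    # 18,446,744,073,709,551,615 (~18.4 quintillion)
```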

These examples suggest to many that exponential quantitative growth in AI achievements can create imperceptible change that suddenly blossoms into transformative qualitative improvement in AI capabilities.  

But these analogies are misleading. Exponential growth in a finite system cannot go on forever, and there is no guarantee that it will continue in AI development even into the near future. One of the key developments from 2024 is the apparent recognition by industry that training time scaling has hit a wall and that further increases in data, parameters, and compute time produce diminishing returns in capability improvements. The industry apparently hopes that exponential growth in capabilities will emerge from increases in inference time compute. But so far, those improvements have been smaller than earlier gains and limited to science, math, logic, and coding—areas where reinforcement learning can produce improvements since the answers are clear and knowable in advance.  

Today’s large language models (LLMs) show no signs of the exponential improvements characteristic of 2022 and 2023. OpenAI’s GPT-5 project ran into performance troubles and had to be downgraded to GPT-4.5, representing only a “modest” improvement when it was released earlier this year. It made up answers about 37% of the time, which is an improvement over the company’s faster, less expensive GPT-4o model, released last year, which hallucinated nearly 60% of the time. But OpenAI’s latest reasoning systems hallucinate at a higher rate than the company’s previous systems.  

Many in the AI research community think AGI will not emerge from the currently dominant machine learning approach that relies on predicting the next word in a sentence. In a report issued in March 2025, the Association for the Advancement of Artificial Intelligence (AAAI), a professional association of AI researchers established in 1979, reported that 76% of the 475 AI researchers surveyed thought that “scaling up current AI approaches” would be “unlikely” or “very unlikely” to produce general intelligence.  

These doubts about whether current machine learning paradigms are sufficient to reach general intelligence rest on widely understood limitations in current AI models that the report outlines. These limitations include difficulties in long-term planning and reasoning, generalization beyond training data, continual learning, memory and recall, causal and counterfactual reasoning, and embodiment and real-world interaction.  

These researchers think that the current machine learning paradigm has to be supplemented with other approaches. Some AI researchers such as cognitive scientist Gary Marcus think a return to symbolic reasoning systems will be needed, a view that AAAI also suggests.  

Others think the roadblock is the focus on language. In a 2023 paper, computer scientist Jacob Browning and Meta’s Chief AI Scientist Yann LeCun reject the linguistic approach to general intelligence. They argue, “A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.” They recommend approaching general intelligence through machine interaction directly with the environment—“to focus on the world being talked about, not the words themselves.”  

Philosopher Shannon Vallor also rejects the linguistic approach, arguing that general intelligence presupposes sentience and that the internal structures of LLMs contain no mechanisms capable of supporting experiences, as opposed to elaborate calculations that mimic human linguistic behavior. Conscious entities at the human level, she points out, desire, suffer, love, grieve, hope, care, and doubt. But there is nothing in LLMs designed to register these experiences or others like them, such as pain or pleasure or “what it is like” to taste something or remember a deceased loved one. They are lacking even at the simplest level of physical sensations. They have, for instance, no pain receptors to generate the feeling of pain. Being able to talk fluently about pain is not the same as having the capacity to feel pain. The fact that pain can occasionally be experienced in humans without the triggering of pain receptors, in cases like phantom limbs, in no way supports the idea that a system with no pain receptors at all could nevertheless experience real excruciating pain. All LLMs can do is talk about experiences that they are quite plainly incapable of feeling for themselves.

In a forthcoming book chapter, DeepMind researcher David Silver and Turing Award winner Richard S. Sutton endorse this focus on real-world experience as the way forward. They argue that AI researchers will make significant progress toward developing a generally intelligent agent only with “data that is generated by the agent interacting with its environment.” The generation of these real-world “experiential” datasets that can be used for AI training is just beginning. 

A recent paper from Apple researchers suggests that today’s “reasoning” models do not really reason and that both reasoning and traditional generative AI models collapse completely when confronted with complicated versions of puzzles like the Tower of Hanoi.

LeCun probably has the best summary of the prospects for the development of general intelligence. In 2024, he remarked that it “is not going to be an event… It is going to take years, maybe decades… The history of AI is this obsession of people being overly optimistic and then realising that what they were trying to do was more difficult than they thought.”


From general intelligence to superintelligence

Philosopher Nick Bostrom defines superintelligence as a computer system “that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” Once AI developers have improved the capabilities of AI models so that it makes sense to call them generally intelligent, how do developers make these systems more capable than humans? 

The key step is to instruct generally intelligent models to improve themselves. Once so instructed, AI models would use their superior learning capabilities to improve themselves much faster than humans can, soon far surpassing human capacities through a process of recursive self-improvement.

AI 2027, a recent forecast that has received much attention in the AI community and beyond, relies crucially on this idea of recursive self-improvement. Its key premise is that by the end of 2025, AI agents have become “good at many things but great at helping with AI research.” Once involved in AI research, AI systems recursively improve themselves at an ever-increasing pace and are soon far more capable than humans are.  

Computer scientist I.J. Good noticed this possibility back in 1965, saying of an “ultraintelligent machine” that it “could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” In 1993, computer scientist and science fiction writer Vernor Vinge described this possibility as a coming “technological singularity” and predicted that “Within thirty years, we will have the technological means to create superhuman intelligence.”
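A toy simulation makes the “explosion” intuition concrete. In the sketch below, every rate and the growth exponent are illustrative assumptions, not claims about real systems: capability that improves in proportion to its current level grows exponentially, while improvement that compounds super-linearly runs away in finite time.

```python
# Toy model of recursive self-improvement: capability c grows at a rate
# that depends on the current level, dc/dt = k * c**alpha.
# alpha = 1.0 gives ordinary exponential growth; alpha > 1 diverges in
# finite time -- the mathematical shape of Good's "intelligence explosion".
def simulate(alpha, k=0.2, c=1.0, dt=0.01, t_max=30.0, cap=1e9):
    t = 0.0
    while t < t_max and c < cap:
        c += k * (c ** alpha) * dt   # Euler step
        t += dt
    return t, c

for alpha in (1.0, 1.5):
    t, c = simulate(alpha)
    print(f"alpha={alpha}: capability {c:.3g} at t={t:.2f}")
# alpha=1.0 reaches only about e**6 (~400) by t=30; alpha=1.5 blows
# past the cap shortly after t=10, however high the cap is set.
```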


What’s the problem with a superintelligent AI model? 

Generally intelligent AI models, then, might quickly become superintelligent. Why would this be a problem rather than a welcome development?  

AI models, even superintelligent ones, do not do anything unless they are told to by humans. They are tools, not autonomous beings with their own goals and purposes. Developers must build purposes and goals into them to make them function at all, and this can make it seem to users as if they have generated these purposes all by themselves. But this is an illusion. They will do what human developers and deployers tell them to do.  

So, it would seem that creating superintelligent tools that could do our bidding is all upside and without risk. When AI systems become far more capable than humans are, they will be even better at performing tasks that allow humans to flourish. 

But this benign perspective ignores a major unsolved problem in AI research—the alignment problem. Developers have to be very careful what tasks they give to a generally intelligent or superintelligent system, even if it lacks genuine free will and autonomy. If developers specify the tasks in the wrong way, things could go seriously wrong. 

Developers of narrow AI systems are already struggling with the problems of task misspecification and unwanted subgoals. When they ask a narrow system to do something, they sometimes describe the task in a way that lets the AI system do what it was told to do but not what the developers actually wanted. The example of using reinforcement learning to teach an agent to compete in a computer-based race makes the point. If the developers train the agent to accumulate as many game points as possible, they might think they have programmed the system to win the race, which is the apparent objective of the game. It turns out the agent learned instead to accumulate points without winning the race, going in circles rather than rushing to the end as fast as possible.
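The failure mode is easy to reproduce in miniature. In the hedged sketch below, the point values, rewards, and policies are hypothetical, chosen only to illustrate task misspecification rather than to reproduce the actual racing-game experiment; the proxy reward the developers wrote down ranks a looping policy above one that finishes the race.

```python
# Task misspecification in miniature: the designers want the race won,
# but the reward they specified only counts game points.
POINTS_PER_PICKUP = 10   # hypothetical scoring values
FINISH_BONUS = 500

def proxy_reward(pickups, finished):
    # What the agent is actually trained on: points, nothing else.
    return pickups * POINTS_PER_PICKUP

def intended_reward(pickups, finished):
    # What the designers meant: finishing dominates any point total.
    return pickups * POINTS_PER_PICKUP + (FINISH_BONUS if finished else 0)

outcomes = {
    "circle the point targets": {"pickups": 30, "finished": False},
    "race straight to the end": {"pickups": 4, "finished": True},
}
for policy, o in outcomes.items():
    print(f"{policy}: proxy={proxy_reward(**o)}, intended={intended_reward(**o)}")
# The proxy reward prefers circling (300 > 40), while the intended reward
# prefers finishing (540 > 300) -- an optimizer given the proxy goes in circles.
```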

Another example illustrates that AI models can use strategic deception to achieve a goal in ways that researchers did not anticipate. Researchers instructed GPT-4 to log onto a system protected by a CAPTCHA test by hiring a human to solve it, without giving it any guidance on how to do this. The AI model accomplished the task by pretending to be a human with a vision impairment and tricking a TaskRabbit worker into completing the test for it. The researchers did not want the model to lie, but it learned to do so in order to complete the task it was assigned.

Anthropic’s recent system card for its Sonnet 4 and Opus 4 AI models reveals further misalignment issues: in test scenarios, the model sometimes threatened to reveal a researcher’s extramarital affair if he shut down the system before it had completed its assigned tasks.

Because these are narrow systems, dangerous outcomes are limited to particular domains if developers fail to resolve alignment problems. Even when the consequences are dire, they are limited in scope.  

The situation is vastly different for generally intelligent and superintelligent systems. This is the point of the well-known paper clip problem described in philosopher Nick Bostrom’s 2014 book, “Superintelligence.” Suppose the goal given to a superintelligent AI model is to produce paper clips. What could go wrong? The result, as described by Professor Joshua Gans, is that the model will appropriate resources from all other activities, and soon the world will be inundated with paper clips. But it gets worse. People would want to stop this AI, but it is single-minded and would realize that being stopped would subvert its goal. Consequently, the AI would become focused on its own survival. It starts off competing with humans for resources, but eventually it will fight humans because they are a threat. This AI is much smarter than humans, so it is likely to win that battle.

Yoshua Bengio echoes this crucial concern about dangerous subgoals. Once developers set goals and rewards, a generally intelligent system would “figure out how to achieve these given goals and rewards, which amounts to forming its own subgoals.” The “ability to understand and control its environment” is one such dangerous instrumental goal, while the subgoal of survival creates “the most dangerous scenario.”

Until some progress is made in addressing misalignment problems, developing generally intelligent or superintelligent systems seems to be extremely risky. The good news is that the potential for developing general intelligence and superintelligence in AI models seems remote. While the possibility of recursive self-improvement leading to superintelligence reflects the hope of many frontier AI companies, there is not a shred of evidence that today’s glitchy AI agents are close to conducting AI research even at the level of a normal human technician. This means there is still plenty of time to address the problem of aligning superintelligence with values that make it safe for humans. 

It is not today’s most urgent AI research priority. As AI researcher Andrew Ng is reputed to have said back in 2015, worrying about existential risk might appear to be like worrying about the problem of human overpopulation on Mars.

Nevertheless, the general problem of AI model misalignment is real and the object of important research that can and should continue. This more mundane work of seeking to mitigate today’s risks of model misalignment might provide valuable clues to dealing with the more distant existential risks that could arise someday in the future as researchers continue down the path of developing highly capable AI systems with the potential to surpass current human limitations.   

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).






The forgotten 80-year-old machine that shaped the internet – and could help us survive AI



Many years ago, long before the internet or artificial intelligence, an American engineer called Vannevar Bush was trying to solve a problem. He could see how difficult it had become for professionals to research anything, and saw the potential for a better way.

This was in the 1940s, when anyone looking for articles, books or other scientific records had to go to a library and search through an index. This meant drawers upon drawers filled with index cards, typically sorted by author, title or subject.

When you had found what you were looking for, creating copies or excerpts was a tedious, manual task. You would have to be very organised in keeping your own records. And woe betide anyone who was working across more than one discipline. Since every book could physically only be in one place, they all had to be filed solely under a primary subject. So an article on cave art couldn’t be in both art and archaeology, and researchers would often waste extra time trying to find the right location.




This had always been a challenge, but an explosion in research publications in that era had made it far worse than before. As Bush wrote in an influential essay, As We May Think, in The Atlantic in July 1945:

There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialisation extends. The investigator is staggered by the findings and conclusions of thousands of other workers – conclusions which he cannot find time to grasp, much less to remember, as they appear.

Bush was dean of the school of engineering at MIT (the Massachusetts Institute of Technology) and president of the Carnegie Institution. During the second world war, he had been the director of the Office of Scientific Research and Development, coordinating the activities of some 6,000 scientists working relentlessly to give their country a technological advantage. He could see that science was being drastically slowed down by the research process, and proposed a solution that he called the “memex”.

The memex was to be a personal device built into a desk that required little physical space. It would rely heavily on microfilm for data storage, a new technology at the time. The memex would use this to store large numbers of documents in a greatly compressed format that could be projected onto translucent screens.

Most importantly, Bush’s memex was to include a form of associative indexing for tying two items together. The user would be able to use a keyboard to click on a code number alongside a document to jump to an associated document or view them simultaneously – without needing to sift through an index.
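In modern terms, the memex’s associative index is a graph of documents threaded onto named trails, with the same document free to appear on several trails at once. Here is a minimal Python sketch of the idea (the data structure and example documents are illustrative, not a reconstruction of Bush’s microfilm mechanism):

```python
# Memex-style associative trails: named, ordered chains of documents,
# with any document free to appear on multiple trails at once.
from collections import defaultdict

trails = defaultdict(list)   # trail name -> ordered list of documents

def add_to_trail(trail, doc):
    trails[trail].append(doc)

# The cave-art article from the library example, filed under two
# subjects at once -- impossible for one physical copy on one shelf.
add_to_trail("art", "An article on cave art")
add_to_trail("archaeology", "An article on cave art")
add_to_trail("archaeology", "Field notes on excavation methods")

def follow_link(trail, doc):
    """Jump to the next associated document on a trail, as Bush's
    code-number click-through was meant to do."""
    docs = trails[trail]
    i = docs.index(doc)
    return docs[i + 1] if i + 1 < len(docs) else None

print(follow_link("archaeology", "An article on cave art"))
# -> "Field notes on excavation methods"
```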

Bush acknowledged in his essay that this kind of keyboard click-through wasn’t yet technologically feasible. Yet he believed it would be soon, pointing to existing systems for handling data such as punched cards as potential forerunners.

[Image: a woman operating a punched card machine. Punched cards were an early way of storing digital information. Wikimedia, CC BY-SA]

He envisaged that a user would create the connections between items as they developed their personal research library, creating chains of microfilm frames in which the same document or extract could be part of multiple trails at the same time.

New additions could be inserted either by photographing them on to microfilm or by purchasing a microfilm of an existing document. Indeed, a user would be able to augment their memex with vast reference texts. “New forms of encyclopedias will appear,” said Bush, “ready-made with a mesh of associative trails running through them, ready to be dropped into the memex”. Fascinatingly, this isn’t far from today’s Wikipedia.

Where it led

Bush thought the memex would help researchers to think in a more natural, associative way that would be reflected in their records. He is thought to have inspired the American inventors Ted Nelson and Douglas Engelbart, who in the 1960s independently developed hypertext systems, in which documents contained hyperlinks that could directly access other documents. These became the foundation of the world wide web as we know it.

Beyond the practicalities of having easy access to so much information, Bush believed that the added value in the memex lay in making it easier for users to manipulate ideas and spark new ones. His essay drew a distinction between repetitive and creative thought, and foresaw that there would soon be new “powerful mechanical aids” to help with the repetitive variety.

He was perhaps mostly thinking about mathematics, but he left the door open to other thought processes. And 80 years later, with AI in our pockets, we’re automating far more thinking than was ever possible with a calculator.

If this sounds like a happy ending, Bush did not sound overly optimistic when he revisited his own vision in his 1970 book Pieces of the Action. In the intervening 25 years, he had witnessed technological advances in areas like computing that were bringing the memex closer to reality.

Yet Bush felt that the technology had largely missed the philosophical intent of his vision – to enhance human reasoning and creativity:

In 1945, I dreamed of machines that would think with us. Now, I see machines that think for us – or worse, control us.

Bush would die just four years later at the age of 84, but these concerns still feel strikingly relevant today. While it’s great that we do not need to search for a book by flipping through index cards in chests of drawers, we might feel more uneasy about machines doing most of the thinking for us.

[Image: a phone screen with AI apps. Just 80 years after Bush proposed the memex, AIs on smartphones are an everyday thing. jackpress]

Is this technology enhancing and sharpening our skills, or is it making us lazy? No doubt everyone is different, but the danger is that whatever skills we leave to the machines, we eventually lose, and younger generations may not even get the opportunity to learn them in the first place.

The lesson from As We May Think is that a purely technical solution like the memex is not enough. Technology still needs to be human-centred, underpinned by a philosophical vision. As we contemplate a great automation in human thinking in the years ahead, the challenge is to somehow protect our creativity and reasoning at the same time.




