AI Research

How LLMs Are Forcing Us to Redefine Intelligence

There is an old saying: If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. This simple way of reasoning, often linked to the Indiana poet James Whitcomb Riley, has shaped how we think about artificial intelligence for decades. The idea that behavior is enough to identify intelligence inspired Alan Turing’s famous “Imitation Game,” now called the Turing Test.

Turing suggested that if a human cannot tell whether they are conversing with a machine or another human, then the machine can be said to be intelligent. Both the duck test and the Turing test suggest that what matters is not what lies inside a system, but how it behaves. For decades, this test guided advances in AI. But with the arrival of large language models (LLMs), the situation has changed. These systems can write fluent text, hold conversations, and solve tasks in ways that feel remarkably human. The question is no longer whether machines can mimic human conversation, but whether this imitation is true intelligence. If a system can write like us, reason like us, and even create like us, should we call it intelligent? Or is behavior alone no longer enough to measure intelligence?

The Evolution of Machine Intelligence

Large language models have changed how we think about AI. These systems, once limited to generating basic text responses, can now solve logic problems, write computer code, draft stories, and even assist with creative tasks like screenwriting. One key development behind this progress is their ability to solve complex problems through step-by-step reasoning, a method known as chain-of-thought reasoning. By breaking a problem down into smaller parts, an LLM can solve complex math problems or logical puzzles in a way that looks similar to human problem-solving. This capability has enabled them to match or even surpass human performance on advanced benchmarks like MATH and GSM8K. Today, LLMs also possess multimodal capabilities: they can work with images, interpret medical scans, explain visual puzzles, and describe complex diagrams. With these advances, the question is no longer whether LLMs can mimic human behavior, but whether this behavior reflects genuine understanding.
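The difference chain-of-thought prompting makes can be sketched in a few lines. The prompts below are invented for illustration and no model is actually called; the point is only that the chain-of-thought version asks the model to externalize intermediate steps rather than jump straight to an answer:

```python
# A direct prompt asks for the answer outright; a chain-of-thought
# prompt elicits the intermediate reasoning steps first.
question = "A train travels 60 km in 45 minutes. What is its speed in km/h?"

direct_prompt = f"Q: {question}\nA: The answer is"

cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step.\n"
    "Step 1: Convert 45 minutes to hours: 45 / 60 = 0.75 h.\n"
    "Step 2: Divide distance by time: 60 km / 0.75 h = 80 km/h.\n"
    "So the answer is 80 km/h."
)

# The arithmetic that the worked steps encode:
speed_kmh = 60 / (45 / 60)  # 80.0
```

Each step is small enough for the model to get right on its own, which is why decomposing the problem often succeeds where a single-shot answer fails.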

Traces of Human-Like Thinking

This success of LLMs is redefining the way we understand intelligence. The focus is shifting from aligning AI behavior with human behavior, as the Turing test suggests, to exploring how closely LLMs mirror human thinking in the way they process information (i.e., true human-like thinking). For example, in a recent study, researchers compared the internal workings of AI models with human brain activity. The study found that LLMs with over 70 billion parameters not only achieved human-level accuracy but also organized information internally in ways that matched human brain patterns.

When both humans and AI models worked on pattern recognition tasks, brain scans showed similar activity patterns in the human participants and corresponding computational patterns in the AI models. The models clustered abstract concepts in their internal layers in ways that directly matched with human brain wave activity. This suggests that successful reasoning might require similar organizational structures, whether in biological or artificial systems.

However, researchers are careful to note the limitations of this work. The study involved a relatively small number of human participants, and humans and machines approached the tasks differently. Humans worked with visual patterns while the AI models processed text descriptions. The correlation between human and machine processing is intriguing, but it does not prove that machines understand concepts the same way humans do.

There are also clear differences in performance. While the best AI models approached human-level accuracy on simple patterns, they showed more dramatic performance drops on the most complex tasks compared to human participants. This suggests that despite similarities in organization, there may still be fundamental differences in how humans and machines process difficult abstract concepts.

The Skeptical Perspective

Despite these impressive findings, a strong counterargument holds that LLMs are nothing more than very skilled mimics. This view comes from philosopher John Searle’s “Chinese Room” thought experiment, which illustrates why behavior may not equal understanding.

In this thought experiment, Searle asks us to imagine a person locked in a room who speaks only English. The person receives Chinese symbols and uses an English rulebook to manipulate these symbols and produce responses. From outside the room, the responses look exactly like those of a native Chinese speaker. However, Searle argues that the person understands nothing about Chinese; he simply follows rules without any real understanding.

Critics apply this same logic to LLMs. They argue these systems are “stochastic parrots” that generate responses based on statistical patterns in their training data, not genuine understanding. The term “stochastic” refers to their probabilistic nature, while “parrot” emphasizes their imitative behavior without real understanding.
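The “stochastic” half of the phrase can be made concrete with a toy sketch. The token probabilities below are invented for illustration; a real model derives them from its training data, but the generation step has the same shape: sample the next word in proportion to its probability, with no check against truth.

```python
import random

# Toy next-token distribution for "The capital of France is ..."
# (probabilities invented for illustration).
next_token_probs = {"Paris": 0.72, "London": 0.15, "Lyon": 0.08, "banana": 0.05}

def sample_next_token(probs: dict, rng: random.Random) -> str:
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]
# Mostly "Paris", but occasionally a fluent-looking wrong answer; that
# wrong-but-plausible pick is a hallucination in miniature.
```

Nothing in this loop consults a fact store: a wrong answer is not a lookup failure but an unlucky (yet perfectly valid) draw from the distribution.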

Several technical limitations of LLMs also support this argument. LLMs frequently generate “hallucinations”: responses that look plausible but are incorrect, misleading, or nonsensical. This happens because they select statistically plausible words rather than consulting an internal knowledge base or distinguishing truth from falsehood. These models also reproduce human-like errors and biases. They get confused by irrelevant information that humans would easily ignore, and they exhibit racial and gender stereotypes because they learned from data containing these biases. Another revealing limitation is “position bias,” where models overemphasize information at the beginning or end of long documents while neglecting the middle. This “lost-in-the-middle” phenomenon suggests that these systems process information very differently from humans, who can maintain attention across an entire document.
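The lost-in-the-middle effect is typically measured by burying a key fact at different depths in a long context and checking whether the model retrieves it. The sketch below shows the shape of such a probe; `ask_model` is a hypothetical stub that caricatures a position-biased model, where a real experiment would call an actual LLM:

```python
FILLER = "This sentence is irrelevant padding."
NEEDLE = "The secret code is 4721."

def build_context(depth: float, n_sentences: int = 100) -> str:
    """Place NEEDLE at a relative depth (0.0 = start, 1.0 = end)."""
    sentences = [FILLER] * n_sentences
    sentences.insert(int(depth * n_sentences), NEEDLE)
    return " ".join(sentences)

def ask_model(context: str, question: str) -> str:
    # Stub mimicking position bias: it only "attends" to the first and
    # last fifth of the context. A real probe would send the full
    # context and question to an LLM and score its answer.
    n = len(context)
    visible = context[: n // 5] + context[-(n // 5):]
    return "4721" if "4721" in visible else "unknown"

results = {d: ask_model(build_context(d), "What is the secret code?")
           for d in (0.0, 0.5, 1.0)}
# A position-biased model recovers the fact at the edges but not the middle.
```

Sweeping `depth` across many values and plotting accuracy produces the characteristic U-shaped curve that gives the phenomenon its name.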

These limitations highlight a central challenge: while LLMs excel at recognizing and reproducing language patterns, this does not mean they truly understand meaning or real-world context. They perform well at handling syntax but remain limited when it comes to semantics.

What Counts as Intelligence?

The debate ultimately comes down to how we define intelligence. If intelligence is the capacity to generate coherent language, solve problems, and adapt to new situations, then LLMs already meet that standard. However, if intelligence requires self-awareness, genuine understanding, or subjective experience, these systems still fall short.

The difficulty is that we lack a clear or objective way to measure qualities like understanding or consciousness. In both humans and machines, we infer them from behavior. The duck test and the Turing Test once provided elegant answers, but in the age of LLMs, they may no longer suffice. Their capabilities force us to reconsider what truly counts as intelligence and whether our traditional definitions are keeping pace with technological reality.

The Bottom Line

Large language models challenge how we define AI intelligence. They can mimic reasoning, generate ideas, and perform tasks once seen as uniquely human. Yet they lack the awareness and grounding that shape true human-like thinking. Their rise forces us to ask not only whether machines act intelligently, but what intelligence itself really means.


From pilot to profitability: How to approach enterprise AI adoption

From central authority to shared ownership

In conversations with other IT leaders, I’ve noticed a common pattern in how AI programs evolve. Most began with a centralized team — a logical first step to establish standards, consistency and a safe space for early experiments. But over time, it became clear that no central group could keep pace with every business request or understand each domain deeply enough to deliver the best solutions.

Many organizations have since shifted toward a hub-and-spoke model. The hub — often an AI center of excellence — takes responsibility for governance, education, best practices and the technically complex use cases. The spokes, led by product or functional teams, experiment with AI features embedded in the tools they use every day. Because they’re closer to the business, these teams can test, iterate and deliver solutions at speed.

When I look across industries, the majority of AI innovation is now happening at the edge, not the center. That’s largely because so much intelligence is already embedded into enterprise software. A CRM platform, for instance, might now offer AI-based lead scoring or predictive churn models — capabilities a team can enable and deploy with little to no involvement from the center of excellence.





Larry Ellison Institute gives Oxford £118 million for AI vaccine research

The Ellison Institute of Technology (EIT) is funding an Oxford vaccine research project that will tackle pathogenic diseases using AI.

Ellison, who recently overtook Elon Musk as the world’s richest man, is giving Oxford £118 million for the programme, which will be led by the Oxford Vaccine Group. 

Professor Sir Andrew Pollard, director of the group that led COVID-19 trials, described the programme as a “new frontier in vaccine science”. Scientists will use “human challenge models”, in which volunteers are safely exposed to bacteria under controlled conditions, alongside AI tools to identify immune responses that predict protection.

Oxford’s Vice-Chancellor, Irene Tracey, described the project as a “major step forward” in the strategic alliance with the Ellison Institute. She explained that her vision is to draw “more talent and capacity to the Oxford ecosystem to turn scientific challenges into real solutions for the world”.

EIT is designed to host 7,000 scientists and will include an oncology clinic, an auditorium, laboratories, a library, classrooms, and park space. Oxford University, by comparison, has 5,000 research staff.

The Institute has already faced leadership turbulence, with the President, John Bell, resigning days before the vaccine project was announced. Bell was pictured signing the contracts with Irene Tracey when the “strategic alliance” was first announced in December 2024. Bell publicly endorsed Lord Hague in the Chancellor election last year.

The Wall Street Journal reported that Bell clashed with Ellison over operations and staffing, and that tensions flared over the mix of people being brought into the Institute, as well as Ellison’s decisions to fire senior staff without involving him.

Bell, who was Regius Professor of Medicine at Oxford until March last year, also serves as chair of Our Future Health, a government-funded project to genetically test millions of patients. He holds over £700,000 of shares in Roche, a pharmaceutical company on whose board he sat for 20 years, a holding that has drawn criticism from genomics-monitoring groups as a “conflict of interest”.

Despite these controversies surrounding Bell’s various roles, a University spokesperson told Cherwell: “We recognise his pivotal contribution in helping to establish the Institute and in attracting outstanding researchers to its mission.” 

Bell belonged to the Institute’s Faculty of Fellows alongside former Prime Minister, Tony Blair. Tony Blair’s own Institute for Global Change (TBI) is bankrolled by Ellison. As well as sharing their source of funding, the Ellison-funded institutes work in collaboration on an “AI for Governments” project.

Larry Ellison amassed his billions as boss of tech-giant Oracle, where he has made headlines for suggesting that Oracle would pioneer “AI mass surveillance”, as well as for his friendship with Israeli Prime Minister Benjamin Netanyahu, whom he offered a job at Oracle. Ellison donated to the Israeli military through Friends of the Israel Defense Forces, giving the organisation $16.6 million in 2017.

Ellison also reportedly has a close relationship with Trump, attending meetings in the Oval Office. Trump has questioned the effectiveness of COVID-19 vaccines, which were pioneered by the same Oxford Vaccine Group that is partnering with Ellison’s institute on this project.

Responding to Ellison’s ties to vaccine-sceptic politicians, as well as questions over the ownership of intellectual property (IP) stemming from the strategic alliance, the University spokesperson told Cherwell that it ensures any external partnerships “align with the University’s public mission, including by realising impact from our academic research”. 

Further details on the ownership or management of intellectual property arising from the programme have not been made public.



Drone Cybersecurity Research Report 2025-2034: AI-Powered

Dublin, Sept. 12, 2025 (GLOBE NEWSWIRE) — The “Drone Cybersecurity Market – A Global and Regional Analysis: Focus on Components, Drone Type, Application, and Regional Analysis – Analysis and Forecast, 2025-2034” report has been added to ResearchAndMarkets.com’s offering.

The drone cybersecurity market forms a critical segment of the broader UAV and cybersecurity ecosystem. Advances in sensor technology, encrypted communication, AI-driven analytics, and blockchain integration are reshaping how drones mitigate cyber risks. Drone cybersecurity solutions encompass software, hardware, and managed services that collectively safeguard UAV operations against GPS spoofing, signal jamming, data breaches, and unauthorized control.

The market benefits from substantial investments in research and development aimed at enhancing threat detection accuracy, minimizing latency, and securing over-the-air firmware updates. Regulatory frameworks, particularly in the U.S., Europe, and Asia-Pacific regions, are driving increased adoption of cybersecurity measures, compelling manufacturers and operators to comply with stringent standards. This regulatory emphasis fuels innovation in drone cybersecurity market offerings, including autonomous defense features and comprehensive incident response services.

Global Drone Cybersecurity Market Lifecycle Stage

Currently, the drone cybersecurity market is in a high-growth phase, propelled by accelerating UAV deployments in sectors such as agriculture, defense, infrastructure inspection, and logistics. Key technologies have matured to advanced readiness levels, supporting broad implementation. North America commands a significant market share due to substantial defense spending and proactive regulatory policies, while the Asia-Pacific region demonstrates rapid adoption driven by commercial applications and government initiatives.

Collaborative ventures between cybersecurity firms, drone manufacturers, and government agencies are essential to delivering integrated security solutions. Market dynamics are influenced by evolving cyber threat landscapes, emerging drone use cases, and advancements in AI and machine learning. The drone cybersecurity market is forecast to maintain strong momentum over the next decade, supported by continuous technological innovation and increased prioritization of UAV security in global drone operations.

Drone Cybersecurity Market Key Players and Competition Synopsis

The drone cybersecurity market exhibits a dynamic and competitive environment driven by leading technology firms and innovative cybersecurity solution providers specializing in unmanned aerial vehicle (UAV) security. Major global players such as Airbus Defence and Space, DroneShield, and Raytheon Technologies are pivotal in advancing drone cybersecurity technologies. These companies focus on developing sophisticated threat detection systems, secure communication protocols, anti-jamming hardware, and AI-powered anomaly detection tools tailored to protect drones from evolving cyber threats.

Alongside established leaders, emerging startups contribute innovative solutions addressing niche vulnerabilities and enabling real-time response capabilities. Competition within the drone cybersecurity market is intensified by strategic partnerships, continuous innovation, regulatory compliance demands, and increasing drone adoption across defense, commercial, and governmental sectors. As the drone cybersecurity market expands, players prioritize scalable, interoperable, and cost-effective security solutions that meet diverse operational requirements globally.

Demand Drivers and Limitations

The following are the demand drivers for the drone cybersecurity market:

  • Growing drone use in critical applications
  • Increasing sophistication of cyberattacks on UAVs
  • Strict regulatory cybersecurity requirements

The drone cybersecurity market is expected to face some limitations as well due to the following challenges:

  • High implementation costs
  • Technology outpacing security solutions

Some prominent names established in the drone cybersecurity market are:

  • Airbus Defence and Space
  • Palo Alto Networks
  • Airspace Systems
  • Boeing Defense, Space & Security
  • BAE Systems plc
  • DroneShield
  • DroneSec
  • Fortem Technologies
  • Raytheon Technologies
  • Israel Aerospace Industries Ltd. (IAI)
  • General Dynamics Corporation

Key Attributes:

  • No. of Pages: 140
  • Forecast Period: 2025-2034
  • Estimated Market Value (USD) in 2025: $2.91 Billion
  • Forecasted Market Value (USD) by 2034: $13.19 Billion
  • Compound Annual Growth Rate: 18.2%
  • Regions Covered: Global
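As a sanity check on the headline figures, the implied compound annual growth rate can be recomputed from the two market values (nine compounding periods from 2025 to 2034):

```python
# Recompute the CAGR implied by the report's 2025 and 2034 figures.
start_value = 2.91     # USD billions, estimated 2025
end_value = 13.19      # USD billions, forecast 2034
periods = 2034 - 2025  # 9 compounding periods

cagr = (end_value / start_value) ** (1 / periods) - 1
# cagr comes out around 0.183, consistent with the report's rounded 18.2%.
```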

Key Topics Covered:

1. Markets: Industry Outlook
1.1 Trends: Current and Future Impact Assessment
1.2 Market Dynamics Overview
1.2.1 Market Drivers
1.2.2 Market Restraints
1.2.3 Market Opportunities
1.3 Impact of Regulatory and Environmental Policies
1.4 Patent Analysis
1.4.1 By Year
1.4.2 By Region
1.5 Technology Trends and Innovations
1.6 Cyber Threats and Risk Assessment
1.7 Investment Landscape and R&D Trends
1.8 Value Chain Analysis
1.9 Industry Attractiveness

2. Global Drone Cybersecurity Market (by Components)
2.1 Software
2.2 Hardware
2.3 Services

3. Global Drone Cybersecurity Market (by Drone Type)
3.1 Fixed Wing
3.2 Rotary Wing
3.3 Hybrid

4. Global Drone Cybersecurity Market (by Application)
4.1 Manufacturing
4.2 Military and Defense
4.3 Agriculture
4.4 Logistics and Transportation
4.5 Surveillance and Monitoring
4.6 Others

5. Global Drone Cybersecurity Market (by Region)
5.1 Global Drone Cybersecurity Market (by Region)
5.2 North America
5.2.1 Regional Overview
5.2.2 Driving Factors for Market Growth
5.2.3 Factors Challenging the Market
5.2.4 Key Companies
5.2.5 Components
5.2.6 Drone Type
5.2.7 Application
5.2.8 North America (by Country)
5.2.8.1 U.S.
5.2.8.1.1 Market by Components
5.2.8.1.2 Market by Drone Type
5.2.8.1.3 Market by Application
5.2.8.2 Canada
5.2.8.2.1 Market by Components
5.2.8.2.2 Market by Drone Type
5.2.8.2.3 Market by Application
5.2.8.3 Mexico
5.2.8.3.1 Market by Components
5.2.8.3.2 Market by Drone Type
5.2.8.3.3 Market by Application
5.3 Europe
5.4 Asia-Pacific
5.5 Rest-of-the-World

6. Competitive Benchmarking & Company Profiles
6.1 Next Frontiers
6.2 Geographic Assessment
6.3 Company Profiles
6.3.1 Overview
6.3.2 Top Products/Product Portfolio
6.3.3 Top Competitors
6.3.4 Target Customers
6.3.5 Key Personnel
6.3.6 Analyst View
6.3.7 Market Share

For more information about this report visit https://www.researchandmarkets.com/r/mhm1qg

About ResearchAndMarkets.com
ResearchAndMarkets.com is the world’s leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.


