
Measuring Machine Intelligence Using Turing Test 2.0



In 1950, British mathematician Alan Turing (1912–1954) proposed a simple way to test artificial intelligence. His idea, known as the Turing Test, was to see if a computer could carry on a text-based conversation so well that a human judge could not reliably tell it apart from another human. If the computer could “fool” the judge, Turing argued, it should be considered intelligent.

For decades, Turing’s test shaped public understanding of AI. Yet as technology has advanced, many researchers have asked whether imitating human conversation really proves intelligence — or whether it only shows that machines can mimic certain human behaviors. Large language models like ChatGPT can already hold convincing conversations. But does that mean they understand what they are saying?

In a Mind Matters podcast interview, Dr. Georgios Mappouras tells host Robert J. Marks that the answer is no. In a recent paper, The General Intelligence Threshold, Mappouras introduces what he calls Turing Test 2.0. This updated approach sets a higher bar for intelligence than simply chatting like a human. It asks whether machines can go beyond imitation to produce new knowledge.

From information to knowledge

At the heart of Mappouras’s proposal is a distinction between non-functional and functional information:

  • Non-functional information is raw data or observations that don’t lead to new insights by themselves. One example would be noticing that an apple falls from a tree.
  • Functional information is knowledge that can be applied to achieve something new. When Isaac Newton connected the falling apple to the force of gravity, he transformed ordinary observation into scientific law.

True intelligence, Mappouras argues, is the ability to transform non-functional information into functional knowledge. This creative leap is what allows humans to build skyscrapers, develop medicine, and travel to the moon. A machine that merely rearranges words or retrieves facts cannot be said to have reached the same level.

The General Intelligence Threshold

Mappouras calls this standard the General Intelligence Threshold. His threshold sets a simple challenge: given existing knowledge and raw information, can the system generate new insights that were not directly programmed into it?

This threshold does not require constant displays of brilliance. Even one undeniable breakthrough — a “flash of genius” — would be enough to demonstrate that a machine possesses general intelligence. Just as a person may excel in math but not physics, a machine would only need to show creativity once to prove its potential.

Creativity and open problems

One way to apply the new test is through unsolved problems in mathematics. Breakthroughs such as Andrew Wiles’s proof of Fermat’s Last Theorem and Grigori Perelman’s solution of the Poincaré Conjecture marked milestones of human creativity. If AI could settle a famous open problem like the Riemann Hypothesis or the Collatz Conjecture, it would be strong evidence that the system had crossed the threshold into true intelligence.
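
For readers who want to see what such an open problem looks like, the Collatz Conjecture is easy to state but unproven: start from any positive integer, halve it if it is even, otherwise triple it and add one; the conjecture says every starting value eventually reaches 1. Here is a minimal Python sketch of that iteration (an illustration only, not something from Mappouras’s paper):

```python
def collatz_steps(n: int) -> int:
    """Count applications of the Collatz map until n reaches 1.

    The conjecture is that this loop terminates for every positive
    integer n. Verifying small cases is trivial; proving it for
    ALL n is the open problem.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print([collatz_steps(n) for n in range(1, 10)])
# -> [0, 1, 7, 2, 5, 8, 16, 3, 19]
```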

Large language models already solve equations and perform advanced calculations, but solving a centuries-old unsolved problem would show something far deeper: the ability to create knowledge that has never existed before.

Beyond symbol manipulation

Mappouras also draws on philosopher John Searle’s famous “Chinese Room” thought experiment. In the scenario, a person who does not understand Chinese sits in a room with a rulebook for manipulating Chinese characters. By following instructions, the person produces outputs that convince outsiders he understands the language, even though he does not.

This scenario, Searle argued, shows that a computer might appear intelligent without real understanding. Mappouras agrees but goes further. For him, real intelligence is proven not just by producing outputs, but by acting on new knowledge. If the instructions in the Chinese Room included a way to escape, the person could only succeed if he truly understood what the words meant. In the same way, AI must demonstrate it can act meaningfully on information, not just shuffle symbols.


Can AI pass the new test?

So far, Mappouras does not think modern AI has passed the General Intelligence Threshold. Systems like ChatGPT may look impressive, but their apparent creativity usually comes from patterns in the massive data sets on which they were trained. They have not shown the ability to produce new, independent knowledge disconnected from prior inputs.

That said, Mappouras emphasizes that success would not require constant novelty. One true act of creativity — an undeniable demonstration of new knowledge — would be enough. Until that happens, he remains cautious about claims that today’s AI is truly intelligent.

A shift in the debate

The debate over artificial intelligence is shifting. The original Turing Test asked whether machines could fool us into thinking they were human. Turing Test 2.0 asks a harder question: can they discover something new?

Mappouras believes this is the real measure of intelligence. Intelligence is not imitation — it is innovation. Whether machines will ever cross that line remains uncertain. But if they do, the world will not just be talking with computers. We will be learning from them.

Final thoughts: Today’s systems, tomorrow’s threshold

Models like ChatGPT and Grok are remarkable at conversation, summarization, and problem-solving within known domains, but their strengths still reflect pattern learning from vast training data. By Mappouras’s standard, they will cross the General Intelligence Threshold only when they produce a verifiable breakthrough — an insight not traceable to prior text or human scaffolding, such as an original solution to a major open problem. Until then, they remain powerful imitators and accelerators of human work — impressive, useful, and transformative, but not yet creators of genuinely new knowledge.

Additional Resources

Podcast Transcript Download




Penn State Altoona professor to launch ‘Metabytes: AI + Humanities Lunch Lab’



ALTOONA, Pa. — John Eicher, associate professor of history at Penn State Altoona, will launch the “Metabytes: AI + Humanities Lunch Lab” series on Tuesday, Oct. 7, from noon to 1 p.m. in room 102D of the Smith Building.

As artificial intelligence (AI) systems continue to advance, students need the tools to engage with them not only technically, but also intelligently, ethically and creatively. The AI + Humanities Lab will serve as a cross-disciplinary space where humanistic inquiry meets cutting-edge technology, helping students ask the deeper questions that surround this emerging force. By blending hands-on experimentation with philosophical and ethical reflection, the lab aims to give students a critical edge: the ability to see AI not just as a tool, but as a cultural and intellectual phenomenon that requires serious and sober engagement.

Each session will begin with a text, image or prompt shared with an AI model. Participants will then interpret and discuss the responses as philosophical or creative expressions. These activities will ask students to grapple with questions of authority, authenticity, consciousness, choice, empathy, interpretation and what it even means to “understand.”

The lab will run each Tuesday from Oct. 7 through Nov. 18, with the exception of Oct. 14. Sessions are drop-in and open to all, and participants may bring their lunch.




Research: Reviewer Split on Generative AI in Peer Review



A new global reviewer survey from IOP Publishing (IOPP) reveals a growing divide in attitudes among reviewers in the physical sciences regarding the use of generative AI in peer review. The study follows a similar survey conducted last year showing that while some researchers are beginning to embrace AI tools, others remain concerned about the potential negative impact, particularly when AI is used to assess their own work.

Currently, IOPP does not allow the use of AI in peer review, as generative models cannot meet the ethical, legal, and scholarly standards required. However, there is growing recognition of AI’s potential to support, rather than replace, the peer review process.

Key Findings:

  • 41% of respondents now believe generative AI will have a positive impact on peer review (up 12 percentage points from 2024), while 37% see it as negative (up 2 points). Only 22% are neutral or unsure, down from 36% last year, indicating growing polarisation in views. (A short sketch after this list checks that these figures are internally consistent.)
  • 32% of researchers have already used AI tools to support them with their reviews.
  • 57% would be unhappy if a reviewer used generative AI to write a peer review report on a manuscript they had co-authored, and 42% would be unhappy if AI were used to augment a peer review report.
  • 42% believe they could accurately detect an AI-written peer review report on a manuscript they had co-authored.
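
As a quick consistency check, the 2024 baseline implied by the reported year-over-year changes can be recovered by subtraction. A short Python sketch (the 2024 positive and negative figures below are derived from the stated changes, not quoted directly from the survey):

```python
# 2025 shares of respondents, as reported above (percent).
positive_2025, negative_2025, neutral_2025 = 41, 37, 22

# Reported year-over-year changes, in percentage points.
positive_delta, negative_delta = 12, 2
neutral_2024 = 36  # stated directly: "down from 36% last year"

# Back out the implied 2024 figures.
positive_2024 = positive_2025 - positive_delta  # 29
negative_2024 = negative_2025 - negative_delta  # 35

# Each year's three categories should account for all respondents.
assert positive_2025 + negative_2025 + neutral_2025 == 100
assert positive_2024 + negative_2024 + neutral_2024 == 100
print(f"2024 implied: {positive_2024}% positive, "
      f"{negative_2024}% negative, {neutral_2024}% neutral")
```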

Women tend to feel less positive than men about the potential of AI, suggesting a gendered difference in perceptions of its usefulness in peer review. Meanwhile, more junior researchers appear more optimistic about the benefits of AI than their more senior colleagues, who express greater scepticism.

When it comes to reviewer behaviour and expectations, 32% of respondents reported using AI tools to support them during the peer review process in some form. Notably, over half (53%) of those using AI said they apply it in more than one way. The most common use (21%) was editing grammar and improving the flow of text, and 13% said they use AI tools to summarise or digest articles under review, a practice that raises serious concerns around confidentiality and data privacy. A small minority (2%) admitted to uploading entire manuscripts into AI chatbots and asking them to generate a review on their behalf.


“These findings highlight the need for clearer community standards and transparency around the use of generative AI in scholarly publishing. As the technology continues to evolve, so too must the frameworks that support ethical and trustworthy peer review”, said Laura Feetham-Walker, Reviewer Engagement Manager at IOP Publishing and lead author of the study.

“One potential solution is to develop AI tools that are integrated directly into peer review systems, offering support to reviewers and editors without compromising security or research integrity. These tools should be designed to support, rather than replace, human judgment. If implemented effectively, such tools would not only address ethical concerns but also mitigate risks around confidentiality and data privacy, particularly the issue of reviewers uploading manuscripts to third-party generative AI platforms,” adds Feetham-Walker.





Mount Sinai Launches Cardiac Catheterization AI Research Lab



Image caption: Dr. Annapoorna Kini (left) and her team outside The Samuel Fineman Cardiac Catheterization Artificial Intelligence Research Lab

What You Should Know: 

– Mount Sinai Fuster Heart Hospital has announced the launch of The Samuel Fineman Cardiac Catheterization Artificial Intelligence (AI) Research Lab. The new AI lab will use the hospital’s renowned Cardiac Catheterization Lab to advance interventional cardiology and enhance patient care and outcomes.

– Dr. Annapoorna Kini will serve as the Director of the new AI lab. She also directs The Mount Sinai Hospital’s Cardiac Catheterization Lab, which is internationally recognized for its exceptional safety and expertise in complex cases.

Catheterization AI Research Lab Focus

The new lab will focus on many aspects of interventional cardiology, from procedural to educational. Through internal and external collaborations, the lab will explore existing data to gain insights that can significantly impact how healthcare is delivered. AI has the capability to spur new levels of innovation in areas like risk stratification, case planning, and optimizing outcomes.

“While AI is not a magic solution to every problem, there are many places it can make a notable improvement over traditional techniques or bring some approaches that were never possible within reach. In five or so years, we think that many workflows can be augmented by AI to better focus our resources where they are most needed,” says Dr. Kini.

The Samuel Fineman Cardiac Catheterization Artificial Intelligence Research Lab was established in memory of Samuel Fineman, who passed away in 2021. His generous gift was a show of appreciation for the care he received from Dr. Samin K. Sharma.


