
What is AGI? Nobody agrees, and it’s tearing Microsoft and OpenAI apart.


The reported $100 billion profit threshold we mentioned earlier conflates commercial success with cognitive capability, as if a system’s ability to generate revenue says anything meaningful about whether it can “think,” “reason,” or “understand” the world like a human.

Photo: Sam Altman speaks onstage during The New York Times Dealbook Summit 2024 at Jazz at Lincoln Center on December 4, 2024, in New York City. (Credit: Eugene Gologursky via Getty Images)


Depending on your definition, we may already have AGI, or it may be physically impossible to achieve. If you define AGI as “AI that performs better than most humans at most tasks,” then current language models potentially meet that bar for certain types of work (which tasks, which humans, what is “better”?), but agreement on whether that is true is far from universal. This says nothing of the even murkier concept of “superintelligence”—another nebulous term for a hypothetical, god-like intellect so far beyond human cognition that, like AGI, it defies any solid definition or benchmark.

Given this definitional chaos, researchers have tried to create objective benchmarks to measure progress toward AGI, but these attempts have revealed their own set of problems.

Why benchmarks keep failing us

The search for better AGI benchmarks has produced some interesting alternatives to the Turing Test. The Abstraction and Reasoning Corpus (ARC-AGI), introduced in 2019 by François Chollet, tests whether AI systems can solve novel visual puzzles that require genuine analytical reasoning rather than recall of previously seen patterns.

“Almost all current AI benchmarks can be solved purely via memorization,” Chollet told Freethink in August 2024. A major problem with current AI benchmarks stems from data contamination: when test questions end up in training data, models can appear to perform well without truly “understanding” the underlying concepts. Large language models are master imitators, mimicking patterns found in their training data, but they do not reliably originate novel solutions to problems.
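To make the contamination problem concrete, here is a minimal sketch of the kind of n-gram overlap check researchers use to flag benchmark questions that may have leaked into a training corpus. The function names, the 8-token window, and the 50 percent threshold are illustrative assumptions, not details from the article or from ARC-AGI itself.

```python
from typing import Iterable, Set, Tuple

def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    """Return the set of n-token shingles in a text (simple whitespace tokenization)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contamination_rate(test_item: str, training_docs: Iterable[str], n: int = 8) -> float:
    """Fraction of the test item's n-grams that also appear somewhere in the training corpus.

    A high rate suggests the benchmark question may have leaked into the training data,
    so a correct answer may reflect memorization rather than reasoning.
    """
    test_grams = ngrams(test_item, n)
    if not test_grams:
        return 0.0
    train_grams: Set[Tuple[str, ...]] = set()
    for doc in training_docs:
        train_grams |= ngrams(doc, n)
    return len(test_grams & train_grams) / len(test_grams)

if __name__ == "__main__":
    # Hypothetical benchmark question and a training document that quotes it almost verbatim.
    question = "Which puzzle transformation maps the input grid to the output grid in this task"
    corpus = [
        "Forum post: which puzzle transformation maps the input grid to the output grid "
        "in this task? The answer is to mirror the grid and recolor the corners."
    ]
    rate = contamination_rate(question, corpus)
    # Flag the item as contaminated if more than half of its 8-grams appear in training data.
    print(f"overlap: {rate:.0%}", "-> possibly contaminated" if rate > 0.5 else "-> looks clean")
```

Real contamination audits operate at far larger scale and with proper tokenizers, but even this toy check shows why a high benchmark score can reflect memorization rather than reasoning.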

But even sophisticated benchmarks like ARC-AGI face a fundamental problem: They’re still trying to reduce intelligence to a score. And while improved benchmarks are essential for measuring empirical progress in a scientific framework, intelligence isn’t a single thing you can measure, like height or weight—it’s a complex constellation of abilities that manifest differently in different contexts. Indeed, we don’t even have a complete functional definition of human intelligence, so defining artificial intelligence by any single benchmark score is likely to capture only a small part of the complete picture.





Varo Bank Appoints Asmau Ahmed as Chief Artificial Intelligence Officer to Drive AI Innovation


Varo Bank has hired Asmau Ahmed as its first Chief Artificial Intelligence and Data Officer (CAIDO) to lead company-wide AI and machine-learning efforts. Ahmed has over 20 years of experience in leading teams and delivering products at Google X, Bank of America, Capital One, and Deloitte. She will focus on advancing Varo’s mission-driven tech evolution and improving customers’ financial experiences through AI. Varo uses AI to enhance its credit-decisioning processes, and Ahmed’s expertise will help guide future institution-wide advancements in AI.

Varo Bank, the first all-digital nationally chartered bank in the U.S., has announced the hiring of Asmau Ahmed as its first Chief Artificial Intelligence and Data Officer (CAIDO). Ahmed, who brings over 20 years of expertise in innovation from Google, Bank of America, and Capital One, will lead the company’s AI and machine-learning efforts, reporting directly to CEO Gavin Michael [1].

Ahmed’s appointment comes as Varo Bank continues to leverage AI to enhance its core functions. The bank has expanded credit access through data and advanced machine learning-driven decisioning, reinforcing its mission of advancing financial inclusion with technology. The Varo Line of Credit, launched in 2024, uses proprietary self-learning models to refine its credit decisioning, giving some customers with reliable Varo banking histories access to loans for which traditional credit-score systems would have excluded them [1].

Ahmed’s extensive experience includes leading technology, portfolio, and customer-facing product teams at Bank of America and Capital One, as well as co-leading the Digital Innovation team at Deloitte. She also founded Plum Perfect, a visual-search advertising tech company. Her expertise will be instrumental in guiding Varo Bank’s future advancements in AI.

“As a nationally-chartered bank, Varo is able to use data and AI in an innovative way that stands out across the finance industry,” said Ahmed. “Today we are applying machine learning for underwriting, as well as fraud prevention and detection. I am thrilled to lead the next phase of Varo’s mission-driven tech evolution and ensure AI can improve our customers’ experiences and financial lives” [1].

Varo Bank’s AI and data science efforts are designed to enhance various core functions of the company’s tech stack. The appointment of Ahmed as CAIDO underscores the bank’s commitment to leveraging AI to improve customer experiences and financial outcomes.

References

[1] https://www.businesswire.com/news/home/20250904262245/en/Varo-Bank-to-Accelerate-Responsible-and-Customer-Focused-AI-Efforts-with-New-Chief-Artificial-Intelligence-Officer-Asmau-Ahmed





Guest column—University of Tennessee “Embraces” Artificial Intelligence, Downplays Dangers – The Pacer


At the end of February, the University of Tennessee Board of Trustees adopted its first artificial intelligence policy.

The board produced its policy statement with little attempt to engage faculty and students in meaningful discussions about the serious problems that may arise from AI.

At UT Martin, the Faculty Senate approved the board’s policy statement in late April, also without significant input from faculty or students.

In Section V of the document, “Policy Statement and Guiding Principles,” the first subsection states: “UT Martin embraces the use of AI as a powerful tool for the purpose of enhancing human learning, creativity, analysis, and innovation within the academic context.”

The document notes potential problems in areas such as academic integrity, intellectual property rights, and the security of protected university data. But it does not address what may be the most dangerous and most likely consequence of AI’s rapid growth: the limiting of human learning, creativity, analysis and innovation.

Over the past two years, faculty in the humanities have seen students increasingly turn to AI, even for low-stakes assignments. AI allows students to bypass the effort of trying to understand a reading.

If students attempt a difficult text and struggle to make sense of it, they can ask AI to explain. More often, however, students skip reading altogether and ask AI for a summary, analysis or other grade-directed answers.

In approaching a novel, a historical narrative or even the social realities of our own time, readers start with limited knowledge of the characters, events or forces at play. To understand a character’s motives, the relationship between events, or the social, economic and political interests driving them, we must construct and refine a mental image—a hypothesis—through careful reading.

This process is the heart of education. Only by grappling with a text, a formula or a method for solving a problem do we truly learn. Without that effort, students may arrive at the “right” answer, but they have not gained the tools to understand the problems they face—or to live morally and intelligently in the world.

As complex as a novel or historical narrative may be, the real world is far more complex. If we rely on AI’s interpretation instead of building our own understanding, we deprive ourselves of the skills needed to engage with that complexity.

UT Martin’s mission statement says: “The University of Tennessee at Martin educates and engages responsible citizens to lead and serve in a diverse world.” Yet we fail this mission in many ways. Most students do not follow current events and are unaware of pressing issues. Few leave the university with a love of reading, despite its importance to responsible citizenship.

With this new AI policy, the university risks compounding these failures by embracing a technology that may further erode students’ ability to think critically about the world around them.





Artificial intelligence can predict risk of heart attack – The Anniston Star


