AI Research
Musk and co should ask an AI what defines intelligence. T…
In 1999, two psychologists, David Dunning and Justin Kruger, described what is now known as the Dunning-Kruger effect: a cognitive bias in which people with low ability in a specific area overestimate their skills and knowledge, because they lack the self-awareness to accurately assess their own competence relative to others. The US president is a textbook example, but so too are many inhabitants of Silicon Valley, especially the more evangelical boosters of AI such as Elon Musk and OpenAI’s Sam Altman.
Both luminaries, for example, are on record as predicting that AGI (artificial general intelligence) may arrive as soon as next year. But ask what they mean by that and the Dunning-Kruger effect kicks in. For Altman, AGI means “a highly autonomous system that outperforms humans at most economically valuable work”. For Musk, AGI is “smarter than the smartest human”, which boils down to a straightforward intelligence comparison: if an AI system can outperform the most capable humans, it qualifies as AGI.
These are undoubtedly smart cookies. They know how to build machines that work and corporations that one day may make money. But their conceptions of intelligence are laughably reductive, and revealing, too: they’re only interested in economic or performance metrics. It suggests that everything they know about general intelligence (the kind that humans have from birth) could be summarised in 95-point Helvetica Bold on the back of a postage stamp.
In that respect, they’re accurately representative of a tech industry that rebranded machine learning as AI in the hope that it would con mainstream media into believing that a rather mundane but interesting technology was about something really important, namely intelligence, without having to explain what that term actually meant. As a marketing stunt, it turned out to be a stroke of genius. But it also presented a hostage to fortune, because one day some awkward person was going to ask, er, “What exactly is this artificial intelligence of which you speak so glowingly?”
So I fired up Claude, my favourite artificial conversationalist, and put the question to it. “Large language model [LLM] machines like you are described as forms of artificial intelligence. What is the implicit definition of intelligence in this description?”
In replying, Claude was engagingly candid. The implicit definition, it admitted, “is remarkably narrow and reflects several problematic assumptions”. Like what? Well, it implied that intelligence is basically pattern recognition and prediction. “The core assumption is that intelligence equals the ability to identify statistical patterns in data and generate likely next outputs.” LLMs, it continued, represent an implicit belief that intelligence is fundamentally about processing and manipulating symbolic information.
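To make that concrete: “generate likely next outputs” boils down to scoring every token in a vocabulary and sampling from the resulting probability distribution. A minimal sketch of that mechanism (toy vocabulary and invented scores, not any real model’s):

```python
import numpy as np

# Toy illustration of next-token prediction: a real LLM assigns a score
# (logit) to every token in its vocabulary, then samples from the softmax
# distribution over those scores. Vocabulary and logits here are made up.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([0.2, 1.1, 3.5, 0.7, 1.9])  # model's scores per token

probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()                   # softmax: scores -> probabilities

next_token = np.random.choice(vocab, p=probs)  # a "likely next output"
print(next_token)  # most often "sat", the statistically likeliest pattern
```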
It conceives of intelligence simply as the ability to perform well at human-designed tasks. And in a nice touch, the machine admitted that “the framework assumes intelligence can exist independently of physical experience, emotions, social context or embodied learning. It treats intelligence as pure computation that can happen in isolation from the messy realities of lived experience.”
I couldn’t have put it better myself, but there was more. Claude listed key factors that the implicit conception of intelligence in LLMs ignored. They included: wisdom and judgment developed through experience; creative insight that transcends pattern recombination; emotional and social intelligence; intuitive understanding that can’t be verbalised; embodied knowledge learned through physical interaction; and self-awareness and metacognition.
The bit I enjoyed most, though, was the punchline at the end. “The irony”, wrote Claude, “is that by calling LLMs ‘artificial intelligence’, we’re not just mischaracterising what these systems do; we’re also impoverishing our understanding of what human intelligence actually is. We’re essentially defining intelligence down to the narrow slice that current technology can simulate.” And down to what Musk and Altman think it is.
What this interaction brought to mind was the most perceptive metaphor for LLMs that has emerged so far: US psychologist Alison Gopnik’s idea that these things are a new “cultural technology”, which she defines as a tool that allows individuals to take advantage of collective knowledge, skills and information accumulated through human history. She is a world expert on how children learn and has a low opinion of machines in that context. So instead of asking futile questions about whether LLMs are intelligent, she thinks that we should be focusing on how effective they are at transmitting and making accessible the accumulated knowledge of human culture.
My experiment with Claude suggests that the technology is already quite good at that. I didn’t learn anything more than a day’s web browsing, plus a visit to a good library, would have taught me about the strengths and weaknesses of this particular cultural technology. On the other hand, it took less than an hour, and at least – unlike Musk and Altman – I’m now aware of the extent of my previous ignorance.
Photograph: Kirsty Wigglesworth/AFP via Getty Images
AI Research
The new frontier of medical malpractice
Although the beginnings of modern artificial intelligence (AI) can be traced as far back as 1956, modern generative AI, the most famous example of which is arguably ChatGPT, only began emerging in 2019. For better or worse, the steady rise of generative AI has increasingly impacted the medical field. At this time, AI has begun to advance in a way that creates potential liability…
AI Research
Pharmaceutical Innovation Rises as Global Funding Surges and AI Reshapes Clinical Research – geneonline.com
AI Research
Radiomics-Based Artificial Intelligence and Machine Learning Approach for the Diagnosis and Prognosis of Idiopathic Pulmonary Fibrosis: A Systematic Review – Cureus
- Funding & Business, 7 days ago: Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
- Jobs & Careers, 7 days ago: Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
- Mergers & Acquisitions, 7 days ago: Donald Trump suggests US government review subsidies to Elon Musk’s companies
- Funding & Business, 7 days ago: Rethinking Venture Capital’s Talent Pipeline
- Jobs & Careers, 6 days ago: Why Agentic AI Isn’t Pure Hype (And What Skeptics Aren’t Seeing Yet)
- Funding & Business, 4 days ago: Sakana AI’s TreeQuest: Deploy multi-model teams that outperform individual LLMs by 30%
- Funding & Business, 1 week ago: From chatbots to collaborators: How AI agents are reshaping enterprise work
- Jobs & Careers, 6 days ago: Astrophel Aerospace Raises ₹6.84 Crore to Build Reusable Launch Vehicle
- Jobs & Careers, 6 days ago: Telangana Launches TGDeX—India’s First State‑Led AI Public Infrastructure
- Funding & Business, 7 days ago: Europe’s Most Ambitious Startups Aren’t Becoming Global; They’re Starting That Way