AI Insights
Does AI understand? — Harvard Gazette

Imagine an ant crawling in sand, tracing a path that happens to look like Winston Churchill. Would you say the ant created an image of the former British prime minister? According to the late Harvard philosopher Hilary Putnam, most people would say no: The ant would need to know about Churchill, and lines, and sand.
The thought experiment has renewed relevance in the age of generative AI. As artificial intelligence firms release ever-more-advanced models that reason, research, create, and analyze, the meanings behind those verbs get slippery fast. What does it really mean to think, to understand, to know? The answer has big implications for how we use AI, and yet those who study intelligence are still reckoning with it.
“When we see things that speak like humans, that can do a lot of tasks like humans, write proofs and rhymes, it’s very natural for us to think that the only way that thing could be doing those things is that it has a mental model of the world, the same way that humans do,” said Keyon Vafa, a postdoctoral fellow at the Harvard Data Science Initiative. “We as a field are making steps trying to understand, what would it even mean for something to understand? There’s definitely no consensus.”
In human cognition, expression of a thought implies understanding of it, said senior lecturer on philosophy Cheryl Chen. We assume that someone who says “It’s raining” knows about weather, has experienced the feeling of rain on the skin and perhaps the frustration of forgetting to pack an umbrella. “For genuine understanding,” Chen said, “you need to be kind of embedded in the world in a way that ChatGPT is not.”
Still, today’s artificial intelligence systems can seem awfully convincing. Both large language models and other types of machine learning are made of neural networks — computational models that pass information through layers of neurons loosely modeled after the human brain.
“Neural networks have numbers inside them; we call them weights,” said Stratos Idreos, Gordon McKay Professor of Computer Science at SEAS. “Those numbers start by default randomly. We get data through the system, and we do mathematical operations based on those weights, and we get a result.”
He gave the example of an AI trained to identify tumors in medical images. You feed the model hundreds of images that you know contain tumors, and hundreds of images that don’t. Based on that information, can the model correctly determine if a new image contains a tumor? If the result is wrong, you give the system more data, and you tinker with the weights, and slowly the system converges on the right output. It might even identify tumors that doctors would miss.
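Idreos' description — weights that start out random, predictions compared against known labels, weights nudged until the system converges on the right output — can be sketched in a few lines. This is a minimal toy version: the random feature vectors stand in for real medical images, and the single logistic "neuron" with a gradient step is an illustrative stand-in, not the lab's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the tumor example: each "image" is a vector of
# pixel features, labeled 1 (tumor) or 0 (no tumor).
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)

# The weights start by default randomly, as Idreos describes.
w = rng.normal(size=8)

def predict(X, w):
    # A single "neuron": a weighted sum squashed to a 0..1 score.
    return 1.0 / (1.0 + np.exp(-(X @ w)))

# Training: compare predictions to the known labels, tinker with
# the weights, and repeat until the outputs converge.
for _ in range(500):
    p = predict(X, w)
    grad = X.T @ (p - y) / len(y)  # direction that reduces the error
    w -= 0.5 * grad                # nudge the weights

accuracy = ((predict(X, w) > 0.5) == y).mean()
```

On this separable toy data the loop converges quickly; real systems differ mainly in scale, with millions of weights and many stacked layers rather than one.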
Vafa devotes much of his research to putting AI through its paces, to figure out both what the models actually understand and how we would even know for sure. His criteria come down to whether the model can reliably demonstrate a world model, a stable yet flexible framework that allows it to generalize and reason even in unfamiliar conditions.
Sometimes, Vafa said, it sure seems like a yes.
“If you look at large language models and ask them questions that they presumably haven’t seen before — like, ‘If I wanted to balance a marble on top of an inflatable beach ball on top of a stove pot on top of grass, what order should I put them in?’ — the LLM would answer that correctly, even though that specific question wasn’t in its training data,” he said. That suggests the model does have an effective world model — in this case, the laws of physics.
But Vafa argues the world models often fall apart under closer inspection. In a paper, he and a team of colleagues trained an AI model on street directions around Manhattan, then asked it for routes between various points. Ninety-nine percent of the time, the model spat out accurate directions. But when they tried to build a cohesive map of Manhattan out of its data, they found the model had invented roads, leapt across Central Park, and traveled diagonally across the city’s famously right-angled grid.
“When I turn right, I am given one map of Manhattan, and when I turn left, I’m given a completely different map of Manhattan,” he said. “Those two maps should be coherent, but the AI is essentially reconstructing the map every time you take a turn. It just didn’t really have any kind of conception of Manhattan.”
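The coherence check Vafa describes can be illustrated with a deliberately inconsistent toy "model." Everything below is invented for illustration: `model_next` is a hypothetical lookup that mimics a system whose answer about the same Manhattan corner depends on how you arrived there, which is exactly the failure he found.

```python
# A hypothetical model that "reconstructs the map every time you
# take a turn": its answer depends on the route taken, not just on
# the corner reached. The table entries are made up for illustration.
def model_next(route, turn):
    table = {
        (("5th Ave", "E 23rd St"), "left"): "Broadway",
        (("Broadway", "E 23rd St"), "left"): "Madison Ave",
    }
    return table[(route, turn)]

# Two different routes arrive at the same corner of E 23rd St.
# A coherent world model would give the same answer for a left
# turn from that corner; this one does not.
a = model_next(("5th Ave", "E 23rd St"), "left")
b = model_next(("Broadway", "E 23rd St"), "left")
coherent = (a == b)  # False: the two implied maps disagree
```

Accurate turn-by-turn answers can coexist with an incoherent underlying map, which is why checking consistency across routes, rather than accuracy on single queries, is the sharper test.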
Rather than operating from a stable understanding of reality, he argues, AI memorizes countless rules and applies them to the best of its ability, a kind of slapdash approach that looks intentional most of the time but occasionally reveals its fundamental incoherence.
Sam Altman, the CEO of OpenAI, has said we will reach AGI — artificial general intelligence, which can do any cognitive task a person can — “relatively soon.” Vafa is keeping his eye out for more elusive evidence: that AIs reliably demonstrate consistent world models — in other words, that they understand.
“I think one of the biggest challenges about getting to AGI is that it’s not clear how to define it,” said Vafa. “This is why it’s important to find ways to measure how well AI systems can ‘understand’ or whether they have good world models — it’s hard to imagine any notion of AGI that doesn’t involve having a good world model. The world models of current LLMs are lacking, but once we know how to measure their quality, we can make progress toward improving them.”
Idreos’ team at the Data Systems Laboratory is developing more efficient approaches so AI can process more data and reason more rigorously. He sees a future where specialized, custom-built models solve important problems, such as identifying cures for rare diseases — even if the models don’t know what disease is. Whether or not that counts as understanding, Idreos said, it certainly counts as useful.
Here’s what parents need to know about artificial intelligence

ChatGPT, AI chatbots, and the growing world of artificial intelligence: it’s another conversation parents may not have planned on having with their kids.
A new Harvard study found that half of all young adults have already used AI, and younger kids are quickly joining in.
Karl Ernsberger, a former high school teacher turned AI entrepreneur, says that’s not necessarily a bad thing.
“It is here to stay. It’s like people trying to resist the Industrial Revolution,” Ernsberger said.
Ernsberger believes tools like chatbots can be powerful for learning, but only if kids and parents know the limits.
One example is “Rudi the Red Panda,” a virtual character available for free in kids mode on X’s Grok AI. When asked, Rudi can even answer questions about Arizona history.
“The five C’s of Arizona are Copper, Cotton, Cattle, Citrus, Climate,” Rudi said.
But Ernsberger warns that children may struggle to understand that Rudi isn’t real, and that “friendship” with a chatbot is different from human connection.
“It’s hard for the student to actually develop a real friendship,” he said. “They get confused by that because friendship is something they continue to learn about as they get older.”
When asked if Rudi was really my best friend, it replied: “I’m as real as a red panda can be in your imagination. I’m here to be your best friend.”
That, Ernsberger says, is where parents need to step in.
For families trying to keep kids safe while exploring AI, Ernsberger’s first recommendation is simple.
“Use it yourself. There are so many use cases, so many different things that can be done with AI. Just finding a familiarity with it can help you find the weaknesses for your case, and its weaknesses for your kids.”
Then, he says, if your child is using AI, be there alongside them to supervise and keep the human connection.
“The key thing with AI is it’s challenging our ability to connect with each other, that’s a different kind of challenge to society than any other tool we’ve built in the past,” Ernsberger said.
Regulators are paying attention, too.
Arizona Attorney General Kris Mayes, along with 43 other state attorneys general, recently sent a letter to 12 AI companies, including the maker of Rudi, demanding stronger safeguards to protect young users.
This MOSI exhibit will give you a hands-on look at artificial intelligence – Tampa Bay Times
Spain Leads Europe in Adopting AI for Vacation Planning, Study Shows

Spain records higher adoption of artificial intelligence (AI) in vacation planning than the European average, according to the 2025 Europ Assistance-Ipsos barometer.
The study finds that 20% of Spanish travelers have used AI-based tools to organize or book their holidays, compared with 16% across Europe.
The research highlights Spain as one of the leading countries in integrating digital tools into travel planning. AI applications are most commonly used for accommodation searches, destination information, and itinerary planning, indicating a shift in how tourists prepare for trips.
Growing Use of AI in Travel
According to the survey, 48% of Spanish travelers using AI rely on it for accommodation recommendations, while 47% use it for information about destinations. Another 37% turn to AI tools for help creating itineraries. The technology is also used for finding activities (33%) and booking platform recommendations (26%).
Looking ahead, the interest in AI continues to grow. The report shows that 26% of Spanish respondents plan to use AI in future travel planning, compared with 21% of Europeans overall. However, 39% of Spanish participants remain undecided about whether they will adopt such tools.
Comparison with European Trends
The survey indicates that Spanish travelers are more proactive than the European average in experimenting with AI for holidays. While adoption is not yet universal, Spain’s figures consistently exceed continental averages, underscoring the country’s readiness to embrace new technologies in tourism.
In Europe as a whole, AI is beginning to make inroads into vacation planning but at a slower pace. The 2025 Europ Assistance-Ipsos barometer suggests that cultural attitudes and awareness of technological solutions may play a role in shaping adoption levels across different countries.
Changing Travel Behaviors
The findings suggest a gradual transformation in how trips are organized. Traditional methods such as guidebooks and personal recommendations are being complemented—and in some cases replaced—by AI-driven suggestions. From streamlining searches for accommodation to tailoring activity options, digital tools are expanding their influence on the traveler experience.
While Spain shows higher-than-average adoption rates, the survey also reflects caution. A significant portion of travelers remain unsure about whether they will use AI in the future, highlighting that trust, familiarity, and data privacy considerations continue to influence behavior.
The Europ Assistance-Ipsos barometer confirms that Spain is emerging as a frontrunner in adopting AI for travel planning, reflecting both a strong appetite for digital solutions and an evolving approach to how holidays are designed and booked.