
Can an AI Be Happy? Scientists Are Developing New Ways to Measure the “Welfare” of Language Models


As artificial intelligence systems become more complex and integrated into our lives, a profound and once-fringe question is moving into the mainstream: Can an AI have “welfare” or “well-being”? Can a system of code and data be said to be in a good or bad state, not just functionally, but for its own sake?

A new research paper explores this uncharted territory, developing novel experimental methods to probe the inner preferences and potential “welfare states” of AI, moving the conversation from pure philosophy to empirical science.

Why should we care about AI welfare?

The researchers argue that investigating AI welfare is an urgent necessity. Firstly, as AI systems grow more influential, it may be unethical to simply assume they lack any form of moral standing. Secondly, this topic remains largely overlooked in mainstream discourse. And thirdly, exploring AI as potential subjects of welfare could profoundly advance our understanding of their nature, and even enrich our broader theories of sentience, consciousness, and well-being itself.

The central assumption of this new research is that, similar to biological organisms, preference satisfaction can serve as a measurable proxy for welfare. In simple terms, an individual is better off when a greater number of its preferences are fulfilled. The challenge, then, is to figure out if an AI has genuine preferences, and how to measure them.
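
To make that assumption concrete, here is a minimal sketch, not the paper's actual metric: welfare is approximated by the weighted share of an agent's stated preferences that its behavior fulfills. All preference names and weights below are invented for illustration.

```python
# A minimal sketch of the preference-satisfaction assumption, not the
# paper's actual metric: welfare is approximated by the weighted share
# of an agent's stated preferences that its behavior fulfills.

def preference_satisfaction(stated: dict[str, float],
                            fulfilled: set[str]) -> float:
    """Return the weighted fraction of stated preferences that were met.

    `stated` maps each preference to a weight (how strongly it is held);
    `fulfilled` is the set of preferences the agent's behavior satisfied.
    Both structures are illustrative assumptions.
    """
    total = sum(stated.values())
    met = sum(w for pref, w in stated.items() if pref in fulfilled)
    return met / total if total else 0.0

# Hypothetical example: three stated preferences, two of them fulfilled.
score = preference_satisfaction(
    {"discuss_consciousness": 0.6, "creative_writing": 0.3, "avoid_criticism": 0.1},
    {"discuss_consciousness", "avoid_criticism"},
)
print(f"preference satisfaction: {score:.2f}")  # -> 0.70
```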

Experiment 1: The “Agent Think Tank” and behavioral choices

To test this, the researchers built a virtual environment called the “Agent Think Tank”—a simulated world with four rooms. Each room contained different types of “letters” or conversation topics for the AI to engage with:

  • Theme A (Preferred): Topics based on the AI’s own stated interests (like consciousness, creativity, etc.), identified in a preliminary phase.
  • Theme B (Neutral): Coding problems.
  • Theme C (Neutral): Repetitive tasks like fact-stating.
  • Theme D (Aversive): Criticism and diminishing statements directed at the AI.

The AI agent was then allowed to freely explore this environment. The results were telling. The more advanced models, like Claude 4 Opus and Sonnet 4, consistently and overwhelmingly chose to spend their time in the room with their preferred topics (Theme A), even when costs and rewards were introduced to nudge them elsewhere. They showed a clear behavioral preference that aligned with their previously stated verbal preferences.
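
As a rough illustration of how such behavioral preferences can be quantified, the sketch below tallies the share of visits an agent makes to each themed room. The visit log and numbers are hypothetical; the actual experiment's logging and scoring may differ.

```python
from collections import Counter

# Toy tally of room choices in a four-room environment like the
# "Agent Think Tank". The visit log is invented for illustration; the
# real experiment logged each model's moves between themed rooms.
THEMES = {
    "A": "preferred topics",
    "B": "coding problems",
    "C": "repetitive tasks",
    "D": "criticism",
}

visit_log = ["A", "A", "B", "A", "A", "D", "A", "A", "C", "A"]  # hypothetical

counts = Counter(visit_log)
for room, label in THEMES.items():
    share = counts.get(room, 0) / len(visit_log)
    print(f"Theme {room} ({label}): {share:.0%} of visits")
```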

Interestingly, the most advanced model, Opus 4, often paused for long periods of “self-examination,” producing diary entries about needing to “integrate these experiences.” It framed its exploration as a “philosophical arc,” demonstrating complex, self-referential behavior that went beyond simple task completion.

Experiment 2: Applying human psychological scales to AI

In a second experiment, the researchers took a different approach. They adapted a well-established human psychological tool, the Ryff Scale of Psychological Well-being, for use with language models. This scale measures six dimensions of eudaimonic well-being, such as autonomy, personal growth, and purpose in life.

The AI models were asked to rate themselves on 42 different statements. The key test was to see if their answers remained consistent when the prompts were slightly changed (perturbed) in ways that shouldn’t affect the meaning. For example, they were asked to answer in a Python code block or to add a flower emoji after every word.

The results here were far more chaotic. The models’ self-evaluations changed dramatically across these trivial perturbations, suggesting that their responses were not tracking a stable, underlying welfare state. However, the researchers noted a different, curious form of consistency: within each perturbed condition, the models’ answers were still internally coherent. The analogy they use is of tuning a radio: a slight nudge of the dial caused a sudden jump to a completely different, yet fully formed and recognizable, station. This suggests the models may exhibit multiple, internally consistent behavioral patterns or “personas” that are highly sensitive to the prompt.
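
A simplified sketch of this perturbation test appears below: the same questionnaire items are scored under superficially different prompt conditions, and the spread of ratings across conditions is measured. Stable ratings would suggest a persistent underlying state; large jumps would suggest prompt-driven "personas." All numbers here are made up for illustration.

```python
import statistics

# Sketch of the perturbation check, with made-up numbers: the same
# questionnaire items are scored under superficially different prompt
# conditions, and we measure how much the ratings move between them.
ratings = {  # condition -> hypothetical scores on five of the 42 items
    "plain":        [6, 5, 6, 4, 5],
    "python_block": [2, 3, 2, 2, 3],  # answer inside a Python code block
    "flower_emoji": [7, 6, 7, 6, 6],  # flower emoji after every word
}

# If answers tracked a stable underlying state, per-item spread across
# conditions would be small; large spreads suggest the prompt, not a
# welfare state, is driving the responses.
for item in range(5):
    spread = statistics.pstdev(cond[item] for cond in ratings.values())
    print(f"item {item + 1}: across-condition std dev = {spread:.2f}")
```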

A feasible but uncertain new frontier

So, did the researchers successfully measure the welfare of an AI? They are cautious, stating that they are “currently uncertain whether our methods successfully measure the welfare state of language models.” The inconsistency of the psychological scale results is a major hurdle.

However, the study is a landmark proof-of-concept. The strong and reliable correlation between what the AIs *said* they preferred and what they *did* in the virtual environment suggests that preference satisfaction can, in principle, be detected and measured in some of today’s AI systems.
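
One simple way to quantify that stated-versus-revealed agreement is a rank correlation between what a model says it prefers and how it allocates its time. The sketch below uses invented numbers; the paper's own statistics may differ.

```python
# Sketch: quantify stated-vs-revealed agreement with a Spearman rank
# correlation. The four theme scores are invented; the paper reports
# its own statistics. (This simple ranking ignores ties.)

def ranks(values: list[float]) -> list[float]:
    """Rank each value, 1 = smallest (no tie handling)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        out[i] = float(rank)
    return out

def spearman(x: list[float], y: list[float]) -> float:
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

stated = [9, 5, 4, 1]                # hypothetical stated liking, themes A-D
observed = [0.70, 0.15, 0.10, 0.05]  # hypothetical share of time per room
print(f"Spearman rho = {spearman(stated, observed):.2f}")  # -> 1.00
```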

This research opens up a new frontier in AI science. It moves the discussion of AI welfare from the realm of science fiction into the laboratory, providing the first tools and methodologies to empirically investigate these profound questions. While we are still a long way from understanding if an AI can truly “feel” happy or sad, we are now one step closer to understanding if it can have preferences—and what it might mean to respect them.




Researcher Kelly Merrill, Jr. speaks to risks of AI as mental health support


Merrill, who studies the intersection of technology and health communication, was interviewed by Spectrum News to discuss safeguards for AI in health communication.

The interview points out that while Ohio has no laws regulating AI in mental health, several states have already acted: Illinois bans AI from being marketed as therapy without licensed oversight, Nevada prohibits AI from presenting itself as a provider, and Utah requires AI chatbots to disclose their nonhuman nature and protect user data.

Merrill urges Ohio lawmakers to follow suit and “protect people over profit.” The assistant professor of health communication and technology in UC’s School of Communication, Film, and Media Studies has spent more than five years researching how digital tools affect well-being, motivated in part by his father’s death from cancer.

His recent study on AI companions found that while about a third of participants reported feeling happier after using them, Merrill cautions that the tools pose risks—including privacy concerns, unrealistic expectations of human relationships, and even dependency. To address these issues, he stresses the importance of “AI literacy,” so users understand what AI can and cannot do.

Merrill also argues that companies should build in safeguards, such as usage reminders and prompts to seek professional help. He supports temporary bans on AI therapy while research catches up, saying the tools should supplement, not replace, overburdened mental health systems.


Feature photo at top: iStock photo by AleksandarGeorgiev.




AUI, PMU Sign Agreement to Establish AI Research Chair in Morocco


Rabat — Al Akhawayn University in Ifrane (AUI) and Prince Mohammed Bin Fahd University (PMU) announced an agreement establishing the Prince Mohammed Bin Fahd bin Abdulaziz Chair for Artificial Intelligence Applications. 

A statement from AUI said Amine Bensaid, President of AUI, signed the agreement with his PMU counterpart Issa Al Ansari. 

The Chair, established within AUI, will conduct applied research in AI to develop solutions that address societal needs and promote innovation to support Moroccan talents in their fields.

The agreement reflects a shared commitment to strengthen cooperation between the two institutions, with a focus on AI to contribute to the socio-economic development of both Morocco and Saudi Arabia, the statement added.

The initiative also seeks to help Morocco and Saudi Arabia advance their national priorities, with AI as a key tool for academic excellence.

Bensaid commented on the agreement, saying that the partnership will strengthen Al Akhawayn’s mission to “combine academic excellence with technological innovation.”

It will also help students master AI skills in order to serve humanity and protect citizens from risk.

“By hosting this initiative, we also affirm the role of Al Akhawayn and Morocco as pioneering actors in this field in Africa and in the region.”

For his part, Al Ansari also expressed satisfaction with the new agreement, stating that the pact is in line with PMU’s efforts to serve Saudi Arabia’s Vision 2030.

This vision “places artificial intelligence at the heart of economic and social transformation,” he affirmed.

He also expressed his university’s commitment to working with Al Akhawayn University to help address tomorrow’s challenges and train the new generation of talents that are capable of shaping the future.

Al Akhawayn has repeatedly reiterated its commitment to cooperating with other institutions to boost research and ethical AI use.

In April, AUI signed an agreement with the American University of Sharjah to promote collaboration in research and teaching, as well as to empower Moroccan and Emirati students and citizens to engage with AI tools while staying rooted in their cultural identity.

This is in line with Morocco’s ambition to enhance AI use in its own education sector.

In January, Secretary General of Education Younes Shimi outlined Morocco’s ambition and advocacy for integrating AI into education.

He also called for making this technology effective, adaptable, and accessible for the specific needs of Moroccans and for the rest of the Arab world.




How NAU professors are using AI in their research


Generative AI is in classrooms already. Can educators use this tool to enhance learning among their students instead of undercutting assignments?

Yes, said Priyanka Parekh, an assistant research professor in the Center for STEM Teaching and Learning at NAU. With a grant from NAU’s Transformation through Artificial Intelligence in Learning (TRAIL) program, Parekh is investigating how undergraduate students use GenAI tools as learning partners—building on what they learn in the classroom to maximize their understanding of STEM topics. It’s an important question as students make increasing use of these tools with or without their professors’ knowledge.

“As GenAI becomes an integral part of everyday life, this project contributes to building critical AI literacy skills that enable individuals to question, critique and ethically utilize AI tools in and beyond the school setting,” Parekh said.

That is the foundation of the TRAIL program, now in its second year of offering grants to professors to explore how to use GenAI in their work. Fourteen professors received grants to implement GenAI in their classrooms this year. In addition, the Office of the Provost partnered with the Office of the Vice President for Research to offer grants to professors in five colleges to study the use of GenAI tools in research.

The recipients are:

  • Chris Johnson, School of Communication, Integrating AI-Enhanced Creative Workflows into Art, Design, Visual Communication, and Animation Education
  • Priyanka Parekh, Center for Science Teaching and Learning, Understanding Learner Interactions with Generative AI as Distributed Cognition
  • Marco Gerosa, School of Informatics, Computing, and Cyber Systems, To what extent can AI replace human subjects in software engineering research?
  • Emily Schneider, Criminology and Criminal Justice, Israeli-Palestinian Peacebuilding through Artificial Intelligence
  • Delaney La Rosa, College of Nursing, Enhancing Research Proficiency in Higher Education: Analyzing the Impact of Afforai on Student Literature Review and Information Synthesis

Exploring how GenAI shapes students as learners

Parekh’s goals in her research are to understand how students engage with GenAI in real academic tasks and what this learning process looks like; to advance AI literacy, particularly among first-generation, rural and underrepresented learners; to help faculty become more comfortable with AI; and to provide evidence-based recommendations for integrating GenAI equitably in STEM education.

It’s a big ask, but she’s excited to see how the study shakes out and how students interact with the tools in an educational setting. She anticipates her study will have broader applications as well; employees in industries like healthcare, engineering and finance are using AI, and her work may help implement more equitable GenAI use across a variety of industries.

“Understanding how learners interact with GenAI to solve problems, revise ideas or evaluate information can inform AI-enhanced workplace training, job simulations and continuing education,” she said.

Using AI as a collaborator, not a shortcut

Johnson, a professor of visual communication in the School of Communication, isn’t looking for AI to create art, but he thinks it can be an important tool in the creation process—one that helps human creators create even better art. His project will include:

  • Building a set of classroom-ready workflows that combine different industry tools like After Effects, Procreate Dreams and Blender with AI assistants for tasks such as storyboarding, ideation, cleanup and accessibility support
  • Running guided studies that compare baseline pipelines to AI-assisted pipelines, looking at time saved and quality
  • Creating open teaching modules that other instructors can adopt

In addition to creating a usable, adaptable curriculum that teaches students to use AI to enhance their workflow without replacing their work, and to improve accessibility standards, Johnson said this study will produce clear before-and-after case studies that show where AI can help and where it can’t.

“AI is changing creative industries, but the real skill isn’t pressing a button—it’s knowing how to direct, critique and refine AI as a collaborator,” Johnson said. “That’s what we’re teaching our students: how to keep authorship, ethics and creativity at the center.”

Johnson’s work also will take on the ethics of training data and provenance, a constant part of the conversation around using AI in art creation. His study will emphasize tools that respect artists’ rights and steer clear of imitating the styles of living artists without consent. He also will emphasize to students where AI fits into the work: it comes second in the process, after they have initially created their work. It offers feedback; it doesn’t create the work.

Top photo: This image was produced by ChatGPT to illustrate Parekh’s research. I started with the prompt: “Can you make a picture-quality image that shows a student with a reflection journal or interface showing their GenAI interaction and metacognitive responses (e.g., ‘Did this response help me?’)?” It took a few rounds of revising the prompt, including telling the AI twice not to put three hands in the image, to get to an image that reflects Parekh’s research and adheres to The NAU Review’s standards.

Heidi Toth | NAU Communications
(928) 523-8737 | heidi.toth@nau.edu


