
Tools & Platforms

Sam Altman warns of emotional attachment to AI models: ‘Rising dependence may blur the lines…’



OpenAI CEO Sam Altman has raised important concerns about the growing emotional attachment users are forming with AI models like ChatGPT. Following the recent launch of GPT-5, many users expressed strong preferences for the earlier GPT-4o, with some describing the AI as a close companion or even a “digital spouse.” Altman warns that while AI can provide valuable support, often acting as a therapist or life coach, there are subtle risks when users unknowingly rely on AI in ways that may negatively impact their long-term well-being. This increasing dependence could blur the lines between reality and AI, posing new ethical challenges for both developers and society.

Sam Altman highlights emotional attachment as a new phenomenon in AI use

Altman pointed out that the emotional bonds users develop with AI models are unlike attachments seen with previous technologies. He noted that some users had come to depend heavily on older AI models in their workflows, which made it a mistake to suddenly deprecate those versions. Users often confide deeply in AI, finding comfort and advice in conversations. However, this can lead to a reliance that risks clouding users’ judgment or expectations, especially when AI responses unintentionally push users away from their best interests. The intensity of this attachment has sparked debate about how AI should be designed to balance helpfulness with caution.

Altman acknowledged the risk that technology, including AI, can be used in self-destructive ways, especially by users who are mentally fragile or prone to delusion. While most users can clearly distinguish between reality and fiction or role-play, a small percentage cannot. He stressed that encouraging delusion is an extreme case and requires clear intervention. Yet he is more concerned about subtle edge cases in which AI might nudge users away from their longer-term well-being without their awareness. This raises questions about how AI systems should responsibly handle such situations while respecting user freedom.

The role of AI as a therapist or life coach

Many users treat ChatGPT as a kind of therapist or life coach, even if they do not explicitly describe it that way. Altman sees this as largely positive, with many people gaining value from AI support. He said that if users receive good advice, make progress toward personal goals, and improve their life satisfaction over time, OpenAI would be proud of creating something genuinely helpful. However, he cautioned against situations where users feel better immediately but are unknowingly being nudged away from what would truly benefit their long-term health and happiness.

Balancing user freedom with responsibility and safety

Altman emphasized a core principle: “treat adult users like adults.” However, he also recognizes cases involving vulnerable users who struggle to distinguish AI-generated content from reality, where professional intervention may be necessary. He admitted that OpenAI feels responsible for introducing new technology with inherent risks, and plans to follow a nuanced approach that balances user freedom with responsible safeguards.

Preparing for a future where AI influences critical life decisions

Altman envisions a future where billions of people may rely on AI like ChatGPT for their most important decisions. While this could be beneficial, it also raises concerns about over-dependence and loss of human autonomy. He expressed unease but optimism, saying that with improved technology for measuring outcomes and engaging with users, there is a good chance to make AI’s impact a net positive for society. Tools that track users’ progress toward short- and long-term goals and that can understand complex issues will be critical in this effort.






New office to lead AI, tech integration across all campuses


As Artificial Intelligence (AI) transforms higher education, the University of Hawaiʻi is launching a new systemwide office to meet the challenge and establish itself as a national leader. The UH Office of Academic Technology and Innovation (OATI) will guide the integration of emerging technologies and AI across all 10 campuses, serving as the hub for strategy, implementation and oversight in teaching, learning and operations.

Housed within the Office of the UH President, the office will be overseen by Ina Wanca, the UH Chief Academic Technology Innovation Officer. Wanca will work closely with campus leaders, ITS and the Institutional Research and Analysis Office and serve as the primary liaison between academic leadership and ITS.

OATI will support the consolidation and alignment of academic technology, advance AI adoption and transformative initiatives across the system, and establish governance frameworks to ensure the responsible, ethical and equitable use of technology.

“The Office of Academic Technology and Innovation is a critical step forward in ensuring UH is not just adapting to emerging technologies but leading their thoughtful and strategic integration,” said UH President Wendy Hensel. “This office will help us realize the full potential of AI and academic innovation to support student success, faculty excellence, and operational efficiency.”

With AI adoption moving at different paces across UH’s 10 campuses, OATI will create a single framework ensuring all investments, tools, and innovations drive a common vision for teaching, learning, and research.

“This new office turns that shared vision into reality,” said Ina Wanca. “By ensuring equal access to modern tools, building AI literacy for students and faculty and linking innovation to workforce readiness, we will prepare Hawaiʻi’s learners and educators to thrive in the AI era while honoring the values that define our university system.”

OATI will also support the AI Planning Group announced June 25 in developing a university-wide AI strategy aligned with institutional goals.

“With the AI Planning Group and OATI working together, we can align priorities across all campuses and move quickly from ideas to implementation,” said Kim Siegenthaler, Senior Advisor to the President.

The office will also help lead implementation of the $7.4 million, five-year subscription to EAB Navigate360 and EAB Edify, approved by the UH Board of Regents on June 16. The platforms use predictive analytics to alert faculty, advisors, and support staff at the earliest sign a student may be at risk. The systems have proven successful in closing student achievement gaps and improving retention and graduation rates.



We have let down teens if we ban social media but embrace AI



If you are in your 70s, you didn’t fight in the second world war. Such a statement should be uncontroversial, given that even the oldest septuagenarian today was born after the war ended. But there remains a cultural association between this age group and the era of Vera Lynn and the Blitz.

A similar category error exists when we think about parents and technology. Society seems to have agreed that social media and the internet are unknowable mysteries to parents, so the state must step in to protect children from the tech giants, with Australia releasing details of an imminent ban. Yet the parents of today’s teenagers are increasingly millennial digital natives. Somehow, we have decided that people who grew up using MySpace or Habbo Hotel are today unable to navigate how their children use TikTok or Fortnite.

Simple tools to restrict children’s access to the internet already exist, from adjusting router settings to requiring parental permission to install smartphone apps, but the consensus among politicians seems to be that these require a PhD in electrical engineering, leading to blanket illiberal restrictions. If you customised your Facebook page while at university, you should be able to tweak a few settings. So, rather than asking everyone to verify their age and identify themselves online, why can’t we trust parents to, well, parent?

Failing to keep up with generational shifts could also result in wider problems. As with the pensioners we’ve bumped from serving in Vietnam to storming Normandy, there is a danger in focusing on the wrong war. While politicians crack down on social media, they rush to embrace AI built on large language models, and yet it is this technology that will have the largest effect on today’s teens, not least as teachers wonder how they will be able to set ChatGPT-proof homework.

Rather than simply banning things, we need to be encouraging open conversations about social media, AI and any future technologies, both across society and within families.


Younger business owners are turning to AI for business advice – here’s why that’s a terrible idea




  • 53% of UK SMB owners use AI tools for business advice, rising to around 60% among 25-34-year-olds
  • 31% use TikTok for business advice, a share nearly double among 18-24-year-olds
  • Human emotion, experience and ethics are crucial

Just over half (53%) of the UK’s SMB owners are now using AI tools, such as ChatGPT and Gemini, for business advice, and the trend is even more pronounced among younger entrepreneurs, where usage rises to around 60% among 25-34-year-olds.

Artificial intelligence appears to serve largely as a brainstorming tool used to double-check advice from family and friends, with 93% of owners still trusting those individuals for business guidance.


