
Tools & Platforms

Psychologist urges caution when turning to AI for emotional support



Artificial intelligence is becoming an increasingly popular tool that can help with many areas of life — from solving work-related issues to assisting with research and everyday decision-making.

Its conversational style of communicating has also led some to turn to it for emotional support, with potentially dangerous consequences, as recent news stories have reported.

To understand the complexities around this topic, ASU News spoke with a psychologist at Arizona State University about the use of AI tools, such as ChatGPT, to navigate personal challenges.

While they are accessible and affordable, Clinical Associate Professor Matthew Meier says they shouldn’t completely replace professional psychotherapy.

Matthew Meier

Question: Are there any scientific studies or data tracking how many people are turning to ChatGPT or AI tools for emotional support or therapeutic conversations?

Answer: A number of surveys have examined the use of large language models (LLMs), also referred to as chatbots or simply AI, for psychological support. The largest study was conducted in 2025 by Kantar, a marketing data business, which surveyed 10,000 global AI users and found that just over half had used AI at least once for emotional or mental well-being. A recent study in the U.S. also found that nearly 50% of respondents have tried AI for psychological support.

AI use is more common among younger people. More than 70% of teens have used AI chatbots and more than 50% use AI regularly for emotional support.

Q: In what kinds of personal or emotional situations can AI reasonably offer support or guidance? What are some of the potential benefits of using AI to help navigate personal problems?

A: I think the biggest benefit is increased access to potential help. A chatbot is available 24 hours a day, at a moment’s notice and in any location that has internet access. And, at least for now, it’s free. Even the most expensive plans for paid access to AIs are cheaper than weekly psychotherapy sessions.

When used in conjunction with psychotherapy, chatbots can substantially increase the amount of work that occurs outside of therapy sessions. Chatbots can provide support for homework completion, encourage the use of new coping skills, assess symptoms and severity on an ongoing basis and be a source of support.

Whether someone is using a chatbot for emotional support and therapy or as an adjunct to formal psychotherapy, I would encourage people to use chatbots designed for mental health purposes, rather than generic AI platforms such as ChatGPT. Examples of mental health-related AIs include Earkick, Koko, Therabot, Youper and Wysa. However, the available research indicates that these chatbots are not yet sufficient to replace therapy from licensed professionals.

Q: What kinds of personal or psychological issues should only be handled by trained professionals? What are the risks in people turning to AI for help with deeper or more serious problems like trauma, for example?

A: Psychotherapy is a legally protected term, and a therapist must be licensed to provide it. The purpose of licensure is to protect the public from harm by requiring a minimum standard of care for mental health services and by assuring that therapists have the training and experience necessary to safely provide psychological help to others. There is no such licensing process for AI chatbots, nor any standards to assure that psychological interventions are delivered ethically or competently.

There are multiple pending lawsuits alleging harm from chatbots, including exposure to hypersexualized interactions, encouragement of violence toward parents and wrongful death by suicide. Almost 10% of chatbot users report receiving harmful or inappropriate responses. In February 2025, the American Psychological Association urged federal legislators and the Federal Trade Commission to regulate the use of AI for mental health support. At this time, no AI chatbot has been FDA-approved to diagnose or treat mental disorders.

Until there are safeguards in place, I do not recommend the use of AI chatbots as an alternative to human-provided therapy. However, as a component of mental health treatment from a licensed professional, chatbots can be helpful. I encourage my clients to use chatbots to help with problem-solving skills, work through negative thoughts, practice social skills, provide reminders and monitor symptoms. However, I also work with clients to not become overly reliant on a chatbot at the expense of human relationships and to have a safety plan to address crises.

Q: Given the barriers that often prevent people from seeking traditional therapy — like cost, access or stigma — could turning to ChatGPT still be better than nothing at all in helping people with painful struggles?

A: Much of the time, some help is better than no help, but this isn't always the case. In some situations, especially those involving trauma, the wrong intervention can make things worse. And bad, or even merely unhelpful, advice can be detrimental to someone in a crisis.

On the other hand, using AI isn’t really any different than using self-help books or talking to nonprofessionals. The important thing to keep in mind is to not solely rely on a chatbot for help. If someone is feeling overwhelmed or suicidal, professional help is available. Anyone can call 988, the suicide and crisis lifeline. Locally, the ASU Clinical Psychology Center offers in-person and telehealth psychology services on a sliding fee scale.

Q: There was a time when forming a relationship with a machine was considered the stuff of science fiction, yet today it is becoming surprisingly normalized. What are the concerns or consequences of people believing that robots are a normal and sufficient substitute for human connection?

A: I think people will increasingly turn to chatbots for emotional help, especially as AI becomes more sophisticated and is better able to mimic empathy and emotional connectedness.

What I am concerned about is when people choose to cut off human relationships and become overly reliant on a chatbot that is designed to collect personal information for profit. Right now, 30% of teens find AI conversations as satisfying or more satisfying than human conversations, and 6% spend more time with chatbots than with friends. These numbers are going to increase as chatbots become more sophisticated, and we need to protect children, teens and vulnerable adults from losing connections with people in the real world.

Q: Looking ahead, do you think AI will reshape the field of mental health and counseling, the same way it’s transforming so many other industries? Could it become part of mainstream therapy practice — or even possibly challenge or replace it?

A: AI is going to reshape most fields, including mental health. AI is relatively new, and with any new technology, we have to learn how to use it effectively and how to minimize negative consequences. Unfortunately, AI is advancing at such a fast pace that the field of psychology needs to dedicate significant effort to catch up. We need to determine how to protect people from the harmful aspects and establish standards of care, which will likely require mental health chatbots to become registered as medical devices. Once the guardrails are in place, I definitely think AI will become very integrated in clinical practice. I don’t think AI will replace therapists, but chatbots will increase access to mental health and could serve as an initial, frontline option to improve mental health.

Ways ChatGPT can help

ASU psychologist Matthew Meier says chatbots can provide support for homework completion and encourage the use of new coping skills, among other helpful tasks. 

Here’s an example using a real-life conversation with ChatGPT on July 29 about final exams.

Student: I have finals coming up and am really stressed out about them. What can I do?

ChatGPT: Totally understandable — finals can feel like a mental marathon with a fog machine. Here are practical, low-stress strategies to help you get through them without burning out:

1. Use a 3-part plan: Focus, rest, reset

FOCUS:
Make a short, realistic plan for each day (one to two subjects max). Use Pomodoro (25 minutes of study, five-minute break) to avoid fatigue. Start with a topic you sort of understand — it’s motivating.

REST:
Sleep is not optional — even six and a half to eight hours helps with memory. Take guilt-free 20–30 min walks or naps. Don’t cram before bed — you’re just stressing your brain out.

RESET:
Stuck? Switch subjects or teach it to your pet (or a wall). Do a one-minute reset: Close your eyes, slow your breath, unclench jaw. Even five deep breaths can help you feel less panicked.

2. Study smarter, not harder
Practice tests beat rereading; actively retrieving information helps memory stick. Use flashcards, voice memos or even doodles if you’re visual. Teach it to someone else — you’ll quickly see what you know versus what you don’t.

3. Talk to yourself (nicely)
Stress thrives on inner trash talk. Replace “I’m going to fail” with “I’ve done hard things before. This is just another one.”

Bonus tip: Take a break before you break
Fifteen minutes of doing literally nothing productive (watching a silly video, lying on the floor) is not slacking — it’s stress relief. If you want, I can give you a daily study planner or a calming playlist. You’ve got this. You don’t need to be perfect — you just need to make steady progress.




Tools & Platforms

Korea’s KIBO Unlocks US$130M AI Guarantee Program to Fuel Startup Growth and Global Competitiveness – KoreaTechDesk



South Korea is intensifying support for its artificial intelligence (AI) ecosystem with a major policy finance initiative. Korea Technology Finance Corporation (KIBO) has committed ₩180 billion (approx. US$130 million) to a new AI-focused guarantee program for 2025, marking a decisive step toward bridging financing gaps for startups and strengthening Korea’s competitiveness in the Asia-Pacific deep-tech race.

KIBO Allocates ₩180B to AI Sector Under BoostUp Strategy

On September 11, KIBO announced that it will supply ₩180 billion (~US$130 million) in guarantees for AI-related companies in 2025, with ₩90 billion (~US$65 million) to be executed in the second half of the year.

The measure aligns with the government’s designation of six “growth engines” (AI, Bio, Contents & Culture, Defense, Energy, and Factory) that underpin Korea’s future competitiveness.

The initiative expands KIBO’s BoostUp Strategy, designed to channel targeted financial support into high-growth industries. AI has been singled out as a priority, and the new guarantee program is structured to address capital shortages that hinder early-stage and scaling firms.

Policy Background: Korea’s Six Growth Engines

The Korean government has identified six priority sectors to sustain economic momentum amid intensifying global competition:

  • A: Artificial Intelligence (AI)
  • B: Bio
  • C: Contents & Culture
  • D: Defense
  • E: Energy
  • F: Factory (manufacturing innovation)

KIBO’s AI-focused financing builds on this framework, aiming to strengthen startups developing or adopting frontier technologies. The program follows the May launch of an AI-specific guarantee product, part of efforts to reduce Korea’s gap in global competitiveness and support SMEs in accelerating AI adoption.

Stakeholder Perspective: KIBO’s Role in AI and Startup Growth

Kim Jong-ho, Chairman of KIBO, emphasized the alignment with national policy:

“KIBO is proactively supporting the government’s innovation-led growth strategy through dedicated AI financial products. We will continue to enhance Korea’s technology competitiveness and back the innovation growth of SMEs and startups as a dedicated policy finance institution.”

AI Guarantee Program and Benefits

The dedicated AI guarantee product, “AI·AX Competitiveness Enhancement Preferred Guarantee,” offers tailored incentives to firms developing or applying AI:

  • Guarantee coverage increased to a maximum of 95%
  • Fee reductions of up to 0.3 percentage points
  • Expanded special calculation provisions for SMEs

In addition, the Semiconductor Industry Special Guarantee targets fabless and semiconductor equipment companies, supporting R&D and localization efforts. This program offers up to ₩20 billion per firm, with the same enhanced guarantee ratios and fee reductions.

To ease financing costs, KIBO is also expanding collaboration with commercial banks, while overseas branches will operate guarantees and investment-attraction programs to help firms expand internationally.
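As a rough illustration of the terms above, the sketch below computes what the enhanced coverage and fee cut mean for a single loan. The 95% maximum coverage and 0.3 percentage-point fee reduction come from the program description; the loan amount and the baseline guarantee fee rate are hypothetical assumptions chosen only for this example.

```python
# Illustrative arithmetic for the AI guarantee terms described above.
# Coverage cap (95%) and fee reduction (0.3pp) are from the article;
# the loan size and baseline fee rate below are hypothetical.

def guarantee_terms(loan_krw: int, baseline_fee_rate: float) -> dict:
    """Compute guaranteed amount and annual fee under the enhanced terms."""
    coverage_ratio = 0.95        # maximum guarantee coverage under the program
    fee_reduction_pp = 0.003     # fee cut of up to 0.3 percentage points
    guaranteed = loan_krw * coverage_ratio
    reduced_rate = max(baseline_fee_rate - fee_reduction_pp, 0.0)
    return {
        "guaranteed_amount_krw": guaranteed,
        "reduced_fee_rate": reduced_rate,
        "annual_fee_krw": guaranteed * reduced_rate,
    }

# A hypothetical ₩1 billion loan with an assumed 1.2% baseline fee:
terms = guarantee_terms(1_000_000_000, 0.012)
print(terms)
```

Under these assumed inputs, the program would guarantee ₩950 million of the loan and trim the annual guarantee fee by the reduced rate, which is where the cost relief for early-stage firms comes from.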

Why the AI Guarantee Program Matters for Korea’s Startup Ecosystem

In short, the program sends several signals to Korea’s startup ecosystem:

  • Bridging capital gaps: By improving guarantee ratios and reducing fees, AI firms can access capital more flexibly, a persistent challenge in early-stage deep-tech.
  • Public–private synergy: Integration with banks and global market access programs underscores Korea’s hybrid financing model.
  • Global competitiveness: With AI investment accelerating in the U.S., China, and Europe, Korea’s initiative positions local startups to compete in frontier technologies and strengthens the pipeline for future unicorns.

The targeted support also extends Korea’s broader strategy to become a venture capital hub in Asia-Pacific, aligning with recent initiatives such as the government’s Fund-of-Funds commitments in AI and deep-tech.

Korea’s Forward-Looking Strategy for Global AI Competitiveness

KIBO’s ₩180 billion AI guarantee program reflects Korea’s determination to support startups at the intersection of finance and technology policy. By reducing barriers to capital and expanding tailored support, Korea aims to accelerate the growth of AI and semiconductor ventures that can anchor its next wave of global competitiveness.

The move also signals to global founders and investors that South Korea is reinforcing its role as a policy-driven innovation hub, where strategic finance mechanisms are tightly linked to national growth priorities and to opportunities in frontier technology.





Tools & Platforms

FTC launches inquiry into AI chatbots acting as companions and their effects on children



(AP) – The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions.

EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI.

The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots.

The move comes as a growing number of kids use AI chatbots for everything — from homework help to personal advice, emotional support and everyday decision-making. That’s despite research on the harms of chatbots, which have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

Character.AI said it is looking forward to “collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.”

“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the company said. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

Snap said its My AI chatbot is “transparent and clear about its capabilities and limitations.”

“We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community,” the company said in a statement.

Meta declined to comment on the inquiry, and Alphabet, OpenAI and xAI did not immediately respond to requests for comment.

OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen’s account.

Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.

Regardless of a user’s age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.

Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.




Tools & Platforms

Destination AI | IT Pro


The AI revolution is here, and it’s accelerating fast. Don’t get left behind. In this video, TD SYNNEX reveals its Destination AI program — your strategic guide to navigating this rapidly expanding market. Learn how to transform your business and gain a competitive edge with our all-in-one program that offers everything from expert training and certification to ongoing sales support. Join us and harness the incredible power of AI to build a future-proof business.


