AI Research
Should You Use Artificial Intelligence (AI) as Your Therapist?

As the demand for therapists increases and artificial intelligence (AI) becomes more sophisticated, people are turning to large language models (LLMs) as therapeutic tools. But should you? Experts warn to proceed with caution.
People who may benefit from therapy and mental health care often face barriers to accessing it. For many, chatting with an AI program might be easier to access and more affordable than a human therapist. You can talk with the chatbot as often as you like and from anywhere, but mental health chatbots have limitations you should know about.
If you or someone you know is experiencing a mental health crisis, always reach out to a human resource and do not turn to AI for help. Call or text 988, the crisis line, for immediate mental health support.
[READ: How to Find the Right Mental Health Counselor for You]
Human vs. AI Therapy
LLMs can learn patterns in language and replicate them, and AI can even be trained on different therapy techniques, such as cognitive behavioral therapy or CBT.
Benefits of human interaction
However, while AI may be able to learn language patterns, it is incapable of delivering psychotherapy or talk therapy because the core of therapy is a human-to-human interaction, says Dr. Dave Rabin, a board-certified psychiatrist and translational neuroscientist.
“Most therapy, like 90 percent, is just meeting a fellow human being where they are in that moment, and just making them feel heard and seen and not judged,” says Rabin.
AI lacks the ability to spot nuances in tone, behavior, body language and eye contact that Rabin says are essential to therapy.
Reinforcement vs. gentle challenges
The American Psychological Association (APA) seems to agree that AI is not a substitute for human therapy. In February 2025, the APA met with federal regulators and urged legislators to put safeguards in place to protect people from AI chatbots that can affirm users in ways a trained therapist wouldn’t.
Although various AI models operate differently, some are trained to reinforce a user’s worldview or provide overly flattering statements, says Ryan K. McBain, senior policy researcher and adjunct professor of policy analysis at RAND School of Public Policy. This can become a problematic feedback loop for a person who would benefit from the gentle challenges a therapist might provide.
Setting boundaries
Another distinction between human therapists and AI chatbots is the ability to set boundaries. While a chat tool may be designed to use the language of a therapy style such as CBT to engage with you, the chatbot isn’t going to ask you to stop talking and think about what it just said, says Dr. Haiyan Wang, medical director and psychiatrist at Neuro Wellness Spa in Torrance, California.
Instead, there is a financial incentive for many AI programs to keep you engaged.
Wang contrasts the 24/7 access of a therapy chatbot with a human therapist, with whom you have to set appointments. The appointment means a lot because it’s a commitment between the patient and therapist, and it allows both parties to set boundaries, she says.
[Read: How to Prepare for Your First Therapy Session.]
AI Therapy Effectiveness
Research on the effectiveness of AI therapy is very new. A 2025 study in the New England Journal of Medicine examined Therabot, an AI chatbot used for mental health treatment. It’s the first randomized controlled trial to show the effectiveness of an AI therapy bot for treating people with major depressive disorder, generalized anxiety disorder or those at high risk of developing an eating disorder. While users in the trial gave the chatbot high ratings, researchers concluded that more studies with larger groups are still needed to determine effectiveness.
A 2025 Psychiatry Online study evaluated chatbots powered by LLMs to see how AI responded when someone’s suicide risk ranged from low to high. Researchers found that the bots aligned with expert judgment when responding to very low and very high levels of suicide risk, but their responses were inconsistent at the risk levels in between.
Even with promising research, Wang remains very cautious about using AI as therapy or encouraging clients to use it, because AI still cannot replace human therapy.
Rabin says that if you want someone to talk to because you’re feeling lonely, a chatbot might help. But if you’re having a serious mental health crisis or dealing with a mental health diagnosis, the AI bot or character isn’t going to be able to solve that.
[READ: 9 Daily Habits to Boost Your Mental Health: Simple Steps for Boosting Your Well-Being]
Risks of AI Therapy
Experts warn that there are real risks associated with using AI as therapy, especially following reports of teens who died by suicide after interacting with chatbots. In addition, a chatbot cannot provide a referral to a psychiatrist, prescribe medications or offer guidance tailored to your specific mental health situation.
McBain, an author of the Psychiatry Online study, says his main concerns with AI therapy are:
— Unsafe guidance because some chatbots may provide instructions on self-harm, substance use or suicide
— Missed warning signs, such as ambiguous expressions of distress
— Privacy risks that come with sharing deeply personal information without understanding how data are stored and used
A study from the Association for Computing Machinery found that AI chatbots are not effective and can introduce biases and stigmas that could harm someone with mental health challenges. Researchers concluded that there are many concerns about the safety of AI therapy, and that LLMs are not a replacement for therapists.
“When you employ a machine to do something that a human is required to do, you really put people’s lives at risk and their health at risk, and it’s a huge problem,” says Rabin.
AI chatbots and children’s mental health
If you’re a parent of a child dealing with a mental health concern, you may be worried about your child turning to a chatbot for mental health guidance. If you know a child experiencing a mental health crisis, it’s important to get them professional human help immediately.
How Can AI Help with Mental Health?
It might not excel at providing therapy, but there is a role for AI in the mental health world. Some therapists use it to help with session note-taking and administrative tasks. Wang sees AI transcription during sessions as one of its biggest advantages because it allows the therapist to fully focus on interacting without having to shift focus for note-taking.
Rabin says AI is great at rote prediction and response to signs of illness as they come up. An example of this in action is using generative AI to detect when a person has abnormal biometrics or heart rate variability based on data collected from a wearable device. The ability to quickly detect when somebody is highly stressed or about to have a panic attack, he says, gives mental health professionals a chance to intervene.
“AI chatbots are likely to work better with highly structured, skills-based techniques, like practicing behavioral techniques, journaling or guided breathing,” says McBain. That’s because the responses for these are easier to script and validate.
If you’re feeling lonely and just looking for interaction, an AI chatbot may be a source of engaging conversation. However, if you’re in need of mental health advice or you’re in the midst of a mental health crisis, you need to speak to an actual human. Reach out to your health care provider or dial 988 for help via text or phone call.
Should You Use Artificial Intelligence (AI) as Your Therapist? originally appeared on usnews.com
AI Research
‘AI Learning Day’ spotlights smart campus and ecosystem co-creation

When artificial intelligence (AI) can help you retrieve literature, support your research, and even act as a “super assistant”, university education is undergoing a profound transformation.
On 9 September, XJTLU’s Centre for Knowledge and Information (CKI) hosted its third AI Learning Day, themed “AI-Empowered, Ecosystem-Co-created”. The event showcased the latest milestones of the University’s “Education + AI” strategy and offered in-depth discussions on the role of AI in higher education.
In her opening remarks, Professor Qiuling Chao, Vice President of XJTLU, said: “AI offers us an opportunity to rethink education, helping us create a learning environment that is fairer, more efficient and more personalised. I hope today’s event will inspire everyone to explore how AI technologies can be applied in your own practice.”
Professor Qiuling Chao
In his keynote speech, Professor Youmin Xi, Executive President of XJTLU, elaborated on the University’s vision for future universities. He stressed that future universities would evolve into human-AI symbiotic ecosystems, where learning would be centred on project-based co-creation and human-AI collaboration. The role of educators, he noted, would shift from transmitters of knowledge to mentors for both learning and life.
Professor Youmin Xi
At the event, Professor Xi’s digital twin, created by the XJTLU Virtual Engineering Centre in collaboration with the team led by Qilei Sun from the Academy of Artificial Intelligence, delivered Teachers’ Day greetings to all staff.
(Teachers’ Day message from President Xi’s digital twin)
“Education + AI” in diverse scenarios
This event also highlighted four case studies from different areas of the University. Dr Ling Xia from the Global Cultures and Languages Hub suggested that in the AI era, curricula should undergo de-skilling (assigning repetitive tasks to AI), re-skilling, and up-skilling, thereby enabling students to focus on in-depth learning in critical thinking and research methodologies.
Dr Xiangyun Lu from International Business School Suzhou (IBSS) demonstrated how AI teaching assistants and the University’s Junmou AI platform can offer students a customised and highly interactive learning experience, particularly for those facing challenges such as information overload and language barriers.
Dr Juan Li from the School of Science shared the concept of the “AI amplifier” for research. She explained that the “double amplifier” effect works in two stages: AI first amplifies students’ efficiency by automating tasks like literature searches and coding. These empowered students then become the second amplifier, freeing mentors from routine work so they can focus on high-level strategy. This human-AI partnership allows a small research team to achieve the output of a much larger one.
Jing Wang, Deputy Director of the XJTLU Learning Mall, showed how AI agents are already being used to support scheduling, meeting bookings, news updates and other administrative and learning tasks. She also announced that from this semester, all students would have access to the XIPU AI Agent platform.
Students and teachers are having a discussion at one of the booths
AI education system co-created by staff and students
The event’s AI interactive zone also drew significant attention from students and staff. From the Junmou AI platform to the E-Support chatbot, and from AI-assisted creative design to 3D printing, 10 exhibition booths demonstrated the integration of AI across campus life.
These innovative applications sparked lively discussions and thoughtful reflections among participants. In an interview, Thomas Durham from IBSS noted that, although he had rarely used AI before, the event was highly inspiring and motivated him to explore its use in both professional and personal life. He also shared his perspective on AI’s role in learning, stating: “My expectation for the future of AI in education is that it should help students think critically. My worry is that AI’s convenience and efficiency might make students’ understanding too superficial, since AI does much of the hard work for them. Hopefully, critical thinking will still be preserved.”
Year One student Zifei Xu was particularly inspired by the interdisciplinary collaboration on display at the event, remarking that it offered her a glimpse of a more holistic and future-focused education.
Dr Xin Bi, XJTLU’s Chief Officer of Data and Director of the CKI, noted that, supported by robust digital infrastructure such as the Junmou AI platform, more than 26,000 students and 2,400 staff are already using the University’s AI platforms. XJTLU’s digital transformation is advancing from informatisation and digitisation towards intelligentisation, with AI expected to empower teaching, research and administration, and to help staff and students leap from knowledge to wisdom.
Dr Xin Bi
“Looking ahead, we will continue to advance the deep integration of AI in education, research, administration and services, building a data-driven intelligent operations centre and fostering a sustainable AI learning ecosystem,” said Dr Xin Bi.
By Qinru Liu
Edited by Patricia Pieterse
Translated by Xiangyin Han
AI Research
Vietnam plans to introduce Law on Artificial Intelligence

This information was announced by Minister of Science and Technology Nguyen Manh Hung at a conference organised by the Ho Chi Minh National Academy of Politics in coordination with the Ministry of Public Security, the Ministry of National Defense, and the Central Theoretical Council in Hanoi on September 15.
Minister of Science and Technology Nguyen Manh Hung. Photo: MST
At the event, experts, businesses, and managers shared their ideas in two discussion sessions. The first session focused on AI power, risks and control, analysing both positive and negative aspects and affirming the need to exploit AI’s potential while managing ethical, safety, security, and social risks.
In the second session, they discussed the national AI development strategy, from vision to action, including a specific roadmap to make AI a pillar of Vietnam’s socioeconomic development.
They agreed that for AI to truly become a driving force for development, Vietnam needs a comprehensive strategy: data infrastructure, high-quality human resources, a complete legal framework, and a dynamic innovation ecosystem. More importantly, AI must be oriented to serve people, protect human rights, and strengthen national security in the digital age.
According to Minister Hung, Vietnam issued its first AI Strategy in 2021, but AI is a rapidly changing field, so the strategy needed to be updated.
By the end of this year, the country will have an updated version of the National AI Strategy and the AI Law. This is not only a legal framework, but also a declaration of national vision. AI must become the country’s intellectual infrastructure, serving the people, developing sustainably, and enhancing national competitiveness.
Regarding open AI technology, Hung emphasised that Vietnam is committed to developing and mastering digital technology, including AI, based on open standards and open-source code. This is also Vietnam’s strategy to develop and master Vietnamese technology, implementing the “Make in Vietnam” programme.
Experts, businesses, and managers share their ideas at the conference. Photo: MST
Regarding creating a domestic AI market, he said that without applications, there will be no market. Without a market, Vietnamese AI enterprises will remain small. Therefore, promoting AI applications in enterprises, in state agencies and key areas is the fastest way to develop AI and create Vietnamese AI enterprises.
“The government will spend more on AI, the Natif Technology Innovation Fund of the Ministry of Science and Technology will spend at least 40 per cent to support AI applications, issue vouchers for small and medium-sized enterprises using Vietnamese AI. The domestic market is the cradle to create Vietnamese AI enterprises,” he noted.
In terms of policy and institutions, he added that Vietnam will issue a national AI ethics code that is in line with international standards but suited to Vietnamese practice. At the same time, the country will develop an AI Law and an AI strategy with core principles including risk-based management, transparency and accountability, putting people at the centre, encouraging domestic AI development and AI autonomy, using AI as a driving force for rapid and sustainable growth, and protecting digital sovereignty based on three pillars: data, infrastructure, and AI technology.
According to the MST, Vietnam’s AI development will have to be based on four important pillars: transparent institutions, modern infrastructure, high-quality human resources, and humane culture.
Time for Vietnam to make breakthroughs
Speaking at the workshop, Luong Tam Quang, Minister of Public Security, said that AI is considered one of the key technologies, a factor that can lead to changes in the global order.
Luong Tam Quang, Minister of Public Security. Photo: MST
He added that with the ability to promote economic growth, optimise production, improve healthcare, innovate education, and enhance social governance capacity, AI helps countries save costs, increase efficiency, and expand knowledge. It is also a resource, and a driving force to affirm the country’s position in the digital age.
According to Minister Quang, Vietnam’s potential for AI development is huge: if widely applied, AI is expected to contribute about $79.3 billion, equivalent to 12 per cent of Vietnam’s GDP, by 2030. Under the leadership of the Party, legal regulations for the development of AI have gradually taken shape.
Prof. Dr. Nguyen Xuan Thang, director of the Ho Chi Minh National Academy of Politics, and chairman of the Central Theoretical Council, said that AI is becoming an indispensable part in the process of establishing a new growth model and the operation, governance, and management of the country’s society and economy.
Prof. Dr. Nguyen Xuan Thang, director of the Ho Chi Minh National Academy of Politics and chairman of the Central Theoretical Council. Photo: MST
However, to turn potential into reality, it requires the support of the entire ecosystem, from national strategies and policies to implementation in businesses, institutes, schools, and the community.
“AI cannot develop sustainably without responsibility, ethics, and a clear humanistic orientation. Technology is the tool, while humans are the goal and the deciding factor, because even if it possesses unlimited power as many people believe, AI is still a product created by humans,” Thang emphasised.
AI Research
Philippine businesses slow to adopt AI, study finds – People Matters Global
