
Using Generative AI for therapy might feel like a lifeline – but there’s danger in seeking certainty in a chatbot | Carly Dober

Tran* sat across from me, phone in hand, scrolling. “I just wanted to make sure I didn’t say the wrong thing,” he explained, referring to a disagreement with his partner. “So I asked ChatGPT what I should say.”

He read the chatbot-generated message aloud. It was articulate, logical and composed – too composed. It didn’t sound like Tran. And it definitely didn’t sound like someone in the middle of a complex emotional conversation about the future of a long-term relationship. Nor did it acknowledge any of Tran’s own behaviours that had contributed to the relationship strain – behaviours he and I had been discussing.

Like many others I’ve seen in therapy, Tran had turned to AI in a moment of crisis. Under immense pressure at work and facing uncertainty in his relationship, he’d downloaded ChatGPT on his phone “just to try it out”. What began as curiosity soon became a daily habit: asking questions, drafting texts and even seeking reassurance about his own feelings. The more Tran used it, the more he began to second-guess himself in social situations, turning to the model for guidance before responding to colleagues or loved ones. He felt strangely comforted, like “no one knew me better”.

His partner, on the other hand, began to feel as though she was talking to someone else entirely.

ChatGPT and other generative AI models present a tempting accessory, or even alternative, to traditional therapy. They’re often free, available 24/7 and can offer customised, detailed responses in real time. When you’re overwhelmed, sleepless and desperate to make sense of a messy situation, typing a few sentences into a chatbot and getting back what feels like sage advice can be very appealing.

But as a psychologist, I’m growing increasingly concerned about what I’m seeing in the clinic: a silent shift in how people are processing distress, and a growing reliance on artificial intelligence in place of human connection and therapeutic support.

AI might feel like a lifeline when services are overstretched – and make no mistake, services are overstretched. Globally, in 2019, one in eight people were living with a mental illness, and there is a dire shortage of trained mental health professionals. In Australia, a growing mental health workforce shortage is limiting access to trained practitioners.

Clinician time is one of the scarcest resources in healthcare. It’s understandable (even expected) that people are looking for alternatives. But turning to a chatbot for emotional support isn’t without risk, especially when the lines between advice, reassurance and emotional dependence become blurred.

Many psychologists, myself included, now encourage clients to build boundaries around their use of ChatGPT and similar tools. Their seductive “always-on” availability and friendly tone can unintentionally reinforce unhelpful behaviours, especially for people with anxiety, OCD or trauma-related issues. Reassurance-seeking, for example, is a key feature of OCD, and ChatGPT, by design, provides reassurance in abundance. It never asks why you’re asking again. It never challenges avoidance. It never says, “Let’s sit with this feeling for a moment, and practise the skills we have been working on.”

Tran often reworded prompts until the model gave him an answer that “felt right”. But this constant tailoring meant he wasn’t just seeking clarity; he was outsourcing emotional processing. Instead of learning to tolerate distress or explore nuance, he sought AI-generated certainty. Over time that made it harder for him to trust his own instincts.

Beyond psychological concerns, there are real ethical issues. Information shared with ChatGPT isn’t protected by the confidentiality standards that bind Ahpra-registered professionals. Although OpenAI states that user data is not used to train its models unless permission is given, the sheer volume of fine print in user agreements often goes unread. Users may not realise how their inputs can be stored, analysed and potentially reused.

There’s also the risk of harmful or false information. These large language models are autoregressive: they predict the next word based on the patterns in what came before. This probabilistic process can produce “hallucinations” – confident, polished answers that are completely untrue.

AI also reflects the biases embedded in its training data. Research shows that generative models can perpetuate and even amplify gender, racial and disability-based stereotypes – not intentionally, but unavoidably. Human therapists, by contrast, bring clinical skills to the room: we notice when a client’s voice trembles, or when their silence might say more than words.

This isn’t to say AI can’t have a place. Like many technological advancements before it, generative AI is here to stay. It may offer useful summaries, psycho-educational content or even support in regions where access to mental health professionals is severely limited. But it must be used carefully, and never as a replacement for relational, regulated care.

Tran wasn’t wrong to seek help. His instincts to make sense of distress and to communicate more thoughtfully were sound. But leaning so heavily on AI meant that his skill development suffered. His partner began noticing a strange detachment in his messages. “It just didn’t sound like you,” she later told him. It turned out it wasn’t.

She also grew frustrated by the lack of accountability in his messages to her, which created further friction and communication problems between them.

As Tran and I worked together in therapy, we explored what led him to seek certainty in a chatbot. We unpacked his fears of disappointing others, his discomfort with emotional conflict and his belief that perfect words might prevent pain. Over time, he began writing his own responses, sometimes messy, sometimes unsure, but authentically his.

Good therapy is relational. It thrives on imperfection, nuance and slow discovery. It involves pattern recognition, accountability and the kind of discomfort that leads to lasting change. A therapist doesn’t just answer; they ask and they challenge. They hold space, offer reflection and walk with you, while also offering up an uncomfortable mirror.

For Tran, the shift wasn’t just about limiting his use of ChatGPT; it was about reclaiming his own voice. In the end he didn’t need a perfect response. He needed to believe that he could navigate life’s messiness with curiosity, courage and care – not perfect scripts.

*Name and identifying details changed to protect client confidentiality
Carly Dober is a psychologist living and working in Naarm/Melbourne
In Australia, support is available at Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and MensLine on 1300 789 978. In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text the Suicide & Crisis Lifeline on 988 or chat at 988lifeline.org





‘Just blame AI’: Trump hints at using artificial intelligence as shield for controversies

US President Donald Trump has suggested that artificial intelligence could become a convenient scapegoat for political controversies, raising concerns about how the technology might be used to deflect accountability.

Speaking at the White House this week, Trump was asked about a viral video that appeared to show a bag being tossed out of a window at the presidential residence. Although officials had already explained it was routine maintenance, Trump dismissed the clip by saying: “That’s probably AI-generated.” He added that the White House windows are sealed and bulletproof, joking that even First Lady Melania Trump had complained about not being able to open them for fresh air.

But Trump went further, framing AI as both a threat and an excuse. “One of the problems we have with AI, it’s both good and bad. If something happens really bad, just blame AI,” he remarked, hinting that future scandals could be brushed aside as artificial fabrications.

This casual dismissal reflects a growing trend in Trump’s relationship with AI. In July, he reposted a fabricated video that falsely depicted former President Barack Obama being arrested in the Oval Office. He also admitted to being fooled by an AI-generated video montage of himself spanning his life, from childhood to the present day.

Experts warn that as deepfake technology becomes increasingly sophisticated, it could destabilise politics by eroding public trust in what is real. If leaders begin to label inconvenient evidence as AI-generated, whether true or not, the result could be a dangerous precedent where accountability becomes optional and facts are endlessly disputed.

For Trump, AI appears to represent both risk and opportunity. While he acknowledges its ability to create “phony things,” he also seems to see it as a ready-made shield against future controversies. In his own words, the solution may be simple: “just blame AI.”




The president blamed AI and embraced doing so. Is it becoming the new ‘fake news’?

Artificial intelligence, apparently, is the new “fake news.”

Blaming AI is an increasingly popular strategy for politicians, among others, seeking to dodge responsibility for something embarrassing. AI isn’t a person, after all. It can’t leak or file suit. And it does make mistakes – a credibility problem that makes it hard to separate fact from fiction in an age of mis- and disinformation.

And when truth is hard to discern, the untruthful benefit, analysts say. The phenomenon is widely known as “the liar’s dividend.”

On Tuesday, President Donald Trump endorsed the practice. Asked about viral footage showing someone tossing something out an upper-story White House window, the president replied, “No, that’s probably AI” — after his press team had indicated to reporters that the video was real.

But Trump, known for insisting the truth is what he says it is, declared himself all in on the AI-blaming phenomenon.

“If something happens that’s really bad,” he told reporters, “maybe I’ll have to just blame AI.”

He’s not alone.

AI is getting blamed — sometimes fairly, sometimes not

On the same day in Caracas, Venezuelan Communications Minister Freddy Ñáñez questioned the veracity of a Trump administration video that the administration said showed a U.S. strike on a vessel in the Caribbean, targeting Venezuela’s Tren de Aragua gang and killing 11. A video of the strike posted to Truth Social shows a long, multi-engine speedboat at sea when a bright flash of light bursts over it. The boat is then briefly seen covered in flames.

“Based on the video provided, it is very likely that it was created using Artificial Intelligence,” Ñáñez said on his Telegram account, describing “almost cartoonish animation.”

Blaming AI can at times be a compliment. (“He’s like an AI-generated player,” tennis player Alexander Bublik said of his U.S. Open opponent Jannik Sinner’s talent on ESPN.) But when used by the powerful, the practice, experts say, can be dangerous.

Digital forensics expert Hany Farid has warned for years about the growing capability of AI “deepfake” images, voices and video to aid fraud or political disinformation campaigns, but he sees a deeper problem.

“I’ve always contended that the larger issue is that when you enter this world where anything can be fake, then nothing has to be real,” said Farid, a professor at the University of California, Berkeley. “You get to deny any reality because all you have to say is, ‘It’s a deepfake.’”

That wasn’t so a decade or two ago, he noted. Trump issued a rare apology (“if anyone was offended”) in 2016 for his comments about touching women without their consent on the notorious “Access Hollywood” tape. His opponent, Democrat Hillary Clinton, said she was wrong to call some of his supporters “a basket of deplorables.”

Toby Walsh, chief scientist and professor of AI at the University of New South Wales in Sydney, said blaming AI leads to problems not just in the digital world but the real world as well.

“It leads to a dark future where we no longer hold politicians (or anyone else) accountable,” Walsh said in an email. “It used to be that if you were caught on tape saying something, you had to own it. This is no longer the case.”

Contemplating the ‘liar’s dividend’

Danielle K. Citron of the Boston University School of Law and Robert Chesney of the University of Texas foresaw the issue in research published in 2019. In it, they describe what they called “the liar’s dividend.”

“If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent—empowering authorities along the way,” they wrote in the California Law Review. “A skeptical public will be primed to doubt the authenticity of real audio and video evidence.”

Polling suggests many Americans are wary about AI. About half of U.S. adults said the increased use of AI in daily life made them feel “more concerned than excited,” according to a Pew Research Center poll from August 2024. Pew’s polling indicates that people have become more concerned about the increased use of AI in recent years.

Most U.S. adults appear to distrust AI-generated information when they know that’s the source, according to a Quinnipiac poll from April. About three-quarters said they could only trust the information generated by AI “some of the time” or “hardly ever.” In that poll, about 6 in 10 U.S. adults said they were “very concerned” about political leaders using AI to distribute fake or misleading information.

They have reason, and Trump has played a sizable role in muddying trust and truth.

Trump’s history of misinformation, and even lies, to suit his narrative predates AI. He’s famous for his use of “fake news,” a term now widely deployed to cast doubt on media reports. Leslie Stahl of CBS’ “60 Minutes” has said that Trump told her off camera in 2016 that he tries to “discredit” journalists so that when they report negative stories, they won’t be believed.

Trump’s claim on Tuesday that AI was behind the White House window video wasn’t his first attempt to blame AI. In 2023, he insisted that the anti-Trump Lincoln Project used AI in a video to make him “look bad.”

In the spot titled “Feeble,” a female narrator taunts Trump. “Hey Donald … you’re weak. You seem unsteady. You need help getting around.” She questions his “manhood,” accompanied by an image of two blue pills. The video continues with footage of Trump stumbling over words.

“The perverts and losers at the failed and once-disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden,” Trump posted on Truth Social.

The Lincoln Project told The Associated Press at the time that AI was not used in the spot.

___

Associated Press writers Ali Swenson in New York, Matt O’Brien in Providence, Rhode Island, Linley Sanders in Washington and Jorge Rueda in Caracas, Venezuela, contributed to this report.






UAPB librarian leads session on artificial intelligence in STEM fields

University of Arkansas at Pine Bluff librarian Shenise McGhee presented on AI-powered smart tools at the 2025 STEM Librarians South Conference hosted by the University of Texas at Arlington.

This annual conference, held virtually and in person, brings together librarians of science, technology, engineering and math from across the United States and beyond to exchange ideas, strategies and innovations in areas such as library instruction, reference services, collection development and outreach, according to a news release.

As a featured panelist during the virtual portion of the July conference, McGhee presented a session titled “Smart Tools: AI-Powered Pathways to STEM Student Success.”

She explored how advancements in artificial intelligence and machine learning are reshaping education, especially in STEM fields, where data-driven decision-making and adaptive learning are increasingly vital. She emphasized how STEM librarians can harness AI tools to enhance student learning, improve academic performance and promote equity in STEM education.

McGhee examined emerging technologies, including AI tutoring systems, intelligent learning platforms and personalized machine learning applications. She demonstrated how these tools can create inclusive learning environments by adapting instruction to meet individual student needs, delivering real-time feedback, automating instructional tasks and predicting student challenges before they arise.

Her presentation also emphasized the critical role of STEM librarians in supporting the ethical use of AI, teaching students how to engage with AI tools critically and effectively in their coursework, and providing access to the digital resources that empower student success. Attendees were offered practical strategies, case studies and best practices for integrating AI into library services and student support initiatives.

In addition, McGhee spotlighted the UAPB STEM Academy, a five-to-six-week summer residential program designed to prepare incoming STEM majors for the academic rigor of college and life on campus. She discussed how the library collaborates with other campus departments to support students through targeted library instruction and services that contribute to academic success.

“STEM librarians are uniquely positioned to guide students through the evolving AI-driven educational landscape,” McGhee said. “By integrating smart tools and inclusive practices, we not only improve outcomes, but we also empower students to thrive.”

For more information, visit:

John Brown Watson Memorial Library

STEM Academy



