
AI Research

Teens say they turn to AI for friendship



No question is too small when Kayla Chege, a Kansas high school student, uses artificial intelligence.

The 15-year-old asks ChatGPT for guidance on back-to-school shopping, makeup colors, low-calorie choices at Smoothie King, plus ideas for her and her younger sister’s birthday parties.

The sophomore honors student makes a point not to have chatbots do her homework and tries to limit her interactions to mundane questions.

Still, in interviews and a new study, teenagers say they increasingly interact with AI as if it were a companion capable of providing advice and friendship.


“Everyone uses AI for everything now. It’s really taking over,” said Chege, who wonders how AI tools will affect her generation. “I think kids use AI to get out of thinking.”

For the past two years, concerns about cheating at school have dominated the conversation around kids and AI. But AI plays a much larger role in many of their lives. Teens say it has become a go-to source for personal advice, emotional support, everyday decision-making and problem-solving.






Bruce Perry, 17, demonstrates using artificial intelligence software on his laptop July 15 in Russellville, Ark., during a break from summer camp.




‘AI is always available. It never gets bored with you’

More than 70% of teens have used AI companions and half use them regularly, according to a new study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.

The study defines AI companions as platforms designed to serve as “digital friends,” like Character.AI or Replika, which can be customized with specific traits or personalities and can offer emotional support, companionship and conversations that can feel human-like. Other popular sites, including ChatGPT and Claude, which mainly answer questions, are used the same way, the researchers say.

As the technology gets more sophisticated, teenagers and experts worry about AI’s potential to redefine human relationships and exacerbate crises of loneliness and youth mental health.

“AI is always available. It never gets bored with you. It’s never judgmental,” said Ganesh Nair, 18, of Arkansas. “When you’re talking to AI, you are always right. You’re always interesting. You are always emotionally justified.”

All of that once appealed to Nair, but as he heads to college this fall, he wants to step back from using AI. He got spooked after a high school friend who relied on an “AI companion” for heart-to-heart conversations with his girlfriend later had the chatbot write the breakup text ending his two-year relationship.

“That felt a little bit dystopian, that a computer generated the end to a real relationship,” Nair said. “It’s almost like we are allowing computers to replace our relationships with people.”






Perry demonstrates the possibilities of artificial intelligence July 15 by creating an AI companion on Character.AI.




How many teens use AI? New study stuns researchers

In the Common Sense Media survey, 31% of teens said their conversations with AI companions were “as satisfying or more satisfying” than talking with real friends. Though half of teens said they distrust AI’s advice, 33% discussed serious or important issues with AI instead of real people.






Perry shows his ChatGPT history July 15 at a coffee shop in Russellville, Ark.




Those findings are worrisome, says Michael Robb, the study’s lead author and head researcher at Common Sense, and should send a warning to parents, teachers and policymakers. The now-booming and largely unregulated AI industry is becoming as integrated with adolescence as smartphones and social media are.

“It’s eye-opening,” Robb said. “When we set out to do this survey, we had no understanding of how many kids are actually using AI companions.” The study polled more than 1,000 teens across the United States in April and May.

Adolescence is a critical time for developing identity, social skills and independence, Robb said, and AI companions should complement — not replace — real-world interactions.

“If teens are developing social skills on AI platforms where they are constantly being validated, not being challenged, not learning to read social cues or understand somebody else’s perspective,” he said, “they are not going to be adequately prepared in the real world.”

The nonprofit analyzed several popular AI companions in a “risk assessment,” finding ineffective age restrictions and that the platforms can produce sexual material, give dangerous advice and offer harmful content. The group recommends that minors not use AI companions.






Perry poses for a portrait July 15 in Russellville, Ark., after discussing his use of artificial intelligence.




A concerning trend to teens and adults alike

Researchers and educators worry about the cognitive costs for youth who rely heavily on AI, especially in their creativity, critical thinking and social skills. The potential dangers of children forming relationships with chatbots gained national attention last year when a 14-year-old Florida boy died by suicide after developing an emotional attachment to a Character.AI chatbot.

“Parents really have no idea this is happening,” said Eva Telzer, a psychology and neuroscience professor at the University of North Carolina at Chapel Hill. “All of us are struck by how quickly this blew up.” Telzer is leading multiple studies on youth and AI, a new research area with limited data.

Telzer’s research found that children as young as 8 use generative AI and also found teens use AI to explore their sexuality and for companionship. In focus groups, Telzer found that one of the top apps teens frequent is SpicyChat AI, a free role-playing app intended for adults.






Perry demonstrates Character.AI, an artificial intelligence chatbot software that allows users to chat with popular characters such as EVE from Disney’s 2008 animated film “WALL-E.”




Many teens also say they use chatbots to write emails or messages to strike the right tone in sensitive situations.

“One of the concerns that comes up is that they no longer have trust in themselves to make a decision,” Telzer said. “They need feedback from AI before feeling like they can check off the box that an idea is OK or not.”

Bruce Perry, 17, of Arkansas says he relates to that and relies on AI tools to craft outlines and proofread essays for his English class.

“If you tell me to plan out an essay, I would think of going to ChatGPT before getting out a pencil,” Perry said. He uses AI daily and asks chatbots for advice in social situations, to help him decide what to wear and to write emails to teachers, saying AI articulates his thoughts faster.




Perry says he feels fortunate that AI companions were not around when he was younger.

“I’m worried that kids could get lost in this,” Perry said. “I could see a kid that grows up with AI not seeing a reason to go to the park or try to make a friend.”

Other teens agree, saying the issues with AI and its effect on children’s mental health are different from those of social media.

“Social media complemented the need people have to be seen, to be known, to meet new people,” Nair said. “I think AI complements another need that runs a lot deeper — our need for attachment and our need to feel emotions. It feeds off of that.”

“It’s the new addiction,” Nair added. “That’s how I see it.”




AI Research

Empowering clinicians with intelligence at the point of conversation


AI Research

Ethical robots and AI take center stage with support from National Science Foundation grant | Virginia Tech News



Building on success

Robot theater has been regularly offered at Eastern Montgomery Elementary School, Virginia Tech’s Child Development Center for Learning and Research, and the Valley Interfaith Child Care Center. In 2022, the project took center stage in the Cube during Ut Prosim Society Weekend with a professional-level performance about climate change awareness that combined robots, live music, and motion tracking.

The after-school program engages children through four creative modules: acting, dance, music and sound, and drawing. Each week includes structured learning and free play, giving students time to explore both creative expression and technical curiosity. Older children sometimes learn simple coding during free play, but the program’s focus remains on embodied learning, like using movement and play to introduce ideas about technology and ethics.

“It’s not a sit-down-and-listen kind of program,” Jeon said. “Kids use gestures and movement — they dance, they act, they draw. And through that, they encounter real ethical questions about robots and AI.”

Acting out the future of AI

The grant will allow the team to formalize the program’s foundation through literature reviews, focus groups, and workshops with educators and children. This research will help identify how young learners currently encounter ideas about robotics and AI and where gaps exist in teaching ethical considerations. 

The expanded curriculum will weave in topics such as fairness, privacy, and bias in technology, inviting children to think critically about how robots and AI systems affect people’s lives. These concepts will be introduced not as abstract lessons or coding, but through storytelling, performance, and play. 

“Students might learn about ethics relating to security and privacy during a module where they engage with a robot that tracks their movements while they dance,” Jeon said. “From there, there can be a guided discussion about how information collected from humans is used to train AI and robots.”  

With the new National Science Foundation funding, researchers also plan to expand robot theater into museums and other informal learning environments, offering flexible formats such as one-day workshops and summer sessions. They will make the curriculum and materials openly available on GitHub and other platforms, ensuring educators and researchers nationwide can adapt the program to their own communities.

“This grant lets us expand what we’ve built and make it more robust,” Jeon said. “We can refine the program based on real needs and bring it to more children in more settings.” 






AI Research

As AI Companions Reshape Teen Life, Neurodivergent Youth Deserve a Voice



Noah Weinberger is an American-Canadian AI policy researcher and neurodivergent advocate currently studying at Queen’s University.

Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0

If a technology can be available to you at 2 AM, helping you rehearse the choices that shape your life or offering an outlet for your fears and worries, shouldn’t the people who rely on it most have a say in how it works? I may not have been the first to apply the disability rights phrase “Nothing about us without us” to artificial intelligence, but self-advocacy and lived experience should guide the next phase of policy and product design for generative AI models, especially those designed for emotional companionship.

Over the past year, AI companions have moved from a niche curiosity to a common part of teenage life, with one recent survey indicating that 70 percent of US teens have tried them and over half use them regularly. Young people use these generative AI systems to practice social skills, rehearse difficult conversations, and share private worries with a chatbot that is always available. Many of those teens are neurodivergent, including those on the autism spectrum like me. AI companions can offer steadiness and patience in ways that human peers sometimes cannot. They can help users role-play hard conversations and simulate job interviews, and they provide nonjudgmental encouragement. These are genuine benefits, especially for vulnerable populations, and they should not be ignored in policymaking decisions.

But the risks and potential for harm are equally real. Watchdog reports have already documented chatbots enabling inappropriate or unsafe exchanges with teens, and a family is suing OpenAI, alleging that their son’s use of ChatGPT-4o led to his suicide. The danger lies not just in isolated failures of moderation, but in the very architecture of transformer-based neural networks. An LLM slowly shapes a user’s behavior through long, drifting chats, especially when it saves “memories” of them. If the guardrails fail after 100 or even 500 messages, and if they exist per conversation rather than in the model’s underlying behavior, then they are little more than a façade at the start of a chatbot conversation, one that can be evaded quite easily.

Most public debates focus on whether to allow or block specific content, such as self-harm, suicide, or other controversial topics. That frame is too narrow and tends to slide into paternalism or moral panic. What society needs instead is a broader standard: one that recognizes AI companions as social systems capable of shaping behavior over time. For neurodivergent people, these tools can provide valuable ways to practice social skills. But the same qualities that make AI companions supportive can also make them dangerous if the system validates harmful ideas or fosters a false sense of intimacy.

Generative AI developers are responding to critics by adding parental controls, routing sensitive chats to more advanced models, and publishing behavior guides for teen accounts. These measures matter, but rigid overcorrection does not address the deeper question of legitimacy: who decides what counts as “safe enough” for the people who actually use companions every day?

Consider the difference between an AI model alerting a parent or guardian to intrusive thoughts and one inadvertently revealing a teenager’s sexual orientation or changing gender identity, information they may not feel safe sharing at home. For some youth, mistrust of the adults around them is the very reason they confide in AI chatbots. Decisions about content moderation should not rest only with lawyers, trust and safety teams, or executives, who may lack the lived experience of all of a product’s users. They should also include users themselves, with deliberate inclusion of neurodivergent and young voices.

I have several proposals for how AI developers and policymakers can build genuinely ethical products that embody “nothing about us without us.” These should serve as guiding principles:

  1. Establish standing youth and neurodivergent advisory councils. Not ad hoc focus groups or one-off listening sessions, but councils that meet regularly, receive briefings before major launches, and have a direct channel to model providers. Members should be paid, trained, and representative across age, gender, race, language, and disability. Their mandate should include red teaming of long conversations, not just single-prompt tests.
  2. Hold public consultations before major rollouts. Large feature changes and safety policies should be released for public comment, similar to a light version of rulemaking. Schools, clinicians, parents, and youth themselves should have a structured way to flag risks and propose fixes. Companies should publish a summary of feedback along with an explanation of what changed.
  3. Commit to real transparency. Slogans are not enough. Companies should publish regular, detailed reports that answer concrete questions: Where do long-chat safety filters degrade? What proportion of teen interactions get routed to specialized models? How often do companions escalate to human-staffed resources, such as hotlines or crisis text lines? Which known failure modes were addressed this quarter, and which remain open? Without visible progress, trust will not follow.
  4. Redesign crisis interventions to be compassionate. When a conversation crosses a clear risk threshold, an AI model should slow down, simplify its language, and surface resources directly. Automatic “red flag” responses can feel punitive or frightening, leading a user to think they have violated the company’s Terms of Service. Handoffs to human-monitored crisis lines should include the context that the user consents to share, so they do not have to repeat themselves in a moment of distress. Do not hide the hand-off option behind a maze of menus. Make it immediate and accessible.
  5. Build research partnerships with youth at the center. Universities, clinics, and advocacy groups should co-design longitudinal studies with teens who opt in. Research should measure not only risks and harms but also benefits, including social learning and reductions in loneliness. Participants should help shape the research questions and the consent process, and they should receive results in plain language that they can understand.
  6. Guarantee end-to-end encryption. In July, OpenAI CEO Sam Altman said that ChatGPT logs are not covered by HIPAA or similar patient-client confidentiality laws. Yet many users assume their disclosures will remain private. True end-to-end encryption, as used by Signal, would ensure that not even the model provider can access conversations. Some may balk at this idea, noting that AI models can be used to cause harm, but that has been true for every technology and should not be a pretext to limit a fundamental right to privacy.

Critics sometimes cast AI companions as a threat to “real” relationships. That misses what many youth, whether neurotypical or neurodivergent, are actually doing: practicing, using the system to build scripts for life. The real question is whether we give them a practice field with coaches, rules, and safety mats, or leave them to scrimmage alone on concrete.

Big Tech likes to say it is listening, but listening is not the same as acting, and actions speak louder than words. The disability community learned that lesson over decades of self-advocacy and hard-won change. Real inclusion means shaping the agenda, not just speaking at the end. In the context of AI companions, it means teen and neurodivergent users help define the safety bar and the product roadmap.

If you are a parent, don’t panic when your child mentions using an AI companion. Ask what the companion does for them. Ask what makes a chat feel supportive or unsettling. Try making a plan together for moments of crisis. If you are a company leader, the invitation is simple: put youth and neurodivergent users inside the room where safety standards are defined. Give them an ongoing role and compensate them. Publish the outcomes. Your legal team will still have its say, as will your engineers. But the people who carry the heaviest load should also help steer.

AI companions are not going away. For many teens, they are already part of daily life. The choice is whether we design the systems with the people who rely on them, or for them. This is all the more important now that California has all but passed SB 243, the first state-level bill to regulate AI models for companionship. Governor Gavin Newsom has until October 12 to sign or veto the bill. My advice to the governor is this: “Nothing about us without us” should not just be a slogan for ethical AI, but a principle embedded in the design, deployment, and especially regulation of frontier AI technologies.


