No question is too small when Kayla Chege, a Kansas high school student, uses artificial intelligence.
The 15-year-old asks ChatGPT for guidance on back-to-school shopping, makeup colors, low-calorie choices at Smoothie King, plus ideas for her and her younger sister’s birthday parties.
The sophomore honors student makes a point not to have chatbots do her homework and tries to limit her interactions to mundane questions.
Still, in interviews and a new study, teenagers say they increasingly interact with AI as if it were a companion capable of providing advice and friendship.
“Everyone uses AI for everything now. It’s really taking over,” said Chege, who wonders how AI tools will affect her generation. “I think kids use AI to get out of thinking.”
For the past two years, concerns about cheating at school have dominated the conversation around kids and AI. Yet AI plays a much larger role in many of their lives. AI, teens say, has become a go-to source for personal advice, emotional support, everyday decision-making and problem-solving.
‘AI is always available. It never gets bored with you’
More than 70% of teens use AI companions and half use them regularly, according to a new study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
The study defines AI companions as platforms designed to serve as “digital friends,” like Character.AI or Replika, which can be customized with specific traits or personalities and can offer emotional support, companionship and conversations that can feel human-like. Other popular sites, including ChatGPT and Claude, which mainly answer questions, are used the same way, the researchers say.
As the technology gets more sophisticated, teenagers and experts worry about AI’s potential to redefine human relationships and exacerbate crises of loneliness and youth mental health.
“AI is always available. It never gets bored with you. It’s never judgmental,” said Ganesh Nair, 18, of Arkansas. “When you’re talking to AI, you are always right. You’re always interesting. You are always emotionally justified.”
All that used to be appealing, but as Nair heads to college this fall, he wants to step back from using AI. He got spooked after a high school friend who relied on an “AI companion” for heart-to-heart conversations with his girlfriend later had the chatbot write the breakup text ending his two-year relationship.
“That felt a little bit dystopian, that a computer generated the end to a real relationship,” Nair said. “It’s almost like we are allowing computers to replace our relationships with people.”
How many teens use AI? New study stuns researchers
In the Common Sense Media survey, 31% of teens said their conversations with AI companions were “as satisfying or more satisfying” than talking with real friends. Though half of teens said they distrust AI’s advice, 33% discussed serious or important issues with AI instead of real people.
Those findings are worrisome, says Michael Robb, the study’s lead author and head researcher at Common Sense, and should send a warning to parents, teachers and policymakers. The now-booming and largely unregulated AI industry is becoming as integrated with adolescence as smartphones and social media are.
“It’s eye-opening,” Robb said. “When we set out to do this survey, we had no understanding of how many kids are actually using AI companions.” The study polled more than 1,000 teens across the United States in April and May.
Adolescence is a critical time for developing identity, social skills and independence, Robb said, and AI companions should complement — not replace — real-world interactions.
“If teens are developing social skills on AI platforms where they are constantly being validated, not being challenged, not learning to read social cues or understand somebody else’s perspective,” he said, “they are not going to be adequately prepared in the real world.”
The nonprofit analyzed several popular AI companions in a “risk assessment,” finding ineffective age restrictions and that the platforms can produce sexual material, give dangerous advice and offer harmful content. The group recommends that minors not use AI companions.
A trend concerning to teens and adults alike
Researchers and educators worry about the cognitive costs for youth who rely heavily on AI, especially in their creativity, critical thinking and social skills. The potential dangers of children forming relationships with chatbots gained national attention last year when a 14-year-old Florida boy died by suicide after developing an emotional attachment to a Character.AI chatbot.
“Parents really have no idea this is happening,” said Eva Telzer, a psychology and neuroscience professor at the University of North Carolina at Chapel Hill. “All of us are struck by how quickly this blew up.” Telzer is leading multiple studies on youth and AI, a new research area with limited data.
Telzer’s research found that children as young as 8 use generative AI and also found teens use AI to explore their sexuality and for companionship. In focus groups, Telzer found that one of the top apps teens frequent is SpicyChat AI, a free role-playing app intended for adults.
Many teens also say they use chatbots to write emails or messages to strike the right tone in sensitive situations.
“One of the concerns that comes up is that they no longer have trust in themselves to make a decision,” Telzer said. “They need feedback from AI before feeling like they can check off the box that an idea is OK or not.”
Bruce Perry, 17, of Arkansas says he relates to that and relies on AI tools to craft outlines and proofread essays for his English class.
“If you tell me to plan out an essay, I would think of going to ChatGPT before getting out a pencil,” Perry said. He uses AI daily and has asked chatbots for advice in social situations, to help him decide what to wear and to write emails to teachers, saying AI articulates his thoughts faster.
Perry says he feels fortunate that AI companions were not around when he was younger.
“I’m worried that kids could get lost in this,” Perry said. “I could see a kid that grows up with AI not seeing a reason to go to the park or try to make a friend.”
Other teens agree, saying the issues with AI and its effect on children’s mental health are different from those of social media.
“Social media complemented the need people have to be seen, to be known, to meet new people,” Nair said. “I think AI complements another need that runs a lot deeper — our need for attachment and our need to feel emotions. It feeds off of that.”
“It’s the new addiction,” Nair added. “That’s how I see it.”
5 ways companies are incorporating AI ethics
As more companies adopt generative artificial intelligence models, AI ethics is becoming increasingly important. Ethical guidelines to ensure the transparent, fair, and safe use of AI are evolving across industries, albeit slowly when compared to the fast-moving technology.
But thorny questions about equity and ethics may force companies to tap the brakes on development if they want to maintain consumer trust and buy-in.
KPMG’s 2024 Generative AI Consumer Trust Survey found that about half of consumers think there is not sufficient regulation of generative AI right now. The lack of oversight tracks with limited trust that institutions—particularly tech companies and the federal government—will ethically develop and implement AI, according to KPMG.
Within the tech industry, ethical initiatives have been set back by a lack of resources and leadership support, according to a paper presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency. Layoffs at major corporations, including Amazon’s streaming platform Twitch, Microsoft, Google, and X, hit employees focused on ethical AI hard, leaving a vacuum.
While nearly 3 in 4 consumers say they trust organizations using GenAI in daily operations, confidence in AI varies between industries and functions. Just over half of consumers trust AI to deliver educational resources and personalized recommendations, compared to less than a third who trust it for investment advice and self-driving cars. Consumers are open to AI-driven restaurant recommendations, but not, it seems, with their money or their lives.
Clear concerns persist around the broader use of a technology that has elevated scams and deepfakes to a new level. The KPMG survey found that the biggest consumer concerns are the spread of misinformation, fake news, and biased content, as well as the proliferation of more sophisticated phishing scams and cybersecurity breaches. As AI grows more sophisticated, these concerns are likely to be amplified as more people are affected, making ethical frameworks for approaching AI all the more essential.
That puts the onus on companies and lawmakers to set ethical guardrails. In May 2024, Colorado became the first state to introduce legislation with provisions for consumer protection and accountability from companies and developers introducing AI systems used in education, financial services, and other critical, high-risk industries.
As other states evaluate similar legislation for consumer and employee protections, companies in particular have the in-the-weeds insight needed to address high-risk situations specific to their businesses. While consumers have set a high bar for companies’ responsible use of AI, the KPMG report also found that organizations can take concrete steps to garner and maintain public trust: education, clear communication and human oversight to catch errors, biases or ethical concerns.
The reality is that the tension between proceeding cautiously to address ethical concerns and moving full speed ahead to capitalize on the competitive advantages of AI will continue to play out in the coming years.
Drata analyzed current events to identify five ways companies are ethically incorporating artificial intelligence in the workplace.
Actively supporting a culture of ethical decision-making
AI initiatives within the financial services industry can speed up innovation, but companies need to take care in protecting the financial system and customer information from criminals. To that end, JPMorgan Chase has a 200-person AI research group, including an ethics team that works on the company’s AI initiatives. The company ranks at the top of the Evident AI Index, which assesses banks’ AI readiness, including a top ranking for transparency in the responsible use of AI.
Development of risk assessment frameworks
The National Institute of Standards and Technology has developed an AI Risk Management Framework that helps companies better plan and grow their AI initiatives. The approach supports companies in identifying the risks posed by AI, defining and measuring ethical activity, and implementing AI systems with fairness, reliability, and transparency. The Vatican is even getting in on the action—it collaborated with the Markkula Center for Applied Ethics at Santa Clara University, a Catholic college in Silicon Valley, to recommend specific steps for companies to navigate AI technologies ethically.
Specialized training in responsible AI usage
Amazon Web Services has developed many tools and guides to help its employees think and act ethically as they develop AI applications. The Responsible AI course, a YouTube series produced by AWS Machine Learning University, serves as an introductory course that covers fairness criteria and methods for mitigating bias. Amazon’s SageMaker Clarify tool helps developers detect bias in AI model predictions.
Communication of AI mission and values
Companies that develop a mission statement around their AI practices clearly communicate their values and priorities to employees, customers, and other company stakeholders. Examples include Dell Technologies’ Principles for Ethical Artificial Intelligence and IBM’s AI ethics, which clarify their approach to AI application development and implementation, publicly setting guiding principles such as “respecting cultural norms, furthering social equality, and ensuring environmental sustainability.”
Implementing an AI ethics board
Companies can create AI ethics advisory boards to help them find and fix the ethical risks around AI tools, particularly systems that produce biased output because they were trained with biased or discriminatory data. SAP has had an AI Ethics Advisory Panel since 2018; it works on current ethical issues and looks ahead to identify potential future problems and solutions. Northeastern University has created an independent AI ethics advisory board to work with companies that prefer not to create their own.