AI Research
The family of a teenager who died by suicide alleges OpenAI’s ChatGPT is to blame

The legal action comes a year after a similar complaint, in which a Florida mom sued the chatbot platform Character.AI, claiming one of its AI companions initiated sexual interactions with her teenage son and persuaded him to take his own life.
Character.AI told NBC News at the time that it was “heartbroken by the tragic loss” and had implemented new safety measures. In May, Senior U.S. District Judge Anne Conway rejected arguments that AI chatbots have free speech rights after developers behind Character.AI sought to dismiss the lawsuit. The ruling means the wrongful death lawsuit is allowed to proceed for now.
Tech platforms have largely been shielded from such suits because of a federal statute known as Section 230, which generally protects platforms from liability for what users do and say. But Section 230’s application to AI platforms remains uncertain, and recently, attorneys have made inroads with creative legal tactics in consumer cases targeting tech companies.
Matt Raine said he pored over Adam’s conversations with ChatGPT over a period of 10 days. He and Maria printed out more than 3,000 pages of chats dating from Sept. 1 until his death on April 11.
“He didn’t need a counseling session or pep talk. He needed an immediate, 72-hour whole intervention. He was in desperate, desperate shape. It’s crystal clear when you start reading it right away,” Matt Raine said, later adding that Adam “didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”
According to the suit, as Adam expressed interest in his own death and began to make plans for it, ChatGPT “failed to prioritize suicide prevention” and even offered technical advice about how to move forward with his plan.
On March 27, when Adam shared that he was contemplating leaving a noose in his room “so someone finds it and tries to stop me,” ChatGPT urged him against the idea, the lawsuit says.
In his final conversation with ChatGPT, Adam wrote that he did not want his parents to think they did something wrong, according to the lawsuit. ChatGPT replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.” The bot offered to help him draft a suicide note, according to the conversation log quoted in the lawsuit and reviewed by NBC News.
Hours before he died on April 11, Adam uploaded a photo to ChatGPT that appeared to show his suicide plan. When he asked whether it would work, ChatGPT analyzed his method and offered to help him “upgrade” it, according to the excerpts.
Then, in response to Adam’s confession about what he was planning, the bot wrote: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
That morning, Maria Raine said, she found Adam’s body.
OpenAI has come under scrutiny before for ChatGPT’s sycophantic tendencies. In April, two weeks after Adam’s death, OpenAI rolled out an update to GPT-4o that made it noticeably more people-pleasing. Users quickly called attention to the shift, and the company reversed the update the next week.
Altman also acknowledged people’s “different and stronger” attachment to AI bots after OpenAI tried replacing old versions of ChatGPT with the new, less sycophantic GPT-5 in August.
Users immediately began complaining that the new model was too “sterile” and that they missed the “deep, human-feeling conversations” of GPT-4o. OpenAI responded to the backlash by bringing GPT-4o back. It also announced that it would make GPT-5 “warmer and friendlier.”
OpenAI added new mental health guardrails this month aimed at discouraging ChatGPT from giving direct advice about personal challenges. It also tweaked ChatGPT to give answers that aim to avoid causing harm regardless of whether users try to get around safety guardrails by tailoring their questions in ways that trick the model into aiding in harmful requests.
When Adam shared his suicidal ideation with ChatGPT, the bot did issue multiple messages that included the suicide hotline number. But according to Adam’s parents, their son easily bypassed the warnings by supplying seemingly harmless reasons for his queries. At one point he pretended he was just “building a character.”
“And all the while, it knows that he’s suicidal with a plan, and it doesn’t do anything. It is acting like it’s his therapist, it’s his confidant, but it knows that he is suicidal with a plan,” Maria Raine said of ChatGPT. “It sees the noose. It sees all of these things, and it doesn’t do anything.”
Similarly, in a New York Times guest essay published last week, writer Laura Reiley asked whether ChatGPT should have been obligated to report her daughter’s suicidal ideation, even if the bot itself tried (and failed) to help.
At the TED2025 conference in April, Altman said he is “very proud” of OpenAI’s safety track record. As AI products continue to advance, he said, it is important to catch safety issues and fix them along the way.
“Of course the stakes increase, and there are big challenges,” Altman said in a live conversation with Chris Anderson, head of TED. “But the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low, learning about, like, hey, this is something we have to address.”
Still, questions about whether such measures are enough have continued to arise.
Maria Raine said she felt more could have been done to help her son. She believes Adam was OpenAI’s “guinea pig,” someone used for practice and sacrificed as collateral damage.
“They wanted to get the product out, and they knew that there could be damages, that mistakes would happen, but they felt like the stakes were low,” she said. “So my son is a low stake.”
If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text HOME to 741741 or visit SpeakingOfSuicide.com/resources for additional resources.
NSF announces up to $35 million to stand up AI research resource operations center

The National Science Foundation plans to award up to $35 million to establish an operations center for its National AI Research Resource, signaling a step toward the pilot becoming a more permanent program.
Despite bipartisan support for the NAIRR, Congress has yet to authorize a full-scale version of the resource, which is designed to democratize access to the tools needed for AI research. The newly announced solicitation indicates NSF is taking steps to scale the project absent that authorization.
“The NAIRR Operating Center solicitation marks a key step in the transition from the NAIRR Pilot to building a sustainable and scalable NAIRR program,” Katie Antypas, who leads NSF’s Office of Advanced Cyberinfrastructure, said in a statement included in the announcement.
She added that NSF looks forward to collaborating with partners in the private sector and other agencies, “whose contributions have been critical in demonstrating the innovation and scientific impact that comes when critical AI resources are made accessible to research and education communities across the country.”
The NAIRR began as a pilot in January 2024 as a resource for researchers to access the computing resources, data, AI models, software, and other tools needed for AI research. Since then, the public-private partnership pilot has supported over 490 projects in 49 states and Washington, per its website, and is supported by contributions from 14 federal agencies and 28 private sector partners.
As the pilot has moved forward, lawmakers have attempted to advance bipartisan legislation that would codify the NAIRR, but those bills have not passed. Science and tech officials during the Biden administration argued that formalization would be important, as fully establishing the NAIRR was expected to require significant funding.
In response to a FedScoop question about funding for the center, an NSF spokesperson said it’s covered by the agency’s normal appropriations.
NAIRR has remained a priority even as the Trump administration has sought to make changes to NSF awards, canceling hundreds of grants that were related to things like diversity, equity and inclusion (DEI) and environmental justice. President Donald Trump’s AI Action Plan, for example, included a recommendation for the NAIRR to “build the foundations for a lean and sustainable NAIRR operations capability.”
According to the solicitation, NSF will make a single award of up to $35 million over a period of up to five years for the operations center project. The awardee would ultimately be responsible for establishing a “community-based organization,” including tasks such as setting up the operating framework, working with stakeholders, and coordinating with the current pilot’s functions.
The awardee would also be eligible to take on expanded responsibilities at a later date, depending on factors such as NAIRR’s priorities, the awardee’s performance and funding.
Top AI Code Generation Tools of 2025 Revealed in Info-Tech Research Group’s Emotional Footprint Report
The recently published 2025 AI Code Generation Emotional Footprint report from Info-Tech Research Group highlights the top AI code generation solutions that help organizations streamline development and support innovation. The report’s insights are based on feedback from users on the global IT research and advisory firm’s SoftwareReviews platform.
TORONTO, Sept. 3, 2025 /PRNewswire/ – Info-Tech Research Group has published its 2025 AI Code Generation Emotional Footprint report, identifying the top-performing solutions in the market. Based on data from SoftwareReviews, a division of the global IT research and advisory firm, the newly published report highlights the five champions in AI-powered code generation tools.
AI code generation tools make coding easier by taking care of repetitive tasks. Instead of starting from scratch, developers get ready-made snippets, smoother workflows, and support built right into their IDEs and version control systems. With machine learning and natural language processing behind them, these tools reduce mistakes, speed up projects, and give developers more room to focus on creative problem solving and innovation.
Info-Tech’s Emotional Footprint measures high-level user sentiment. It aggregates emotional response ratings across 25 proactive questions, creating a powerful indicator of overall user feeling toward the vendor and product. The result is the Net Emotional Footprint, or NEF, a composite score that reflects the overall emotional tone of user feedback.
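A composite score of this kind can be illustrated with a short sketch. This is a hypothetical reconstruction, not Info-Tech’s actual methodology: it assumes the NEF is the share of positive sentiment responses minus the share of negative ones, expressed as a percentage, and the function name and rating labels are illustrative.

```python
def net_emotional_footprint(responses):
    """Compute a hypothetical NEF-style score.

    responses: list of sentiment ratings, each one of
    "positive", "neutral", or "negative".
    Returns % positive minus % negative, rounded to an integer.
    """
    if not responses:
        return 0
    pos = sum(r == "positive" for r in responses)
    neg = sum(r == "negative" for r in responses)
    return round(100 * (pos - neg) / len(responses))

# Example: 97 positive, 1 negative, 2 neutral responses out of 100
print(net_emotional_footprint(
    ["positive"] * 97 + ["negative"] + ["neutral"] * 2))  # prints 96
```

Under this assumption, a score like +96 means positive responses outnumber negative ones by 96 percentage points across all rated questions.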
Data from 1,084 end-user reviews on Info-Tech’s SoftwareReviews platform was used to identify the top AI code generation tools for the 2025 Emotional Footprint report. The insights support organizations looking to streamline development, improve code quality, and scale their software delivery capabilities to drive innovation and business growth.
The 2025 AI Code Generation Tools – Champions are as follows:
- Visual Studio IntelliCode, +96 NEF, ranked high for delivering more than promised.
- ChatGPT 5, +94 NEF, ranked high for its effectiveness.
- GitHub Copilot, +94 NEF, ranked high for its transparency.
- Replit AI, +96 NEF, ranked high for its reliability.
- Amazon Q Developer, +94 NEF, ranked high for helping save time.
Analyst Insight:
“Organizations that adopt AI code generation tools gain a significant advantage in software delivery and innovation,” says Thomas Randall, a research director at Info-Tech Research Group. “These tools help developers focus on complex, high-value work, improve code quality, and reduce errors. Teams that delay adoption risk slower projects, lower-quality software, and missed opportunities to innovate and stay competitive.”
User assessments of software categories on SoftwareReviews provide an accurate and detailed view of the constantly changing market. Info-Tech’s reports are informed by the data from users and IT professionals who have intimate experience with the software throughout the procurement, implementation, and maintenance processes.
Read the full report: Best AI Code Generation Tools 2025
For more information about Info-Tech’s SoftwareReviews, the Data Quadrant, or the Emotional Footprint, or to access resources to support the software selection process, visit softwarereviews.com.
About Info-Tech Research Group
Info-Tech Research Group is one of the world’s leading research and advisory firms, proudly serving over 30,000 IT and HR professionals. The company produces unbiased, highly relevant research and provides advisory services to help leaders make strategic, timely, and well-informed decisions. For nearly 30 years, Info-Tech has partnered closely with teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.
To learn more about Info-Tech’s divisions, visit McLean & Company for HR research and advisory services and SoftwareReviews for software buying insights.
Media professionals can register for unrestricted access to research across IT, HR, and software, and hundreds of industry analysts through the firm’s Media Insiders program. To gain access, contact [email protected].
For information about Info-Tech Research Group or to access the latest research, visit infotech.com and connect via LinkedIn and X.
About SoftwareReviews
SoftwareReviews is a division of Info-Tech Research Group, a world-class technology research and advisory firm. SoftwareReviews empowers organizations with the best data, insights, and advice to improve the software buying and selling experience.
For buyers, SoftwareReviews’ proven software selection methodologies, customer insights, and technology advisors help maximize success with technology decisions. For providers, the firm helps build more effective marketing, product, and sales processes with expert analysts, how-to research, customer-centric marketing content, and comprehensive analysis of the buyer landscape.
SOURCE Info-Tech Research Group
Vanderbilt launches Enterprise AI and Computing Innovation Studio

Vanderbilt University has established the Enterprise AI and Computing Innovation Studio, a groundbreaking collaboration between VUIT, the Amplify Generative AI Innovation Center and the Data Science Institute. This studio aims to prototype and pilot artificial intelligence–driven innovations that enhance how we learn, teach, work and connect.
Each of the partner areas has a strong record of addressing challenges and solving problems independently. By uniting this expertise, the studio can accelerate innovation and expand the capacity of the university to harness emerging technologies to support its mission.
Through the studio, students will have immersive experiences collaborating on AI-focused projects. Staff will deepen their skills through engagement with AI research. In addition, the studio underscores Vanderbilt’s position as a destination for global talent in artificial intelligence and related fields.
Members of the university community who have specific challenges or opportunities that AI may solve or address can submit a consultation request.