AI Insights
AI is already in school. Some lawmakers say Delaware needs to keep up

- Delaware Sen. Lisa Blunt Rochester co-introduced a bill to help states develop K-12 academic standards for artificial intelligence.
- Indian River High School is accepting nominations for its alumni hall of fame to honor graduates for their achievements and service.
- A tuition-free summer program, Horizons Tower Hill, aims to close academic and social gaps for Wilmington students.
The school year is humming right along, and some Delaware leaders are hoping schools can stay ahead of the technology curve.
Down in Washington, D.C., Sen. Lisa Blunt Rochester has joined other lawmakers to support strengthening state standards in bringing artificial intelligence and digital literacy to the classroom.
Meanwhile, Indian River High School is looking far and wide for alumni talent to fill its hall of fame.
In this weekly roundup, we’ll catch you up on some education updates you may have missed.
(Did we miss another good education story? Let me know: kepowers@gannett.com.)
Sen. Blunt Rochester helps introduce bill to bring AI to classroom
It isn’t news that AI has reached the classroom.
But Delaware’s U.S. Sen. Lisa Blunt Rochester, alongside Republican Sen. Jon Husted from Ohio, introduced a bill on Sept. 9 looking to support states in “developing academic standards for artificial intelligence for K-12 students.” The Recommending Artificial Intelligence Standards in Education Act, or the RAISE Act, looks to give states the authority to develop their own AI curriculum and build competency, according to a press release.
The act comes in response to a growing need for AI and technology literacy in schools, so American – and Delawarean – students can compete. The next generation needs a growing academic foundation to “leave the classroom with the skills and knowledge they need to succeed in our changing economy,” Blunt Rochester’s press release reads.
The Elementary and Secondary Education Act is the federal law that already requires states to set learning standards for core subjects, like math, reading and science. This legislation would encourage the creation of similar standards for artificial intelligence, according to lawmakers.
Indian River High School needs you to fill its hall of fame
As the school district put it: “Every community has its quiet heroes.”
And Indian River High School is hoping to find them. The Indian River High School Hall of Fame looks to honor alumni who embody the values of citizenship, leadership and service, calling the honor “not only a recognition of personal achievement but also a reminder to current students that greatness can begin right here in Frankford.”
Previous inductees have included legislators and educators, as well as artists, athletes and civic leaders who helped build communities, now spread out across the country.
Nominations for the 2025-26 hall of fame are now being accepted.
To be eligible, nominees must have graduated from Indian River High School at least 10 years prior to nomination. The selection committee looks at professional accomplishments, recognition in the nominee’s chosen field, as well as a demonstrated commitment to service and to the IRHS community.
The deadline to submit is Oct. 17, and more information can be found online.
ICYMI: Growing Horizons Tower Hill looks to fill gaps school can’t reach
Gemelle John became the executive director of “Horizons Tower Hill,” Delaware’s first affiliate of a national nonprofit, in 2023.
It’s not summer school, and it’s not camp. A budding program complete with core classes, reading specialists and field trips alike, Horizons offers a tuition-free, six-week program to support Wilmington students in the summer.
“I think about it pragmatically, so literally doing things to close the gap,” John said in a recent Q&A with Delaware Online/The News Journal.
“Sending their kids to all sorts of experiences, traveling, all that ‘social capital’ is gained by families with a lot of resources – we’re closing the gap by providing field trips and enrichment.”
It all started with about 15 rising first graders. Now, this summer, nearly 40 rising first-to-third graders were learning with educators, reading specialists and dozens of Tower Hill student volunteers. The program looks to build relationships with teachers and families alike.
John has just one school partnership, so far, in EastSide Charter – but she hopes to see much more on the horizon.
“If there are schools in and around the city who are interested, they can reach out to me,” she said. “We are currently vetting for a second school. We’re actively looking for that sometime this fall. That school should understand we are looking to create a long-term relationship.”
Got a tip? Contact Kelly Powers at kepowers@gannett.com.
Should You Use ChatGPT For Therapy?

Sharing how you’re feeling can be frightening. Friends and family can judge, and therapists can be expensive and hard to come by, which is why some people are turning to ChatGPT for help with their mental health.
While some credit the AI service with saving their life, others say the lack of regulation around it can pose dangers. Psychology experts from Northeastern said there are safety and privacy issues raised when someone opens up to artificial intelligence chatbots like ChatGPT.
“AI is really exciting as a new tool that has a lot of promise, and I think there’s going to be a lot of applications for psychological service delivery,” says Jessica Hoffman, a professor of applied psychology at Northeastern University. “It’s exciting to see how things are unfolding and to explore the potential for supporting psychologists and mental health providers in our work.
“But when I think about the current state of affairs, I have significant concerns about the limits of ChatGPT for providing psychological services. There are real safety concerns that people need to be aware of. ChatGPT is not a trained therapist. It doesn’t abide by the legal and ethical obligations that mental health service providers are working with. I have concerns about safety and people’s well-being when they’re turning to ChatGPT as their sole provider.”
The cons
It’s easy to see the appeal of confiding in a chatbot. Northeastern experts say therapists can be costly and it’s difficult to find one.
“There’s a shortage of professionals,” Hoffman says. “There are barriers with insurance. There are real issues in rural areas where there’s even more of a shortage. It does make it easier to be able to just reach out to the computer and get some support.”
Chatbots can also serve as a listening ear.
“People are lonely,” says Josephine Au, an assistant clinical professor of applied psychology at Northeastern University. “People are not just turning to (general purpose generative AI tools like) ChatGPT for therapy. They’re also looking for companionship, so sometimes it just naturally evolves into a therapy-like conversation. Other times they use these tools more explicitly as a substitute for therapy.”
However, Au says these forms of artificial intelligence are not designed to be therapeutic. In fact, these models are often set up to validate the user’s thoughts, a problem that poses a serious risk for those dealing with delusions or suicidal thoughts.
There have been cases of people who died by suicide after getting guidance on how to do so from AI chatbots, one of which prompted a lawsuit. There are also increasing reports of hospitalizations due to “AI psychosis,” where people have mental health episodes triggered by these chatbots. OpenAI added more guardrails to ChatGPT after finding it was encouraging unhealthy behavior.
The American Psychological Association warned against using AI chatbots for mental health support. Research from Northeastern found that people can bypass the language model’s guardrails and use it to get details on how to harm themselves or even die by suicide.
“I don’t think it’s a good idea at all for people to rely on non-therapeutic platforms as a form of therapy,” Au says. “We’re talking about interactive tools that are designed to be agreeable and validating. There are risks to like what kind of data is generated through that kind of conversation pattern. A lot of the LLM tools are designed to be agreeable and can reinforce some problematic beliefs about oneself.”
This is especially pertinent when it comes to diagnosis. Au says people might think they have a certain condition, ask ChatGPT about it, and get a “diagnosis” from their own self-reported symptoms thanks to the way the model works.
But Northeastern experts say a number of factors go into a diagnosis, such as examining a patient’s body language and looking at their life more holistically as the clinician develops a relationship with them. These are things AI cannot do.
“It feels like a slippery slope,” says Joshua Curtiss, an assistant professor of applied psychology at Northeastern University. “If I tell ChatGPT I have five of these nine depression symptoms and it will sort of say, ‘OK, sounds like you have depression’ and end there. What the human diagnostician would do is a structured clinical assessment. They’ll ask lots of follow-up questions about examples to support (you’ve had) each symptom for the time criteria that you’re supposed to have it to, and that the aggregate of all these symptoms falls underneath a certain mental health disorder. The clinician might ask the patient to provide examples (to) justify the fact that this is having a severe level of interference in your life, like how many hours out of your job is it taking? That human element might not necessarily be entrenched in the generative AI mindset.”
Then there are the privacy concerns. Clinicians are bound by HIPAA, but chatbots don’t face the same restrictions when it comes to protecting the personal information people might share with them. OpenAI CEO Sam Altman has said there is no legal confidentiality for people using ChatGPT.
“The guardrails are not secure for the kind of sensitive information that’s being revealed,” Hoffman says of people using AI as therapists. “People need to recognize where their information is going and what’s going to happen to that information. Something that I’m very aware of as I think about training psychologists at Northeastern is really making sure that students are aware of the sensitive information they’re going to be getting as they work with people, and making sure that they don’t put that in any of that information into ChatGPT because you just don’t know where that information is going to go. We really have to be very aware of how we’re training our students to use ChatGPT. This is like a really big issue in the practice of psychology.”
The pros
While artificial intelligence poses risks when used by patients, Northeastern experts say certain models could be helpful to clinicians when trained the right way and with the proper privacy safeguards in place.
Curtiss, a member of Northeastern’s Institute for Cognitive and Brain Health, has done extensive work with artificial intelligence, specifically machine learning. His recent research found that these types of models can help predict treatment outcomes for certain mental health disorders.
“I use machine learning a lot with predictive modeling where the user has more say in what’s going on as opposed to large language models like the common ones we’re all using,” Curtiss says.
Northeastern’s Institute for Cognitive and Brain Health is partnering with experiential AI researchers to see if they can develop therapeutic tools.
Hoffman says she also sees the potential for clinicians to use artificial intelligence where appropriate in order to improve their practice.
“It could be helpful for assessment,” Hoffman says. “It could be a helpful tool that clinicians use to help with intakes and with assessment to help guide more personalized plans for therapy. But it’s not automatic. It needs to have the trained clinician providing oversight and it needs to be done on a safe, secure platform.”
For patients, Northeastern experts say there are some positive uses of chatbots that don’t require using them as a therapist. For example, Au says these tools can help people summarize their thoughts or come up with ways to continue certain practices their clinicians suggest for their health. Hoffman suggests it could also be a way for people to connect with providers.
But overall, experts say it’s better to find a therapist than lean on chatbots not designed to serve as therapeutic tools.
“I have a lot of hopes, even though I also have a lot of worries,” Au says. “The leading agents in commercialization of and monetization of mental health care tools are people, primarily people in tech, venture capitalists and researchers who lack clinical experience and not practicing clinicians who understand what psychotherapy is as well as patients. There are users who claim that these tools have been really helpful for them (to) reduce the sense of isolation and loneliness. I remain skeptical about the authenticity of these because some of this could be driven by money.”
Future-proofing the enterprise: Cultivating 3 essential leadership skills for the agentic AI era

The agentic AI era is here, and it will reshape how businesses operate. The question is: How quickly can you equip your leadership team and workforce with the capabilities to harness these agents’ power?
This isn’t just about integrating more automation; it’s about leading organizations through a paradigm shift where autonomous AI agents will increasingly define workflows, decision-making and competitive advantage. This necessitates a strategic focus on three core leadership skills, designed not just to future-proof individual careers, but to ensure the enduring resilience, transformation and innovative capacity of your entire enterprise.
1. The “agent architect”: Mastering prompt engineering and strategic oversight
The challenge: In the traditional IT landscape, leadership defines requirements and teams build to spec. In the agentic era, the “spec” becomes a high-level goal, and the “build” is largely executed by autonomous agents. Without effective guidance, these agents can stray, underperform or even introduce new risks.