AI Research
Brown University launches $20m AI institute for mental health

“Any AI system that interacts with people, especially those who may be in states of distress or other vulnerable situations, needs a strong understanding of the human it’s interacting with, along with a deep causal understanding of the world and how the system’s own behavior affects that world,” said Ellie Pavlick, an associate professor of computer science at Brown who will lead the ARIA collaboration. “Mental health is a high-stakes setting that embodies all the hardest problems facing AI today. That’s why we’re excited to tackle this and figure out what it takes to get these things absolutely right.”
With mental health care needs becoming more critical amid widespread cuts to funding and staffing, and with millions of Americans unable to access adequate therapy, many people are turning to AI as a form of therapy. More than one in five Americans lives with a mood, anxiety, or substance abuse disorder, according to the National Institutes of Health. While effective treatments exist, they can be costly to access. AI has the potential to break through these barriers, Pavlick said.
But experts have concerns: Risks posed by currently unregulated chatbots, like ChatGPT, include misdiagnoses, privacy violations, inappropriate treatments, and exploitation.
“Part of the work of the institute will be to understand what forms this technology could take, which types of systems could work, and which shouldn’t exist,” said Pavlick.
Pavlick acknowledged that AI is already being used in the mental health space, with some bots specifically designed for mental health support alongside more general-purpose chatbots.
“A key priority for ARIA is to think critically about this technology that is already being put out into the world: what is working, what is not, how do we retain the benefits without causing near- or long-term harms to individuals or society,” said Pavlick.
The research institute will be invested in AI and mental health care long term, Pavlick said, and will “stubbornly resist pressure to describe what this could or should look like right away.”
It will bring together experts from across the nation who specialize in computer science, machine learning, cognitive and behavioral science, law, philosophy, and education.
The news comes months after the Trump administration confirmed to the Globe that Brown would face a $510 million federal funding freeze, part of what the administration calls its effort to hold elite universities accountable for persistent antisemitism on campus. The university has yet to receive any notification of the administration’s efforts to cut its funding.
Even as some grants at Brown have been terminated, Brown researchers have continued to pursue federal funding opportunities, university leaders say, including to help build the institute.
During its first year, researchers will hear from a range of stakeholders — including mental health professionals, patients, policy makers, and the general public — which will drive the institute’s vision for what the ideal technology might look like.
ARIA is one of five national AI institutes that will share a total of $100 million in funding, which the National Science Foundation announced on Tuesday in partnership with Capital One and Intel. Capital One is contributing $1 million over five years to support the research efforts, according to a press release.
The public-private investment aligns with the White House AI Action Plan, a national initiative released July 23 designed to sustain and enhance America’s global AI leadership. The plan identifies more than 90 federal policy actions, including exporting American AI and promoting the rapid buildout of data centers.
“Winning the AI Race is non-negotiable,” said Secretary of State Marco Rubio, who earlier this month was targeted by an imposter who reportedly used AI to impersonate him and contact foreign and US officials.
In a sweeping effort, President Trump signed three executive orders last week when releasing the AI plan, which is meant to “remove red tape and onerous regulation.” One executive order barred the federal government from buying AI tools that are considered ideologically biased.
Many of the changes in the White House’s plan would benefit tech giants. Since OpenAI publicly released ChatGPT in late 2022, tech companies including Google, Microsoft, and Meta have raced to produce their own versions of the technology. While the companies jockey for access to computing power, usually from huge data centers filled with computers, they are also straining local resources.
“Artificial intelligence is key to strengthening our workforce and boosting U.S. competitiveness,” said Brian Stone, who is performing the duties of the National Science Foundation director. “Through the National AI Research Institutes, we are turning cutting-edge ideas and research into real-world solutions and preparing Americans to lead in the technologies and jobs of the future.”
The institute’s work will include an education and workforce development program for K-12 students. The team will work with the Bootstrap program, a computer science curriculum developed at Brown, to support evidence-based practices for building new AI curricula and training for K-12 teachers, according to a press release.
“These are extremely hard problems in AI in general that happen to have a particularly pointed use case in mental health,” said Pavlick. “By working toward answers to these questions, we’ll work toward making AI that’s beneficial to all.”
AI Research
Maine police can’t investigate AI-generated child sexual abuse images

A Maine man went to watch a children’s soccer game. He snapped photos of kids playing. Then he went home and used artificial intelligence to take the otherwise innocuous pictures and turn them into sexually explicit images.
Police know who he is. But there is nothing they can do, because the images are legal to possess under state law, according to Maine State Police Lt. Jason Richards, who is in charge of the Computer Crimes Unit.
While child sexual abuse material has been illegal for decades under both federal and state law, the rapid development of generative AI — which uses models to create new content based on user prompts — means Maine’s definition of those images has lagged behind that of other states. Lawmakers here attempted to address the proliferating problem this year but took only a partial step.
“I’m very concerned that we have this out there, this new way of exploiting children, and we don’t yet have a protection for that,” Richards said.
Two years ago, it was easy to discern when a piece of material had been produced by AI, he said. It’s now hard to tell without extensive experience. In some instances, AI can take a fully clothed picture of a child and make the child appear naked in an image known as a “deepfake.” People also train AI models on child sexual abuse materials that are already online.
Nationally, the rise of AI-generated child sexual abuse material is a concern. At the end of last year, the National Center for Missing and Exploited Children saw a 1,325% increase in the number of tips it received related to AI-generated materials. Investigators are also finding such material more often in cases involving possession of child sexual abuse materials.
On Sept. 5, a former Maine state probation officer pleaded guilty in federal court to accessing with intent to view child sexual abuse materials. When federal investigators searched the man’s Kik account, they found he had sought out the content and had at least one image that was “AI-generated,” according to court documents.
AI-generated explicit material has rapidly become intertwined with real material at the same time that Richards’ staff is fielding increasing numbers of reports. In 2020, Richards’ team received 700 tips relating to child sexual abuse materials and reports of adults sexually exploiting minors online in Maine.
By the end of 2025, Richards said he expects his team will have received more than 3,000 tips. They can investigate only about 14% in any given year. His team now has to discard any material that is touched by AI.
“It’s not what could happen, it is happening, and this is not material that anyone is OK with in that it should be criminalized,” Shira Burns, the executive director of the Maine Prosecutors’ Association, said.
Across the country, 43 states have created laws outlawing sexual deepfakes, and 28 states have banned the creation of AI-generated child sexual abuse material. Twenty-two states have done both, according to MultiState, a government relations firm that tracks state legislation governing artificial intelligence.
Rep. Amy Kuhn, D-Falmouth, proposed banning AI-generated child sexual abuse material in Maine earlier this year. But lawmakers on the Judiciary Committee had concerns that the proposed legislation could raise constitutional issues.
She agreed to drop that portion of the bill for now. The version of the bill that passed expanded the state’s pre-existing law against “revenge porn” to include dissemination of altered or so-called “morphed images” as a form of harassment. But it did not label morphed images of children as child sexual abuse material.
The legislation, which was drafted chiefly by the Maine Prosecutors’ Association and the Maine Coalition Against Sexual Assault, was modeled on laws already enacted elsewhere. Kuhn said she plans to propose the expanded definition of sexually explicit material, mostly unchanged from her earlier version, when the Legislature reconvenes in January.
Maine’s lack of a law at least labeling morphed images of children as child sexual abuse material makes the state an outlier, said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI. She studies the abusive uses of AI and the intersection of legislation around AI-generated content and the Constitution.
In her research, Pfefferkorn said, she has found that most legislatures that have considered changing pre-existing laws on child sexual abuse material have at least specified that morphed images of children should be treated as sexually explicit material.
“It’s a bipartisan area of interest to protect children online, and nobody wants to be the person sticking their hand up and very publicly saying, ‘I oppose this bill that would essentially better protect children online,’” Pfefferkorn said.
There is also pre-existing federal law and case law that Maine can look to in drafting its own legislation, she said; morphed images of children are already banned federally. While federal agencies have a role in investigating these cases, they typically handle only the most serious ones. It mostly falls on states to police sexually explicit materials.
Both Burns and Kuhn said they are confident that the Legislature will close the loophole in 2026, because there are plenty of model policies to follow across the country.
“We’re on the tail end of addressing this issue, but I am very confident that this is something that the judiciary will look at, and we will be able to get a version through, because it’s needed,” Burns said.
AI Research
Empowering clinicians with intelligence at the point of conversation

AI Research
Ethical robots and AI take center stage with support from National Science Foundation grant

Building on success
Robot theater has been regularly offered at Eastern Montgomery Elementary School, Virginia Tech’s Child Development Center for Learning and Research, and the Valley Interfaith Child Care Center. In 2022, the project took center stage in the Cube during Ut Prosim Society Weekend with a professional-level performance about climate change awareness that combined robots, live music, and motion tracking.
The after-school program engages children through four creative modules: acting, dance, music and sound, and drawing. Each week includes structured learning and free play, giving students time to explore both creative expression and technical curiosity. Older children sometimes learn simple coding during free play, but the program’s focus remains on embodied learning, like using movement and play to introduce ideas about technology and ethics.
“It’s not a sit-down-and-listen kind of program,” Jeon said. “Kids use gestures and movement — they dance, they act, they draw. And through that, they encounter real ethical questions about robots and AI.”
Acting out the future of AI
The grant will allow the team to formalize the program’s foundation through literature reviews, focus groups, and workshops with educators and children. This research will help identify how young learners currently encounter ideas about robotics and AI and where gaps exist in teaching ethical considerations.
The expanded curriculum will weave in topics such as fairness, privacy, and bias in technology, inviting children to think critically about how robots and AI systems affect people’s lives. These concepts will be introduced not as abstract lessons or coding, but through storytelling, performance, and play.
“Students might learn about ethics relating to security and privacy during a module where they engage with a robot that tracks their movements while they dance,” Jeon said. “From there, there can be a guided discussion about how information collected from humans is used to train AI and robots.”
With the new National Science Foundation funding, researchers also plan to expand robot theater into museums and other informal learning environments, offering flexible formats such as one-day workshops and summer sessions. They will make the curriculum and materials openly available on GitHub and other platforms, ensuring educators and researchers nationwide can adapt the program to their own communities.
“This grant lets us expand what we’ve built and make it more robust,” Jeon said. “We can refine the program based on real needs and bring it to more children in more settings.”