AI Research
Maine police can’t investigate AI-generated child sexual abuse images

A Maine man went to watch a children’s soccer game. He snapped photos of kids playing. Then he went home and used artificial intelligence to take the otherwise innocuous pictures and turn them into sexually explicit images.
Police know who he is. But there is nothing they can do because the images are legal to possess under state law, according to Maine State Police Lt. Jason Richards, who is in charge of the Computer Crimes Unit.
While child sexual abuse material has been illegal for decades under both federal and state law, the rapid development of generative AI — which uses models to create new content based on user prompts — means Maine’s definition of those images has lagged behind other states. Lawmakers here attempted to address the proliferating problem this year but took only a partial step.
“I’m very concerned that we have this out there, this new way of exploiting children, and we don’t yet have a protection for that,” Richards said.
Two years ago, it was easy to discern when a piece of material had been produced by AI, he said. It’s now hard to tell without extensive experience. In some instances, it can take a fully clothed picture of a child and make the child appear naked in an image known as a “deepfake.” People also train AI on child sexual abuse materials that are already online.
Nationally, the rise of AI-generated child sexual abuse material is a concern. At the end of last year, the National Center for Missing and Exploited Children reported a 1,325% increase in the number of tips it received related to AI-generated materials. Investigators are also increasingly finding such material in cases involving possession of child sexual abuse materials.
On Sept. 5, a former Maine state probation officer pleaded guilty in federal court to accessing child sexual abuse materials with intent to view them. When federal investigators searched the man’s Kik account, they found he had sought out the content and had at least one image that was “AI-generated,” according to court documents.
Explicit material generated by AI has rapidly become intertwined with real abuse imagery at the same time that Richards’ staff is fielding a growing number of reports. In 2020, his team received 700 tips relating to child sexual abuse materials and reports of adults sexually exploiting minors online in Maine.
By the end of 2025, Richards said he expects his team will have received more than 3,000 tips. It can investigate only about 14% in any given year. His team now has to discard any material that has been touched by AI.
“It’s not what could happen, it is happening, and this is not material that anyone is OK with in that it should be criminalized,” Shira Burns, the executive director of the Maine Prosecutors’ Association, said.
Across the country, 43 states have created laws outlawing sexual deepfakes, and 28 states have banned the creation of AI-generated child sexual abuse material. Twenty-two states have done both, according to MultiState, a government relations firm that tracks how state legislatures have passed laws governing artificial intelligence.
Rep. Amy Kuhn, D-Falmouth, proposed such a ban for Maine earlier this year. But lawmakers on the Judiciary Committee had concerns that the proposed legislation could raise constitutional issues.
She agreed to drop that portion of the bill for now. The version of the bill that passed expanded the state’s pre-existing law against “revenge porn” to include dissemination of altered or so-called “morphed images” as a form of harassment. But it did not label morphed images of children as child sexual abuse material.
The legislation, which was drafted chiefly by the Maine Prosecutors’ Association and the Maine Coalition Against Sexual Assault, was modeled after already enacted law in other places. Kuhn said she plans to propose the expanded definition of sexually explicit material mostly unchanged from her early version when the Legislature reconvenes in January.
Maine’s lack of a law at least labeling morphed images of children as child sexual abuse material makes the state an outlier, said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI. She studies the abusive uses of AI and the intersection of legislation around AI-generated content and the Constitution.
In her research, Pfefferkorn said she’s found that most legislatures that have considered changing pre-existing laws on child sexual abuse material have at least added that morphed images of children should be considered sexually explicit material.
“It’s a bipartisan area of interest to protect children online, and nobody wants to be the person sticking their hand up and very publicly saying, ‘I oppose this bill that would essentially better protect children online,’” Pfefferkorn said.
There is also pre-existing federal law and case law that Maine can look to in drafting its own legislation, she said. Morphed images of children are already banned federally, she said. While federal agencies have a role in investigating these cases, they typically handle only the most serious ones. It mostly falls on the state to police sexually explicit materials.
Come 2026, both Burns and Kuhn said they are confident that the Legislature will fix the loophole because there are plenty of model policies to follow across the country.
“We’re on the tail end of addressing this issue, but I am very confident that this is something that the judiciary will look at, and we will be able to get a version through, because it’s needed,” Burns said.
AI Research
Study reveals why humans adapt better than AI

Humans adapt to new situations through abstraction, while AI relies on statistical or rule-based methods, limiting flexibility in unfamiliar scenarios.
A new interdisciplinary study from Bielefeld University and other leading institutions explores why humans excel at adapting to new situations while AI systems often struggle. Researchers found humans generalise through abstraction and concepts, while AI relies on statistical or rule-based methods.
The study proposes a framework to align human and AI reasoning, defining generalisation, how it works, and how it can be assessed. Experts say differences in generalisation limit AI flexibility and stress the need for human-centred design in medicine, transport, and decision-making.
Researchers collaborated across more than 20 institutions, including Bielefeld, Bamberg, Amsterdam, and Oxford, under the SAIL project. The initiative aims to develop AI systems that are sustainable, transparent, and better able to support human values and decision-making.
Interdisciplinary insights may guide the responsible use of AI in human-AI teams, ensuring machines complement rather than disrupt human judgement.
The findings underline the importance of bridging cognitive science and AI research to foster more adaptable, trustworthy, and human-aligned AI systems capable of tackling complex, real-world challenges.
AI Research
Josh Bersin Company Research Reveals How Talent Acquisition Is Being Revolutionized by AI

- Jobs aren’t disappearing. Through AI, talent acquisition is fast evolving from hand-crafted interviewing and recruiting to a data-driven model that ensures the right talent is hired at the right time, for the right role, with unmatched accuracy.
- Traditional recruiting isn’t working: in 2024, only 17% of applicants received interviews and 60% abandoned slow application processes.
- AI drives 2–3x faster hiring, stronger candidate quality, sharper targeting, and 95% candidate satisfaction at Foundever, from 200,000+ applicants in just six months.
OAKLAND, Calif., Sept. 16, 2025 /PRNewswire/ — The Josh Bersin Company, the world’s most trusted HR advisory firm, today released new research showing that jobs aren’t disappearing—they’re being matched with greater intelligence. The research, produced in collaboration with AMS, reveals major advances in talent acquisition (TA) driven by AI-enabled technology, which are yielding 2–3x faster time to hire, stronger candidate-role matches, and unprecedented precision in sourcing.
The global market for recruiting, hiring, and staffing is over $850 billion and is growing at 13% per year despite the economic slowdown, though signs of strain are evident. As a result, TA leaders are turning to AI to adapt, as AI transforms jobs and creates demand for new roles, new skills, and AI expertise.
According to the research and advisory firm, even without AI disruption, over 20% of employees consider changing jobs each year, driving demand for a new wave of high-precision, AI-powered tools for assessment, interviewing, selection, and hiring. Companies joining this AI revolution are hiring 200-300% faster, with greater accuracy and efficiency than their peers, despite the job market slowdown.
According to the report, The Talent Acquisition Revolution: How AI is Transforming Recruiting, the TA automation revolution is delivering benefits across the hiring ecosystem: job seekers experience faster recognition and better fit, while employers gain accurate, real-time, and highly scalable recruitment.
This comes against a backdrop of failure in current hiring. In 2024, fewer than one in five applicants (17%) made it to the interview stage, and 60% of job seekers abandoned the application process entirely because hiring portals were too slow.
The research shows how organizations are already realizing benefits such as lower hiring costs, stronger internal mobility, and higher productivity. AI-empowered TA teams are also streamlining operations by shifting large portions of manual, admin-heavy work to specialized vendors.
AI Research
Causaly Introduces First Agentic AI Platform Built for Life Sciences Research and Development
Specialized AI agents automate research workflows and accelerate
drug discovery and development with transparent, evidence-backed insights
LONDON, Sept. 16, 2025 /PRNewswire/ — Causaly today introduced Causaly Agentic Research, an agentic AI breakthrough that delivers the transparency and scientific rigor that life sciences research and development demands. First-of-their-kind, specialized AI agents access, analyze, and synthesize comprehensive internal and external biomedical knowledge and competitive intelligence. Scientists can now automate complex tasks and workflows to scale R&D operations, discover novel insights, and drive faster decisions with confidence, precision, and clarity.
Industry-specific scientific AI agents
Causaly Agentic Research builds on Causaly Deep Research with a conversational interface that lets users interact directly with Causaly AI research agents. Unlike legacy literature review tools and general-purpose AI tools, Causaly Agentic Research uses industry-specific AI agents built for life sciences R&D and securely combines internal and external data to create a single source of truth for research. Causaly AI agents complete multi-step tasks across drug discovery and development, from generating and testing hypotheses to producing structured, transparent results always backed by evidence.
“Agentic AI fundamentally changes how life sciences conducts research,” said Yiannis Kiachopoulos, co-founder and CEO of Causaly. “Causaly Agentic Research emulates the scientific process, automatically analyzing data, finding biological relationships, and reasoning through problems. AI agents work like digital assistants, eliminating manual tasks and dependencies on other teams, so scientists can access more diverse evidence sources, de-risk decision-making, and focus on higher-value work.”
Solving critical research challenges
Research and development teams need access to vast amounts of biomedical data, but manual and siloed processes slow research and create long cycle times for getting treatments to market. Scientists spend weeks analyzing narrow slices of data while critical insights remain hidden. Human biases influence decisions, and the volume of scientific information overwhelms traditional research approaches.
Causaly addresses these challenges as the first agentic AI platform for scientists that combines extensive biomedical information with competitive intelligence and proprietary datasets. With a single, intelligent interface for scientific discovery that fits within scientists’ existing workflows, research and development teams can eliminate silos, improve productivity, and accelerate scientific ideas to market.
Comprehensive agentic AI research platform
As part of the Causaly platform, Causaly Agentic Research provides scientists multiple AI agents that collaborate to:
- Conduct complex analysis and provide answers that move research forward
- Verify quality and accuracy to dramatically reduce time-to-discovery
- Continuously scan the scientific landscape to surface critical signals and emerging evidence in real time
- Deliver fully traceable insights that help teams make confident, evidence-backed decisions while maintaining scientific rigor for regulatory approval
- Connect seamlessly with internal systems, public applications, data sources, and even other AI agents, unifying scientific discovery
Availability
Causaly Agentic Research will be available in October 2025, with a conversational interface and foundational AI agents to accelerate drug discovery and development. Additional specialized AI agents are planned for availability by the end of the year.
Explore how Causaly Agentic Research can redefine your R&D workflows and bring the future of drug development to your organization at causaly.com/products/agentic-research.
About Causaly
Causaly is a leader in AI for the life sciences industry. Leading biopharmaceutical companies use the Causaly AI platform to find, visualize, and interpret biomedical knowledge and automate critical research workflows. To learn how Causaly is accelerating drug discovery through transformative AI technologies and getting critical treatments to patients faster, visit www.causaly.com.