AI Insights
AI to stop prison violence before it happens

- Clampdown on violence in prisons as AI helps to identify dangerous prisoners and bring them under tight supervision
- AI will also be used to uncover secret messages sent by prisoners and stop weapons or contraband getting into prisons
- Ministry of Justice’s AI Action Plan sets out how tech will cut reoffending and make streets safer as part of the Plan for Change
Under the Ministry of Justice’s AI Action Plan, artificial intelligence predicts the risk an offender could pose and informs decisions to place dangerous prisoners under tighter supervision, cutting crime and delivering swifter justice for victims. This will help to cut reoffending and make our streets safer as part of the Plan for Change.
AI will be used across prisons, probation and courts to better track offenders and assess the risk they pose with tools that can predict violence behind bars, uncover secret messages sent by prisoners and connect offender records across different systems.
The AI violence predictor analyses different factors such as a prisoner’s age and previous involvement in violent incidents while in custody. This will help prison officers assess threat levels on wings and intervene or move prisoners before violence escalates.
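The Ministry has not published how the predictor weighs these factors. Purely as an illustration of the idea, the sketch below combines two of the factors the release names, age and prior violent incidents in custody, into a single score; the field names, weights and thresholds are invented, not the actual model.

```python
# Hypothetical sketch only: the MoJ has not published its model.
# Combines two factors named in the release into one illustrative score.
from dataclasses import dataclass

@dataclass
class PrisonerRecord:
    age: int
    prior_violent_incidents: int  # violent incidents while in custody

def violence_risk_score(record: PrisonerRecord) -> float:
    """Return a score in [0, 1]; higher suggests tighter supervision."""
    # Invented weights: younger age and more prior incidents raise the score.
    # A real system would fit these statistically on historical data.
    age_factor = max(0.0, (40 - record.age) / 40)             # 0 once age >= 40
    incident_factor = min(1.0, record.prior_violent_incidents / 5)
    return round(0.4 * age_factor + 0.6 * incident_factor, 2)

print(violence_risk_score(PrisonerRecord(age=22, prior_violent_incidents=3)))  # 0.54
```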
Another AI tool will be able to digitally scan the contents of mobile phones seized from prisoners to rapidly flag messages that could provide intelligence on potential crimes being committed behind bars, such as secret code words.
This will allow staff to discover potential threats of violence to other inmates or prison officers as well as plans to escape and smuggle in weapons or contraband.
These phones – often used for gang activity, drug trafficking and intimidation – are a major source of violence in prisons.
This technology, which uses AI-driven language analysis, has already been trialled across the prison estate and has analysed over 8.6 million messages from 33,000 seized phones.
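The department describes the tool only as AI-driven language analysis. As a hedged sketch of the triage idea, not the actual system, the snippet below flags messages containing suspected code words for human review; the word list and example messages are invented.

```python
# Hypothetical sketch: surface seized-phone messages that contain suspected
# code words so analysts can review them first. The real tool uses AI-driven
# language analysis; this simple keyword pass only illustrates the triage step.
SUSPECTED_CODE_WORDS = {"parcel", "drop", "green light"}  # invented examples

def flag_messages(messages: list[str]) -> list[str]:
    """Return the subset of messages worth a closer human look."""
    flagged = []
    for text in messages:
        lowered = text.lower()
        if any(word in lowered for word in SUSPECTED_CODE_WORDS):
            flagged.append(text)
    return flagged

msgs = ["Parcel lands on B wing Friday", "See you at visits on Sunday"]
print(flag_messages(msgs))  # ['Parcel lands on B wing Friday']
```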
Lord Chancellor and Secretary of State for Justice, Shabana Mahmood, said:
Artificial intelligence will transform the justice system. We are embracing its full potential as part of our Plan for Change.
These tools are already fighting violence in prisons, tracking offenders, and releasing our staff to focus on what they do best: cutting crime and making our streets safer.
The AI Action Plan also outlines how the department will create a single digital ID for all offenders with AI helping to link separate records across courts, prisons and probation for the first time.
This will match records that older search systems could never link because of slight typos or missing words, enabling closer monitoring and more effective sentencing.
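A standard way to link records despite typos is fuzzy string matching. As a minimal sketch that assumes nothing about the department’s actual system, the snippet below uses Python’s standard-library difflib to flag two near-identical names as candidates for a single offender ID; the names and the 0.85 threshold are invented.

```python
# Hypothetical sketch of fuzzy record linkage: catch offender records that
# exact keyword search misses because of slight typos or missing words.
# difflib ships with Python; the names and threshold below are invented.
from difflib import SequenceMatcher

def likely_same_person(a: str, b: str, threshold: float = 0.85) -> bool:
    """True if two name strings are similar enough to review as one record."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

court_record = "John Michael Smyth"
prison_record = "John Micheal Smith"  # transposed letters plus a one-letter typo

print(likely_same_person(court_record, prison_record))  # True: candidates for one digital ID
```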
In the Probation Service, AI pilots have already shown a 50% reduction in note-taking time, allowing officers to focus on risk management, monitoring and face-to-face meetings with offenders.
Building on this success, the tool will be rolled out to all probation officers, and potentially in prisons and courts too.
The AI Action Plan also sets out how technology can ease pressure on courts and improve services for the public. This includes a digital assistant being developed to help families resolve child arrangement disputes outside of court.
Alexander Iosad, Director of Government Innovation Policy at the Tony Blair Institute, said:
This Action Plan shows exactly the kind of ambition we need across government to embrace AI for a genuine renewal of our public services. If implemented well and at pace, these technologies won’t just ease the pressure on our prisons but also help offenders receive the personalised support they need for effective rehabilitation, making streets safer, and ensuring that victims facing incredibly difficult moments get the justice they deserve. This is what modern, data-driven public service reform to deliver real change for citizens should look like.
Earlier this year, the Lord Chancellor set out her vision for the Probation Service, which included an £8 million pledge to introduce new technology to help risk assess offenders and cut back on admin, increasing focus on those offenders who pose the greatest risk to the public.
In the Spending Review, the Government announced that the Probation Service will receive up to £700 million, an almost 45% increase in funding. This new funding will mean tens of thousands more offenders can be tagged and monitored in the community.
AI Insights
Anthropic to pay authors $1.5 billion in settlement over chatbot training material (NPR)

[Photo: Thriller novelist Andrea Bartz in her home in the Brooklyn borough of New York City. Richard Drew/AP]
NEW YORK — Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.
The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.
The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement.
“As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”
A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.
[Photo: The Anthropic website and mobile phone app, New York, July 5, 2024. Richard Drew/AP]
A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.
If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.
“We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.
U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.
Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”
“We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.
As part of the settlement, the company has also agreed to destroy the original book files it downloaded.
Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.
Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.
Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset.
Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.
The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.
On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”
The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.
“On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.
On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.
“It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.
The privately held Anthropic, founded by ex-OpenAI leaders in 2021, said Tuesday that it had raised another $13 billion in investments, putting its value at $183 billion.
Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.
AI Insights
PR News | Will Artificial Intelligence Destroy the Communications Industry?

By Simon Erskine Locke
I recently met a leader in the communications industry, and as we were chatting over coffee, he shared that he’s been hearing the phrase “two things can be true at the same time” a lot recently. This is also something I’ve been saying for a couple of years in discussions around politics, AI, and a variety of other issues.
In a polarized world in which opinions are shared as fact, data and statistics are made to fit ideologies and the truth doesn’t seem to matter, expressing the view that two seemingly contradictory perspectives can both be true is a pragmatic way to find common ground. It recognizes that there are different ways to look at the same issues.
While making the effort to recognize different perspectives is healthy, ideologues (on either side of the political spectrum) are rarely interested in recognizing that there may be another side to an argument. When you are devoted to a particular position, the idea of an alternate version — or even the acknowledgement that there may be grey between black and white — creates cognitive dissonance.
Why bring this up? In part, because many of the discussions around AI seem to be somewhat bipolar.
For many, AI is still the shiny new tool that will write great emails, automate the lengthy process of engaging with journalists, or lead to faster and easier content generation. For others, AI will kill jobs, dumb down the industry, or lead us to an existential doomsday in which the rise of content leads to the fall of engagement.
As someone who has spent significant time with AI companies, building tools, working with various LLMs, and discussing the impact of AI with lawmakers, I firmly believe that there are reasons to be optimistic and pessimistic. It’s not all black and white.
One way to frame the discussion of AI is to think of it like electricity. Electricity is key to powering the economy and it drives machines that do a lot of different things. Some of those are good. Some are not. Electricity gives us light, but it can also kill us.
AI, like electricity, is not intrinsically good or bad. It’s what we do with it that matters. As communicators, we have agency. We decide which choices will shape the future of the industry. We are not powerless.
We are responsible for making decisions about how AI is employed. And, consequently, if we get this wrong, shame on us. If communicators ultimately put the industry out of business by automating the engagement process with journalists, mass producing content to game LLM algorithms, and delegating thinking to chatbots — rather than helping the next generation of communicators hone their writing, editing, fact checking, and critical thinking skills — that will be on us.
Equally, if we don’t leverage AI, we will miss an opportunity. AI can help streamline workflows and its access to the vast body of knowledge on the internet can lead to smarter, more informed engagement with reporters and impactful content.
A key takeaway from conversations with AI startups is that they are now able to do things that were simply not possible two years ago. One is making the restaurant booking process more efficient, leading to greater longevity of the businesses they work with – which keeps staff employed. Another company’s voice technology is enabling local government to serve constituents at any time and in any language.
As with every other generational technology shift, some jobs will disappear and others will be created. Communicators need to avoid both Panglossian optimism and the trap of seeing AI as the end of days.
Finding the right use cases and effectively implementing the technology will be essential. The customer service line of a major financial institution states, “We are using AI to deliver exceptional customer service”, only to require the customer to repeat the same basic information three times. This underscores the distance between AI’s potential and the imperfect experience most of us see every day.
Pragmatic agency and corporate communications leaders will continue to experiment and invest time in understanding what is now possible with AI. They will need to implement tools selectively, while carefully considering the impact of their decisions on the industry in the years to come.
At this stage, there is an element of the blind leading the blind with AI. Startups are not omniscient. Communicators looking at applications as a magic bullet are going to be sorely disappointed. We are already seeing questions about the returns on the rush of gold into AI, significant gaps between the vision and the experience, and the dark side of the technology in areas such as rising fraud and malicious deepfakes. As I have written previously, AI is creating new problems to solve – and is a driving force behind new solutions, including content provenance authentication.
Just because you can do something doesn’t mean you should — without careful consideration of use cases, consequences and implementation. AI has enormous potential but also brings a whole new set of challenges and, potentially, existential risks. The idea that these two seemingly opposite things can both be true underscores the weight of responsibility we have to get this right.
***
Simon Erskine Locke is founder & CEO of CommunicationsMatch™ and cofounder & CEO of Tauth.io, which provides trusted content authentication based on C2PA standards. He is a former head of communications functions at Prudential Financial, Morgan Stanley and Deutsche Bank, and founder of communications consultancies.