Tools & Platforms
EvenUp Announces AI Playbooks and Voice Agent to Redefine Case Analysis and Client Communication
The category leader in AI for personal injury expands its platform with two new products while continuing its rapid pace of innovation with upgrades to the AI Drafts Suite.
SAN FRANCISCO, July 08, 2025--(BUSINESS WIRE)--EvenUp, the category leader in AI for personal injury (PI) law, today announced two new products that push the boundaries of legal technology: AI Playbooks and Voice Agent, alongside major enhancements to its rapidly growing AI Drafts™ product suite.
Trusted by over 1,500 of the nation’s most successful PI firms—including Omega Law Group and Sweet James—and powered by the industry’s largest personal injury dataset, EvenUp’s innovative AI products streamline case analysis and client communication, ensuring no important detail is missed, freeing up case managers to focus on strategic work, and enabling firms to move faster from intake to resolution.
With these releases, EvenUp is setting a new standard for how PI firms operate—streamlining casework, generating over $7B in claims, and unlocking hundreds of millions in missed value.
AI Playbooks: Put Case Analysis on Autopilot at Every Stage of the Case
Now part of The Claims Intelligence Platform™, AI Playbooks reads your case files and pulls out the insights you need every time you upload new documents—equipping firms to make faster, smarter decisions across their entire caseload. With staff often juggling 80+ cases each, AI Playbooks uses EvenUp’s system of specialized AI models, Piai™, to scan records, spot patterns, and highlight facts that matter most to firms. It replaces hours of manual file review with up-to-date case analysis.
AI Playbooks helps firms:
- Drive proactive case decisions by automatically flagging issues that could compromise case value, such as liability disputes, prior injuries, or conflicting testimony.
- Identify high-value opportunities with AI that proactively flags cases involving TBI indicators, commercial defendants, or DUI, so firms can staff and prioritize accordingly.
- Standardize case evaluation with automated analysis that consistently applies firm policies.
Whether firms are evaluating a case at intake, preparing for negotiation, or handing off a case to litigation, AI Playbooks makes it easier to manage critical workflow processes and protect value at every stage.
“AI Playbooks is a powerful tool that is transforming how we analyze cases by proactively surfacing the exact criteria we need to assess and compare large inventories of cases. EvenUp reviews vast volumes of documents with striking speed and nuance, backing every finding with clear citations our team can trust. They are delivering the speed, precision, and scale we need to stay ahead.” – Max Schuver, Sr. Mass Torts Counsel, Walkup
AI-generated child sexual abuse videos surging online, watchdog says
The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology.
The Internet Watch Foundation said AI videos of abuse had “crossed the threshold” of being near-indistinguishable from “real imagery” and had sharply increased in prevalence online this year.
In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 illegal AI-made videos of child sexual abuse material (CSAM), compared with two in the same period last year.
The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.
The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles.
“It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.
The videos were found amid a 400% increase in URLs featuring AI-made child sexual abuse material in the first six months of 2025. The IWF received reports of 210 such URLs, up from 42 in the same period last year, with each webpage featuring hundreds of images, including the surging video content.
The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for “something new and better to come along”.
IWF analysts said the images appeared to have been created by taking a freely available basic AI model and “fine-tuning” it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said.
The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.
Derek Ray-Hill, the IWF’s interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online.
“There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.
The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.
The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content. People found to have breached the new law will face up to five years in jail.
Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.
Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that “we tackle child sexual abuse online as well as offline”.
AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an “indecent photograph or pseudo-photograph” of a child.
Top AI performers are burned out and eyeing a better workplace

Here’s data that’s sure to get the attention of HR leaders: The employees delivering your biggest AI-driven productivity gains are twice as likely to quit as everyone else.
A new study by the Upwork Research Institute, the research arm of the remote job platform Upwork, reveals that nearly 9 in 10 top AI performers are burned out and eyeing the exits. Meanwhile, more than two-thirds say they trust the technology more than they do their coworkers — with 64% finding machines to be more polite and empathetic.
AI is “unlocking speed and scale but also reshaping how we collaborate and connect as humans,” said Kelly Monahan, managing director of the Upwork Research Institute. “The productivity paradox we’re seeing may be a natural growing pain of traditional work systems, ones that reward output with AI, but overlook the human relationships behind that work.”
According to the study, based on the perspectives of 2,500 workers globally, the emotional dimension around AI runs deeper than many employers may realize. Nearly half of those surveyed say “please” and “thank you” with every request they submit to AI, while 87% phrase their requests as if speaking to a human—an anthropomorphizing of AI tools indicating that employees are forming more genuine emotional connections with their digital assistants than with their colleagues.
Colin Rocker, a content creator specializing in career development, makes the point that “AI will always be the most agreeable coworker, but we have to also be mindful that it’s a system that, by nature, will agree with and amplify whatever is said to it.”
The study also revealed a disconnect between individual AI adoption and organizational strategy. While employees are racing ahead with AI integration, 62% of high-performing AI users say they don’t understand how their daily AI use aligns with company goals. That misalignment creates a dangerous scenario where the most productive employees feel isolated from the broader organizational mission, even as they’re delivering exceptional results.
The contrast with freelancers, meanwhile, is illuminating. Unlike full-time employees, independent contractors appear to thrive alongside AI, with nearly nine in 10 reporting a positive impact on their work. These workers use AI primarily as a learning partner, with 90% saying it helps them acquire new skills faster and 42% crediting it with helping them specialize in a particular niche — suggesting that the problem is not the technology itself but, rather, how it’s being integrated into traditional organizational structures.
Ultimately, the survey suggests, the path to sustainable, AI-empowered businesses requires reimagining work as a collaboration between the technology and the people who use it; cultivating flexible and resilient talent ecosystems; and redefining AI strategies around relationships, emerging AI roles and responsible governance.
To lead effectively in the age of AI, Monahan suggests that employers “need to redesign work in ways that support not just efficiency but also well-being, trust and long-term resilience.”
In test-obsessed Korea, AI boom arrives in exams, ahead of the technology itself
Over 500 new AI certifications have sprung up in Korea in two years, but few are trusted or even taken
A wave of artificial intelligence certifications has flooded the market in South Korea over the past two years.
But according to government data, most of these tests exist only on paper and have never been taken by a single person.
As of Wednesday, there were 505 privately issued AI-related certifications registered with the Korea Research Institute for Professional Education and Training, a state-funded body under the Prime Minister’s Office.
This is nearly five times the number recorded in 2022, before tools like ChatGPT captured global attention. But more than 90 percent of those certifications had zero test-takers as of late last year, the institute’s own data shows.
Many of the credentials are loosely tied to artificial intelligence in name only. Among recent additions are titles like “AI Brain Fitness Coach,” “AI Art Storybook Author,” and “AI Trainer,” which often have no connection to real AI technology.
Only one of the 505 AI-related certifications — KT’s AICE exam — has received official recognition from the South Korean government. The rest have been registered by individuals, companies, or private organizations, with no independent oversight or quality control.
In 2024, just 36 of these certifications administered an exam at all. Only two drew more than 1,000 applicants. Fourteen had a perfect 100 percent pass rate, and 20 were removed from the registry that same year.
For test organizers, the appeal is often financial. One popular certification that attracted around 500 candidates last year charged up to 150,000 won ($110) per person, including test fees and course materials. The content reportedly consisted of basic instructions on how to use existing tools like ChatGPT or Stable Diffusion. Some issuers even promote these credentials as qualifications to teach AI to students or the general public.
The people signing up tend to be those anxious about keeping up in an AI-driven world. A survey released this week by education firm Eduwill found that among 391 South Koreans in their 20s to 50s, 39.1 percent said they planned to earn an AI certificate to prepare for the digital future. Others (27.6 percent) said they were taking online AI courses or learning how to use automation tools like Notion AI.
Industry officials warn that most of these certificates hold little value in the job market. Jeong Sung-hoon, communications manager at Seoul-based AI startup Wrtn, told The Korea Herald that these credentials are often “window dressing” for resumes.
Wrtn ranked second in generative AI app usage among Koreans under 30 this March, according to local mobile analytics firm Wiseapp.
“Most private AI certifications aren’t taken seriously by hiring managers,” Jeong said. “Even for non-technical jobs like communications or marketing, what matters more is whether someone actually understands the AI space. That can’t be faked with a certificate.”