AI Insights
AI hallucination in Mike Lindell case serves as a stark warning : NPR
MyPillow CEO Mike Lindell arrives at a gathering of supporters of Donald Trump near Trump’s residence in Palm Beach, Fla., on April 4, 2023. On July 7, 2025, Lindell’s lawyers were fined thousands of dollars for submitting a legal filing riddled with AI-generated mistakes.
Octavio Jones/Getty Images
A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell in a Colorado defamation case to pay $3,000 each after they used artificial intelligence to prepare a court filing filled with a host of mistakes and citations of cases that didn’t exist.
Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed a document in February containing more than two dozen mistakes, including hallucinated cases, meaning fake cases made up by AI tools, Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday.
“Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it,” Wang wrote in her decision. “Indeed, federal courts rely upon the assistance of attorneys as officers of the court for the efficient and fair administration of justice.”
The use of AI by lawyers in court is not, in itself, illegal. But Wang found the lawyers violated a federal rule that requires lawyers to certify that claims they make in court are “well grounded” in the law. Turns out, fake cases don’t meet that bar.
Kachouroff and DeMaster didn’t respond to NPR’s request for comment.
The error-riddled court filing was part of a defamation case involving Lindell, the MyPillow creator, President Trump supporter and conspiracy theorist known for spreading lies about the 2020 election. Last month, Lindell lost the case, which was argued before Wang. He was ordered to pay Eric Coomer, a former employee of Denver-based Dominion Voting Systems, more than $2 million after claiming Coomer and Dominion used election equipment to flip votes to former President Joe Biden.
The financial sanctions, and reputational damage, for the two lawyers are a stark reminder for attorneys who, like many others, are increasingly using artificial intelligence in their work, according to Maura Grossman, a professor at the University of Waterloo’s David R. Cheriton School of Computer Science and an adjunct law professor at York University’s Osgoode Hall Law School.
Grossman said the $3,000 fines “in the scheme of things was reasonably light, given these were not unsophisticated lawyers who just really wouldn’t know better. The kind of errors that were made here … were egregious.”
There have been a host of high-profile cases in which the use of generative AI has gone wrong for lawyers and others filing legal cases, Grossman said. It’s become a familiar trend in courtrooms across the country: Lawyers are sanctioned for submitting motions and other court filings filled with citations to cases that are not real and were created by generative AI.
Damien Charlotin tracks court cases from around the world in which generative AI produced hallucinated content and a court or tribunal specifically levied warnings or other punishments. He had identified 206 such cases as of Thursday, and that’s only since the spring, he told NPR. There were very few cases before April, he said, but in the months since, cases have been “popping up every day.”
Charlotin’s database doesn’t cover every single case where there is a hallucination. But he said, “I suspect there are many, many, many more, but just a lot of courts and parties prefer not to address it because it’s very embarrassing for everyone involved.”
What went wrong in the MyPillow filing
The $3,000 fine for each attorney, Judge Wang wrote in her order this week, is “the least severe sanction adequate to deter and punish defense counsel in this instance.”
The judge wrote that the two attorneys didn’t provide any proper explanation of how these mistakes happened, “most egregiously, citation of cases that do not exist.”
Wang also said Kachouroff and DeMaster were not forthcoming when questioned about whether the motion was generated using artificial intelligence.
Kachouroff, in response, said in court documents that it was DeMaster who “mistakenly filed” a draft version of the filing rather than the correct, more carefully edited copy that didn’t include hallucinated cases.
But Wang wasn’t persuaded that the submission of the filing was an “inadvertent error.” In fact, she called out Kachouroff for not being honest when she questioned him.
“Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence,” Wang wrote.
Grossman advised other lawyers who find themselves in the same position as Kachouroff not to attempt a cover-up, but to fess up to the judge as soon as possible.
“You are likely to get a harsher penalty if you don’t come clean,” she said.
An illustration shows ChatGPT, artificial intelligence software that generates human-like conversation, in February 2023 in Lierde, Belgium. Experts say AI can be incredibly useful for lawyers; they just have to verify their work.
Nicolas Maeterlinck/BELGA MAG/AFP via Getty Images
Trust and verify
Charlotin has found three main issues when lawyers, or others, use AI to file court documents. The first is fake cases created, or hallucinated, by AI chatbots.
The second is when AI invents a fake quote from a real case.
The third is harder to spot, he said: the citation and case name are correct, but the case doesn’t actually support the legal argument it’s being cited for.
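Grossman’s advice to “trust nothing, verify everything” maps directly onto those three failure modes. The sketch below is a minimal, hypothetical illustration of such a check in Python: the case names, citations, and quoted text are invented stand-ins, and the lookup table is a placeholder for a real legal database that an actual workflow would have to consult.

```python
# A hypothetical "trust nothing, verify everything" check for AI-drafted filings.
# VERIFIED_OPINIONS is a stand-in for a real legal database; the case names,
# citations, and quotes below are invented examples, not real authorities.

from dataclasses import dataclass


@dataclass
class Citation:
    case_name: str      # e.g. "Smith v. Jones" (hypothetical)
    reporter_cite: str  # e.g. "123 F.4th 456" (hypothetical)
    quote: str          # language the filing attributes to the case


VERIFIED_OPINIONS = {
    "123 F.4th 456": {
        "case_name": "Smith v. Jones",
        "text": "Summary judgment is appropriate only when no genuine dispute "
                "of material fact exists.",
    },
}


def check_citation(c: Citation) -> str:
    """Flag the three failure modes: fake case, fake quote, unsupported argument."""
    opinion = VERIFIED_OPINIONS.get(c.reporter_cite)
    if opinion is None or opinion["case_name"].lower() != c.case_name.lower():
        return "FAKE CASE: no verified opinion matches this citation"
    if c.quote and c.quote not in opinion["text"]:
        return "FAKE QUOTE: the quoted language does not appear in the opinion"
    # The third failure mode (a real case that doesn't support the argument)
    # still requires a human to read the opinion.
    return "CASE EXISTS: confirm manually that the holding supports the argument"


if __name__ == "__main__":
    filing = [
        Citation("Smith v. Jones", "123 F.4th 456",
                 "Summary judgment is appropriate only when no genuine dispute "
                 "of material fact exists."),
        Citation("Acme Corp. v. Doe", "999 F.2d 1", "Any error requires reversal."),
    ]
    for c in filing:
        print(f"{c.case_name}, {c.reporter_cite}: {check_citation(c)}")
```

Even a check like this only catches the first two failure modes automatically; the third still comes down to a lawyer actually reading the cited opinion.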
This case involving the MyPillow lawyers is just a microcosm of a growing dilemma: how courts and lawyers can strike a balance between welcoming life-changing technology and using it responsibly in court. The use of AI is growing faster than authorities can build guardrails around it.
It’s even being used to present evidence in court, Grossman said, and to provide victim impact statements.
Earlier this year, a judge on a New York state appeals court was furious after a plaintiff, representing himself, tried to use a younger, more handsome AI-generated avatar to argue his case for him, CNN reported. That was swiftly shut down.
Despite the cautionary tales that make headlines, both Grossman and Charlotin view AI as an incredibly useful tool for lawyers and one they predict will be used in court more, not less.
Rules over how best to use AI differ from one jurisdiction to the next. Judges have created their own standards, requiring lawyers and those representing themselves in court to submit disclosures when AI has been used. In a few instances, judges in North Carolina, Ohio, Illinois and Montana have established various prohibitions on the use of AI in their courtrooms, according to a database created by the law firm Ropes & Gray.
The American Bar Association, the national representative of the legal profession, issued its first ethical guidance on the use of AI last year. The organization warned that because these tools “are subject to mistakes, lawyers’ uncritical reliance on content created by a [generative artificial intelligence] tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties.”
It continued, “Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation …”
The Advisory Committee on Evidence Rules, the group responsible for studying and recommending changes to the national rules of evidence for federal courts, has been slow to act and is still working on amendments for the use of AI for evidence.
In the meantime, Grossman has this suggestion for anyone who uses AI: “Trust nothing, verify everything.”
AI Insights
Ramp Debuts AI Agents Designed for Company Controllers
Financial operations platform Ramp has debuted its first artificial intelligence (AI) agents.
AI Insights
How automation is using the latest technology across various sectors
A majority of small businesses are using artificial intelligence and finding it can save time and money.
Artificial Intelligence and automation are often used interchangeably. While the technologies are similar, the concepts are different. Automation is often used to reduce human labor for routine or predictable tasks, while A.I. simulates human intelligence that can eventually act independently.
“Artificial intelligence is a way of making workers more productive, and whether or not that enhanced productivity leads to more jobs or less jobs really depends on a field-by-field basis,” said senior advisor Gregory Allen with the Wadhwani A.I. center at the Center for Strategic and International Studies. “Past examples of automation, such as agriculture, in the 1920s, roughly one out of every three workers in America worked on a farm. And there was about 100 million Americans then. Fast forward to today, and we have a country of more than 300 million people, but less than 1% of Americans do their work on a farm.”
A similar trend happened throughout the manufacturing sector. At the end of 2000, there were more than 17 million manufacturing workers, according to the U.S. Bureau of Labor Statistics and the Federal Reserve Bank of St. Louis. As of June, there are 12.7 million. Research from the University of Chicago found that while automation had little effect on overall employment, robots did impact the manufacturing sector.
“Tractors made farmers vastly more productive, but that didn’t result in more farming jobs. It just resulted in much more productivity in agriculture,” Allen said.
Researchers are able to analyze the performance of Major League Baseball pitchers by using A.I. algorithms and stadium camera systems. (University of Waterloo / Fox News)
According to our Fox News Polling, just 3% of voters expressed fear over A.I.’s threat to jobs when asked, without a listed set of responses, about their first reaction to the technology. Overall, 43% gave negative reviews, while 26% reacted positively.
Robots now are being trained to work alongside humans. Some have been built to help with household chores, address worker shortages in certain sectors and even participate in robotic sporting events.
The most recent data from the International Federation of Robotics found more than 4 million robots working in factories around the world in 2023. Seventy percent of new robots deployed that year began work alongside humans in Asia. Many of those robots now incorporate artificial intelligence to enhance productivity.
“We’re seeing a labor shortage actually in many industries, automotive, transportation and so on, where the older generation is going into retirement. The middle generation is not interested in those tasks anymore and the younger generation for sure wants to do other things,” Arnaud Robert with Hexagon Robotics Division told Reuters.
Hexagon is developing a robot called AEON. The humanoid is built to work in live industrial settings and has an A.I.-driven system with spatial intelligence. Its wheels help it move four times faster than humans typically walk. The bot can also go up steps while mapping its surroundings with 22 sensors.
Researchers are able to create 3D models of pitchers, which athletes and trainers could study from multiple angles. (University of Waterloo)
“What you see with technology waves is that there is an adjustment that the economy has to make, but ultimately, it makes our economy more dynamic,” White House A.I. and Crypto Czar David Sacks said. “It increases the wealth of our economy and the size of our economy, and it ultimately improves productivity and wages.”
Driverless cars are also using A.I. to safely hit the road. Waymo uses detailed maps and real-time sensor data to determine its location at all times.
“The more they send these vehicles out with a bunch of sensors that are gathering data as they drive every additional mile, they’re creating more data for that training data set,” Allen said.
Even major league sports are using automation, and in some cases artificial intelligence. Researchers at the University of Waterloo in Canada are using A.I. algorithms and stadium camera systems to analyze Major League Baseball pitcher performance. The Baltimore Orioles jointly funded the project, called PitcherNet, which could help improve form and prevent injuries. Using Hawk-Eye Innovations camera systems and smartphone video, researchers created 3D models of pitchers that athletes and trainers could study from multiple angles. Unlike most video, the models remove blurriness, giving a clearer view of the pitcher’s movements. Researchers are also exploring using the PitcherNet technology in batting and other sports like hockey and basketball.
Overview of PitcherNet system graphics analyzing a pitcher’s baseball throw. (University of Waterloo)
The same technology is also being used as part of testing for an Automated Ball-Strike System, or ABS. Triple-A minor league teams have been using the so-called robot umpires for the past few seasons. Teams tested both a format in which the technology called every pitch and one in which it was used as a challenge system. Major League Baseball also began testing the challenge system in 13 of its spring training parks across Florida and Arizona this February and March.
Each team started a game with two challenges. The batter, pitcher and catcher were the only players who could contest a ball-strike call. Teams lost a challenge if the umpire’s original call was confirmed. The system allowed umpires to keep their jobs while making strike zone calls slightly more accurate. According to MLB, just 2.6% of calls were challenged throughout spring training games that incorporated ABS, and 52.2% of those challenges were overturned. Catchers had the highest success rate at 56%, followed by batters at 50% and pitchers at 41%.
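The challenge mechanics are simple enough to sketch in a few lines of code. The Python below is an illustrative model only, not MLB’s actual system: it encodes the rules as described here (two challenges per team per game; only the batter, pitcher or catcher may challenge; a challenge is lost only when the original call is confirmed) and reuses the spring-training overturn rates quoted above as rough probabilities.

```python
# Illustrative model of the ABS challenge rules described above; not MLB's
# actual implementation. Overturn rates are the spring-training figures
# reported by MLB, reused here as rough probabilities.

import random

OVERTURN_RATE = {"catcher": 0.56, "batter": 0.50, "pitcher": 0.41}


class TeamChallenges:
    def __init__(self, starting: int = 2):
        # Each team starts a game with two challenges.
        self.remaining = starting

    def challenge(self, role: str) -> str:
        if role not in OVERTURN_RATE:
            return "only the batter, pitcher or catcher may contest a call"
        if self.remaining == 0:
            return "no challenges remaining"
        if random.random() < OVERTURN_RATE[role]:
            # The article implies a successful challenge is not lost.
            return "call overturned; challenge retained"
        self.remaining -= 1  # a confirmed call costs the team a challenge
        return f"call confirmed; {self.remaining} challenge(s) left"


if __name__ == "__main__":
    home = TeamChallenges()
    for role in ("catcher", "batter", "shortstop", "pitcher"):
        print(f"{role}: {home.challenge(role)}")
```

One caveat on the sketch: because the article only says a challenge is lost when the call is confirmed, it assumes successful challenges are retained; if that reading is wrong, the decrement would simply apply on every challenge.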
Triple-A announced last summer it would shift to a full challenge system. MLB commissioner Rob Manfred said in June that MLB could incorporate the automated system into its regular season as soon as 2026. The Athletic reports that major league teams would use the same challenge system from spring training, with human umpires still making the majority of the calls.
Many companies across other sectors agree that machines should not go unsupervised.
“I think that we should always ensure that AI remains under human control,” Microsoft Vice Chair and President Brad Smith said. “One of the first proposals we made early in 2023 was to ensure that A.I. always has an off switch, that it has an emergency brake. Now that’s the way high-speed trains work. That’s the way the school buses we put our children on work. Let’s ensure that AI works this way as well.”
AI Insights
Artificial intelligence predicts which South American cities will disappear by 2100
The effects of global warming and climate change are being felt around the world. Extreme weather events are expected to become more frequent, from droughts and floods wreaking havoc on communities to blistering heatwaves and bone-chilling cold snaps.
While these will affect localized areas temporarily, one inescapable consequence of rising temperatures for coastal communities around the globe is rising sea levels. This phenomenon will have even more far-reaching effects, displacing hundreds of millions of people as coastal communities are inundated by water, some permanently.
These South American cities will disappear
While there is no doubt that sea levels will rise, predicting exactly how much they will in any given location is a tricky business. This is because oceans don’t rise uniformly as more water is added to the total volume.
However, according to models from the Intergovernmental Panel on Climate Change (IPCC), the most optimistic scenario is a rise of between 11 inches and almost 22 inches, if we can curb carbon emissions and keep the temperature rise to 1.5°C by 2050. The worst-case scenario would be six and a half feet by the end of the century.
Caracol Radio in Colombia asked various artificial intelligence systems which cities in South America would disappear due to rising sea levels within the next 200 years. These are the ones most at risk according to their findings:
- Santos, Brazil
- Maceió, Brazil
- Florianópolis, Brazil
- Mar del Plata, Argentina
- Barranquilla, Colombia
- Lima, Peru
- Cartagena, Colombia
- Paramaribo, Suriname
- Georgetown, Guyana
According to modeling by the non-profit Climate Central, the last two will be underwater by the end of the century, along with numerous other communities in low-lying coastal areas.
Their simulator only makes forecasts up to the year 2100, as the image above shows for areas along the northeastern coast of South America, including Paramaribo and Georgetown.