A recent high-profile case of AI hallucination serves as a stark warning

A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell in a Colorado defamation case to pay $3,000 each after they used artificial intelligence to prepare a court filing riddled with errors, including citations to cases that don’t exist.

Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed a document in February containing more than two dozen mistakes, including hallucinated cases (fake cases fabricated by AI tools), Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday.

“Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it,” Wang wrote in her decision. “Indeed, federal courts rely upon the assistance of attorneys as officers of the court for the efficient and fair administration of justice.”

The use of AI by lawyers in court is not itself illegal. But Wang found the lawyers violated a federal rule that requires attorneys to certify that claims they make in court are “well grounded” in the law. Fake cases, it turns out, don’t meet that bar.

Kachouroff and DeMaster didn’t respond to NPR’s request for comment.

The error-riddled court filing was part of a defamation case involving Lindell, the MyPillow creator, President Trump supporter and conspiracy theorist known for spreading lies about the 2020 election. Last month, Lindell lost the case, which was argued before Wang. He was ordered to pay Eric Coomer, a former employee of Denver-based Dominion Voting Systems, more than $2 million after claiming Coomer and Dominion used election equipment to flip votes to former President Joe Biden.

The financial sanctions and reputational damage for the two lawyers are a stark reminder for attorneys who, like many others, are increasingly using artificial intelligence in their work, according to Maura Grossman, a professor at the University of Waterloo’s David R. Cheriton School of Computer Science and an adjunct law professor at York University’s Osgoode Hall Law School.

Grossman said the $3,000 fines “in the scheme of things was reasonably light, given these were not unsophisticated lawyers who just really wouldn’t know better. The kind of errors that were made here … were egregious.”

There have been a host of high-profile cases in which the use of generative AI has gone wrong for lawyers and others filing legal documents, Grossman said. It has become a familiar pattern in courtrooms across the country: lawyers sanctioned for submitting motions and other court filings filled with case citations fabricated by generative AI.

Damien Charlotin tracks court cases from around the world in which generative AI produced hallucinated content and a court or tribunal levied warnings or other punishments. He had identified 206 such cases as of Thursday — and that’s only since the spring, he told NPR. There were very few cases before April, he said, but in the months since, they have been “popping up every day.”

Charlotin’s database doesn’t cover every single case where there is a hallucination. But he said, “I suspect there are many, many, many more, but just a lot of courts and parties prefer not to address it because it’s very embarrassing for everyone involved.”

What went wrong in the MyPillow filing

The $3,000 fine for each attorney, Judge Wang wrote in her order this week, is “the least severe sanction adequate to deter and punish defense counsel in this instance.”

The judge wrote that the two attorneys didn’t provide any proper explanation of how these mistakes happened, “most egregiously, citation of cases that do not exist.”

Wang also said Kachouroff and DeMaster were not forthcoming when questioned about whether the motion was generated using artificial intelligence.

Kachouroff, in response, said in court documents that it was DeMaster who “mistakenly filed” a draft of the filing rather than the correct version, which had been more carefully edited and didn’t include hallucinated cases.

But Wang wasn’t persuaded that the submission of the filing was an “inadvertent error.” In fact, she called out Kachouroff for not being honest when she questioned him.

“Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence,” Wang wrote.

Grossman advised lawyers who find themselves in the same position as Kachouroff not to attempt a cover-up, but to fess up to the judge as soon as possible.

“You are likely to get a harsher penalty if you don’t come clean,” she said.

Trust and verify

Charlotin has found three main issues when lawyers, or others, use AI to prepare court documents. The first is fake cases created, or hallucinated, by AI chatbots.

The second is when AI fabricates a quote from a real case.

The third is harder to spot, Charlotin said: the citation and case name are correct, but the legal argument attributed to the case isn’t actually supported by it.

The case involving the MyPillow lawyers is a microcosm of a growing dilemma: how courts and lawyers can strike a balance between welcoming a transformative technology and using it responsibly in court. The use of AI is growing faster than authorities can build guardrails around it.

It’s even being used to present evidence in court, Grossman said, and to provide victim impact statements.

Earlier this year, a judge on a New York state appeals court was furious after a plaintiff, representing himself, tried to use a younger, more handsome AI-generated avatar to argue his case for him, CNN reported. That was swiftly shut down.

Despite the cautionary tales that make headlines, both Grossman and Charlotin view AI as an incredibly useful tool for lawyers and one they predict will be used in court more, not less.

Rules over how best to use AI differ from one jurisdiction to the next. Judges have created their own standards, requiring lawyers and those representing themselves in court to submit disclosures when AI has been used. In a few instances, judges in North Carolina, Ohio, Illinois and Montana have established various prohibitions on the use of AI in their courtrooms, according to a database created by the law firm Ropes & Gray.

The American Bar Association, the national representative of the legal profession, issued its first ethical guidance on the use of AI last year. The organization warned that because these tools “are subject to mistakes, lawyers’ uncritical reliance on content created by a [generative artificial intelligence] tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties.”

It continued, “Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation …”

The Advisory Committee on Evidence Rules, the group responsible for studying and recommending changes to the national rules of evidence for federal courts, has been slow to act and is still working on amendments addressing AI-generated evidence.

In the meantime, Grossman has this suggestion for anyone who uses AI: “Trust nothing, verify everything.”




How IBM helped Lockheed Martin streamline its data landscape and fuel its AI







Indonesia on Track to Achieve Sovereign AI Goals With NVIDIA, Cisco and IOH


As one of the world’s largest emerging markets, Indonesia is making strides toward its “Golden 2045 Vision” — an initiative tapping digital technologies and bringing together government, enterprises, startups and higher education to enhance productivity, efficiency and innovation across industries.

Building out the nation’s AI infrastructure is a crucial part of this plan.

That’s why Indonesian telecommunications leader Indosat Ooredoo Hutchison, also known as Indosat or IOH, has partnered with Cisco and NVIDIA to support the establishment of Indonesia’s AI Center of Excellence (CoE). Led by the Ministry of Communications and Digital Affairs, known as Komdigi, the CoE aims to advance secure technologies, cultivate local talent and foster innovation through collaboration with startups.

Indosat Ooredoo Hutchison President Director and CEO Vikram Sinha, Cisco Chair and CEO Chuck Robbins and NVIDIA Senior Vice President of Telecom Ronnie Vasishta today detailed the purpose and potential of the CoE during a fireside chat at Indonesia AI Day, a conference focused on how artificial intelligence can fuel the nation’s digital independence and economic growth.

As part of the CoE, a new NVIDIA AI Technology Center will offer research support, NVIDIA Inception program benefits for eligible startups, and NVIDIA Deep Learning Institute training and certification to upskill local talent.

“With the support of global partners, we’re accelerating Indonesia’s path to economic growth by ensuring Indonesians are not just users of AI, but creators and innovators,” Sinha said.

“The AI era demands fundamental architectural shifts and a workforce with digital skills to thrive,” Robbins said. “Together with Indosat, NVIDIA and Komdigi, Cisco will securely power the AI Center of Excellence — enabling innovation and skills development, and accelerating Indonesia’s growth.”

“Democratizing AI is more important than ever,” Vasishta added. “Through the new NVIDIA AI Technology Center, we’re helping Indonesia build a sustainable AI ecosystem that can serve as a model for nations looking to harness AI for innovation and economic growth.”

Making AI More Accessible

The Indonesia AI CoE will comprise an AI factory that features full-stack NVIDIA AI infrastructure — including NVIDIA Blackwell GPUs, NVIDIA Cloud Partner reference architectures and NVIDIA AI Enterprise software — as well as an intelligent security system powered by Cisco.

Called the Sovereign Security Operations Center Cloud Platform, the Cisco-powered system combines AI-based threat detection, localized data control and managed security services for the AI factory.

Building on the sovereign AI initiatives Indonesia’s technology leaders announced with NVIDIA last year, the CoE will bolster the nation’s AI strategy through four core pillars:

1) Sovereign Infrastructure: Establishing AI infrastructure for secure, scalable, high-performance AI workloads tailored to Indonesia’s digital ambitions.

2) Secure AI Workloads: Using Cisco’s intelligent infrastructure to connect and safeguard the nation’s digital assets and intellectual property.

3) AI for All: Giving hundreds of millions of Indonesians access to AI by 2027, breaking down geographical barriers and empowering developers across the nation.

4) Talent and Development Ecosystem: Aiming to equip 1 million people with digital skills in networking, security and AI by 2027.

Some 28 independent software vendors and startups are already using IOH’s NVIDIA-powered AI infrastructure to develop cutting-edge technologies that can speed and ease workflows across higher education and research, food security, bureaucratic reform, smart cities and mobility, and healthcare.

With Indosat’s coverage across the archipelago, the company can reach hundreds of millions of Bahasa Indonesia speakers with its large language model (LLM)-powered applications.

For example, using Indosat’s Sahabat-AI collection of Bahasa Indonesia LLMs, the Indonesian government and Hippocratic AI are collaborating on an AI agent system that provides preventative outreach, such as helping women subscribers over the age of 50 schedule a mammogram. This can support earlier detection of breast cancer and other health complications across the population.

Sahabat-AI also enables Indosat’s AI chatbot to answer queries in Indonesian for various citizen and resident services. A person could ask about processes for updating their national identification card, as well as about tax rates, payment procedures, deductions and more.

In addition, a government-led forum is developing trustworthy AI frameworks tailored to Indonesian values for the safe, responsible development of artificial intelligence and related policies.

Looking forward, Indosat and NVIDIA plan to deploy AI-RAN technologies that can reach even broader audiences using AI over wireless networks.





Silicon Valley eyes a governance-lite gold rush


Andreessen Horowitz has had enough of Delaware and is moving a unit’s incorporation out west.


