AI Research
Can artificial intelligence be trusted for legal research? Lessons from Ayinde

For the past couple of years, artificial intelligence (AI) has been touted as the solution to everything. Its use in legal disputes presents a real opportunity, but problems have already emerged, prompting various guidelines and rules.
The first thing to understand is that not everything is generative AI (genAI), the kind you prompt and which generates an output. It is the generative product of AI that has caused most concern in legal proceedings. We have already seen embarrassment in US cases: in May 2023, a New York lawyer used an AI tool, ChatGPT, for legal research, but the results included made-up cases, and Judge Castel demanded that the legal team explain itself. Canada has had issues too. In April this year, in the case of Hussein v. Canada, the lawyer apparently relied on Visto.ai, a legal genAI tool tailored to Canadian immigration cases, yet still submitted fake cases and also cited real cases for the wrong propositions. Canada requires disclosure of the use of AI, but that has not stopped these mistakes.
The judge ruling on the case commented:
“[39] I do not accept that this is permissible. The use of generative artificial intelligence is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human. The Court cannot be expected to spend time hunting for cases which do not exist or considering erroneous propositions of law.”
“[40] In fact, the two case hallucinations were not the full extent of the failure of the artificial intelligence product used. It also hallucinated the proper test for the admission on judicial review of evidence not before the decision-maker and cited, as authority, a case which had no bearing on the issue at all. To be clear, this was not a situation of a stray case with a variation of the established test but, rather, an approach similar to the test for new evidence on appeal. As noted above, the case relied upon in support of the wrong test (Cepeda-Gutierrez) has nothing to do with the issue. I note in passing that the case comprises 29 paragraphs and would take only a few minutes to review.”
The use of AI in English courts
English courts received guidance for judges on the use of AI in December 2023. One of the key warnings to judges was that the ‘[C]urrently available LLMs appear to have been trained on material published on the internet. Their ‘view’ of the law is often based heavily on US law, although some do purport to be able to distinguish between that and English law.’
The English courts do not ban the use of AI, but judges and lawyers alike have been told clearly that they are responsible for the material produced in their name. In England, AI can be used, but the human user is responsible for its accuracy and answers for any errors. In November 2023, the Solicitors Regulation Authority (SRA) issued guidance on AI use; the Bar Council followed with its own guidance in January 2024; and in 2025 the Chartered Institute of Arbitrators issued guidance as well. The common thread in all of them is that humans must check that the output is correct!
England has been looking to technology, and potentially AI, to help with cases for some time. In March 2024, Lord Justice Birss explained that algorithm-based digital decision making was already working behind the scenes in the justice system: an algorithmic formula was being applied at the online money claims service where defendants accept a debt but ask for time to pay. Looking to the future, Birss LJ said: “AI used properly has the potential to enhance the work of lawyers and judges enormously.” In October 2024, the Lord Chancellor and Secretary of State for Justice, Shabana Mahmood MP, and the Lady Chief Justice, The Right Honourable the Baroness Carr of Walton-on-the-Hill, also echoed the potential of technology for the future of the courts and justice system.
Nothing is perfect, though, and alongside accuracy there is concern about the ethics of AI use. On ethical AI and international standards, the UK promotes the Ethical AI Initiative and the international standard ISO 42001, the AI management system, which may at some point be adopted as a standard in English procedure. In April 2025, the judiciary updated its guidance to judicial office holders on the use of AI. Yet all this guidance seems to have gone unheeded: the need for a clearer understanding of the rules, and for the policing of lawyers, is plain following the case of Ayinde, R (On the Application Of) v London Borough of Haringey [2025] EWHC 1383 (Admin) (6 June 2025).
The case
This important case was heard by the President of the King’s Bench Division and Mr Justice Johnson. It brought together two cases in which lawyers had used genAI to produce written legal arguments or witness statements which were not then checked, so that false information ended up before the court.
The facts of these cases raise serious issues about the competence and conduct of the lawyers concerned. Some consider that we must therefore also consider the adequacy of the relevant training, supervision and regulation, although perhaps it is easier to ask: how do we know that a lawyer takes their duties seriously?
The importance of the case is perhaps best summarised by this quote from the judgment:
“Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained. As Dias J said when referring the case of Al-Haroun to this court, the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.”
Clearly, lawyers, whether barristers or solicitors, need to keep their existing duties in mind. Under the SRA’s Rules of Conduct, solicitors are under a duty not to mislead the court or others, including by omission (Rule 1.4). They are under a duty only to make assertions or put forward statements, representations or submissions to the court or others which are properly arguable (Rule 2.4). Further relevant rules include the duty not to waste the court’s time (Rule 2.6) and the duty to draw the court’s attention to relevant cases which are likely to have a material effect on the outcome (Rule 2.7). Most importantly, a solicitor remains accountable for the work (Rule 3.5).
The court has a range of sanctions if a lawyer breaches the rules, including the public admonition of the lawyer, the imposition of a costs order, the imposition of a wasted costs order, striking out a case, referral to a regulator, the initiation of contempt proceedings, and a referral to the police, if the court thinks that is warranted.
Placing false material before the court with the intention that the court treat it as genuine may, depending on the person’s state of knowledge, amount to contempt. The difficulty lies in what level of knowledge is needed and how it will be proven.
In the case of Ayinde, it was submitted that the threshold for contempt proceedings was not met, because counsel did not know the citations were false.
If the judiciary is truly worried about the misuse of AI, the rule may need adjusting so that the state of knowledge is irrelevant; strict liability would then be the stick that ensures lawyers do not fail to check the AI work product.
The background to Ayinde
The case originated with a claim by Mr Ayinde, represented by the Haringey Law Centre. Mr Victor Amadigwe, a solicitor, is the Chief Executive of the Haringey Law Centre; Ms Sunnelah Hussain is a paralegal working under his supervision; and Ms Sarah Forey was the barrister instructed. The grounds for judicial review were settled and signed by Ms Forey, but she used AI, made inaccurate legal submissions misstating the statutory provisions of the Housing Act 1996, and cited five fictitious cases. This came to light when the defendant’s legal team wrote asking for copies of the cases they could not find. The errors were compounded by the explanations offered, which explained nothing. In a hearing on wasted costs, Mr Justice Ritchie said:
“I do not consider that it was fair or reasonable to say that the erroneous citations could easily be explained and then to refuse to explain them.”
Mr Justice Ritchie then found that the behaviour of Ms Forey and the Haringey Law Centre had been improper, unreasonable and negligent. Before the Administrative Court, Ms Forey denied using AI tools to assist her with legal research and submitted that she was aware that AI is not a reliable source. She then accepted that she had acted negligently and apologised to the court.
Ms Hussain and Mr Amadigwe of the Haringey Law Centre also apologised to the court. Mr Amadigwe explained it was not their practice to check what counsel produced.
The Administrative Court’s findings
The Court was far from impressed with Ms Forey’s explanations, saying:
“Ms Forey refuses to accept that her conduct was improper. She says that the underlying legal principles for which the cases were cited were sound, and that there are other authorities that could be cited to support those principles. She went as far as to state that these other authorities were the authorities that she ‘intended’ to cite (a proposition which, if taken literally, is not credible). An analogy was drawn with the mislabelling of a tin where the tin, in fact, contains the correct product. In our judgment, this entirely misses the point and shows a worrying lack of insight. We do not accept that a lack of access to textbooks or electronic subscription services within chambers, if that is the position, provides anything more than marginal mitigation. Ms Forey could have checked the cases she cited by searching the National Archives’ caselaw website or by going to the law library of her Inn of Court. We regret to say that she has not provided to the court a coherent explanation for what happened.”
The Court went on to find that the threshold for contempt was met, but determined that, given counsel’s junior status, having already been publicly admonished and reported to the Bar Standards Board was sufficient sanction. Mr Amadigwe was referred to the SRA, and Ms Hussain, as a paralegal under supervision, faced no punishment.
The background to the Al-Haroun case
Mr Al-Haroun sought substantial damages for alleged breaches of a financing agreement. His solicitor was Mr Hussain of Primus Solicitors; the defendants were Qatar National Bank and QNB Capital. The claimant’s lawyers sought to challenge an extension of time for the defence, and their submissions caused the court concern. Mrs Justice Dias dismissed the challenge, giving these reasons:
“The court is deeply troubled and concerned by the fact that in the course of correspondence with the court and in the witness statements of both Mr Al-Haroun and Mr Hussain, reliance is placed on numerous authorities, many of which appear to be either completely fictitious or which, if they exist at all, do not contain the passages supposedly quoted from them, or do not support the propositions for which they are cited: see the attached schedule of references prepared by one of the court’s judicial assistants. It goes without saying that this is a matter of the utmost seriousness. Primus Solicitors are regulated by the SRA and Mr Hussain is accordingly an officer of the court. As such, both he and they are under a duty not to mislead or attempt to mislead the court, either by their own acts or omissions or by allowing or being complicit in the act or omissions of their client. The administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported. Putting before the court supposed ‘authorities’ which do not in fact exist, or which are not authority for the propositions relied upon is prima facie only explicable as either a conscious attempt to mislead or an unacceptable failure to exercise reasonable diligence to verify the material relied upon. For these reasons, the court considers it appropriate to refer the case for further consideration under the Hamid jurisdiction, pending which all questions of costs are reserved.”
The submissions included 18 made-up cases, and many of the other cases cited did not support the points submitted. Mr Al-Haroun admitted that the citations were generated using publicly available AI tools, legal search engines and online sources, and submitted that he had complete but misplaced confidence in the authenticity of the material he put before the court. Mr Hussain admitted that his witness statement contained citations to non-existent authorities, based on his client’s research: an interesting approach, for a solicitor to rely on their client for legal research. Mr Hussain reported himself to the SRA, and the Court rightly said its concern in this case was with the conduct of the lawyers, not the clients. The Court found that Mr Hussain and Primus Solicitors had allowed a “lamentable failure to comply with the basic requirement to check the accuracy of material” and emphasised that lawyers have a professional responsibility to ensure the accuracy of materials put before the court. The Court then left the regulator to deal with Mr Hussain.
Conclusion
The future is clear: AI will be part of the administration of justice. What is also clear is that there is proper concern about its use. There will likely need to be procedural requirements for disclosure of its use, and users must generally own the outcomes as their responsibility. Clearly, AI in our justice system can only be safely used with proper human oversight and responsibility.
AI Research
Artificial intelligence, rising tuition discussed by educational leaders at UMD

DULUTH, Minn. (Northern News Now) – A panel gathered at UMD’s Weber Music Hall Friday to discuss the future of higher education.
The conversation touched on heavy topics like artificial intelligence, rising tuition costs, and how to provide the best education possible for students.
Almost 100 people listened to conversations on the current climate of college campuses. The panel included UMD Associate Dean of the Swenson College of Engineering and Science Erin Sheets.
“We’re in a unique and challenging time, with respect to the federal landscape and state landscape,” said Sheets.
The three panelists addressed current national changes, including rising tuition costs and budget cuts.
“That is going to be a structural shift we really are going to have to pay attention to, if we want to continue to commit for all students to have the opportunity to attend college,” said panelist and Managing Director of Waverly Foundation Lande Ajose.
Last year alone, the University of Minnesota system was hit with a 3% budget cut on top of a loss of $22 million in federal grants. This resulted in a 6.5% tuition increase for students.
Even with changing resources, the panel emphasized helping students prepare for the future, which they said includes the integration of AI.
“As students graduate, if they are not AI fluent, they are not competitive for jobs,” said panelist and University of Minnesota President Rebecca Cunningham.
Research shows that the use of AI in the workplace has doubled in the last two years to 40%.
While AI continues to grow every day, both students and faculty are learning to use it and integrate it into the curriculum.
“These are tools, they are not a substitute for a human being. You still need the critical thinking, you need the ethical guidelines, even more so,” said Sheets.
Following the panel, UMD hosted a campus-wide celebration to mark the inauguration of Chancellor Charles Nies.
AI Research
AI startup CEO who has hired several Meta engineers says: Reason AI researchers are leaving Meta is, as founder Mark Zuckerberg said, “Biggest risk is not taking …”

Shawn Shen, co-founder and CEO of the AI startup Memories.ai, has stated that some researchers are leaving Facebook-parent Meta due to frequent company reorganisations and a desire to take on bigger risks. Shen, who left Meta himself last year, notes that constant changes in managers and goals can be frustrating for researchers, leading them to seek opportunities at other companies and startups. Shen’s startup, which builds AI to understand visual data, recently announced a plan to offer compensation packages of up to $2 million to researchers from top tech companies. Memories.ai has already hired Chi-Hao Wu, a former Meta research scientist, as its chief AI officer. Shen also referenced a statement from Meta CEO Mark Zuckerberg, who earlier said that “the biggest risk is not taking any risks.”
What startup CEO Shen said about AI researchers leaving Meta
In an interview with Business Insider, Shen said: “Meta is constantly doing reorganizations. Your manager and your goals can change every few months. For some researchers, it can be really frustrating and feel like a waste of time. So yes, I think that’s a driver for people to leave Meta and join other companies, especially startups. There’s other reasons people might leave. I think the biggest one is what Mark (Zuckerberg) has said: ‘In an age that’s evolving so fast, the biggest risk is not taking any risks. So why not do that and potentially change the world as part of a trillion-dollar company?’ We have already hired Eddy Wu, our Chief AI Officer, who was my manager’s manager at Meta. He’s making a similar amount to what we’re offering the new people. He was on their generative AI team, which is now Meta Superintelligence Labs. And we are already talking to a few other people from MSL and some others from Google DeepMind.”
What Shen said about hiring Meta AI researchers for his startup
Shen noted that he’s offering AI researchers who are leaving Meta pay packages of $2 million to work with his startup. He said: “It’s because of the talent war that was started by Mark Zuckerberg. I used to work at Meta, and I speak with my former colleagues often about this. When I heard about their compensation packages, I was shocked — it’s really in the tens of millions range. But it shows that in this age, AI researchers who make the best models and stand at the frontier of technology are really worth this amount of money. We’re building an AI model that can see and remember just like humans. The things that we are working on are very niche. So we are looking for people who are really, really good at the whole field of understanding video data.” He even explained that his company is prioritising hires who are willing to take more equity than cash, allowing it to preserve its financial runway. These recruits will be treated as founding members rather than employees, with compensation split between cash and equity depending on the individual, Shen added. Over the next six months, the AI startup is planning to add three to five people, followed by another five to ten within a year, alongside efforts to raise additional funding. Shen believes that investing heavily in talent will strengthen, not hinder, future fundraising.
AI Research
AARP warns of “Grandparent Scams”

MONTGOMERY, Ala. (WSFA) – While artificial intelligence is rapidly transforming our world, a troubling trend shows scammers using it to steal from seniors, specifically grandparents.
You’ve probably heard the phrase ‘seeing is believing’ your whole life. But in an age of artificial intelligence, the turn of phrase doesn’t exactly stand the test of time. When it’s in the wrong hands, this new technology can make our senior citizens, who didn’t grow up in the digital age, a vulnerable population.
“One of the ways we see that being done is with what’s known as the grandparent scam,” Jamie Harding, AARP of Alabama Communications director, said. “The grandparent scam is basically, it usually happens late at night, they’re asleep, and someone calls them purporting to be their grandchild, they’re in trouble, they need money immediately.”
However, it isn’t actually their grandchild on the other end of the phone. Scammers have used AI technology to replicate the sound of their grandchild’s voice to try to take money.
“These are very sophisticated international crime rings, and they have access to a lot of very sophisticated technology,” Harding said.
To protect your family from these scams, Harding suggests having a code word that every member of your family knows so you can be sure it’s actually your loved one calling.
She also advises you not to answer phone calls from unknown numbers and to keep your personal information off the internet.