Tools & Platforms
ABA ethics opinion addresses jury selection discrimination from consultants and AI technology
When exercising peremptory challenges, lawyers must not strike jurors on discriminatory grounds, according to an ethics opinion by the ABA’s Standing Committee on Ethics and Professional Responsibility.
That also applies to client directives, as well as guidance from jury consultants or AI software, according to Formal Opinion 517, published Wednesday.
Such conduct violates Model Rule 8.4(g), which prohibits harassment and discrimination in the practice of law based on “race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status.”
A lawyer does not violate Rule 8.4(g) by exercising peremptory challenges on a discriminatory basis where not forbidden by other law, according to the opinion.
The U.S. Supreme Court explained that such conduct violates the Equal Protection Clause of the 14th Amendment in Batson v. Kentucky (1986) and J.E.B. v. Alabama ex rel. T.B. (1994). In Batson, a lawyer struck a series of Black jurors in a criminal trial. In J.E.B., a lawyer struck a series of male jurors in a paternity child support action.
The ethics opinion addresses when a Batson-type violation also constitutes professional misconduct under Rule 8.4(g).
Seemingly, if a lawyer commits such a violation, the lawyer also runs afoul of Rule 8.4(g). After all, in both cases the lawyer has engaged in a form of invidious discrimination: racial in Batson, sex-based in J.E.B.
“Striking prospective jurors on discriminatory bases in violation of substantive law governing juror selection is not legitimate advocacy. Conduct that has been declared illegal by the courts or a legislature cannot constitute ‘legitimate advocacy,’” the ethics opinion states.
However, Comment [5] to the model rule provides that a trial judge’s finding of a Batson violation does not, by itself, establish a violation of Rule 8.4(g).
The comment, according to the ethics opinion, gives “guidance on the evidentiary burden in a disciplinary proceeding.”
For example, in a disciplinary hearing a lawyer may be able to offer “a more fulsome explanation” for why they struck certain jurors. Furthermore, there is a “higher burden of proof” in lawyer discipline proceedings.
The ethics opinion also explains that a lawyer violates Rule 8.4(g) only if they know or reasonably should have known that the exercise of the peremptory challenges was unlawful. The lawyer may genuinely believe they had legitimate, nondiscriminatory reasons for striking certain jurors, such as their age, whether they paid attention during jury selection or something else.
According to the opinion, the question then centers on “whether ‘a lawyer of reasonable prudence and competence’ would have known that the challenges were impermissible.”
The opinion also addresses the difficult question of what happens when a client or jury consultant offers nondiscriminatory reasons for striking certain jurors and the lawyer follows that advice. Here, a reasonably competent and prudent lawyer should be able to tell whether the client’s or jury consultant’s reasons were pretextual or legitimate.
Additionally, the opinion addresses a scenario in which an AI-based program ranks prospective jurors and, unknown to the lawyer, applies those rankings in a discriminatory manner. Lawyers should use “due diligence to acquire a general understanding of the methodology employed by the juror selection program,” the opinion states.
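To make that due-diligence point concrete, here is a minimal sketch (with entirely hypothetical data, group labels and cutoff) of one way a legal team might sanity-check a juror-ranking tool’s output: compare the rate at which each demographic group falls below a strike cutoff. A large gap between groups would be a red flag worth investigating before relying on the tool.

```python
def strike_rates(rankings, cutoff):
    """For each group, compute the fraction of prospective jurors the
    tool scored below the strike cutoff.

    rankings: list of (group_label, score) pairs from the ranking tool.
    Returns a dict mapping group_label -> fraction below cutoff.
    """
    totals, struck = {}, {}
    for group, score in rankings:
        totals[group] = totals.get(group, 0) + 1
        if score < cutoff:
            struck[group] = struck.get(group, 0) + 1
    return {g: struck.get(g, 0) / n for g, n in totals.items()}


# Hypothetical tool output: (group label, model score in [0, 1]).
sample = [
    ("A", 0.9), ("A", 0.8), ("A", 0.3),
    ("B", 0.2), ("B", 0.1), ("B", 0.7),
]
rates = strike_rates(sample, cutoff=0.5)
# Group B is scored below the cutoff at twice the rate of group A;
# a gap like this would merit a closer look at the tool's methodology.
```

This mirrors the disparate-impact comparisons used in other anti-discrimination contexts (such as the four-fifths rule in employment analysis); it is an illustration of the kind of check the opinion’s due-diligence language suggests, not a method the opinion itself prescribes.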
The ABA also issued a press release on July 9.
AI enters the classroom as law schools prep students for a tech-driven practice
When it comes to using artificial intelligence in legal education and beyond, the key is thoughtful integration.
“Think of it like a sandwich,” said Dyane O’Leary, professor at Suffolk University Law School. “The student must be the bread on both sides. What the student puts in, and how the output is assessed, matters more than the tool in the middle.”
Suffolk Law is taking a forward-thinking approach to integrating generative AI into legal education, starting with a required AI course for all first-year students that equips them to use, understand and critique AI as future lawyers.
O’Leary, a long-time advocate for legal technology, said there is a need to balance foundational skills with exposure to cutting-edge tools.
“Some schools are ignoring both ends of the AI sandwich,” she said. “Others don’t have the resources to do much at the upper level.”
One major initiative at Suffolk Law is the partnership with Hotshot, a video-based learning platform used by top law firms, corporate lawyers and litigators.
“The Hotshot content is a series of asynchronous modules tailored for 1Ls,” O’Leary said. “The goal is not for our students to become tech experts but to understand the usage and implications of AI in the legal profession.”
The Hotshot material provides a practical introduction to large language models, explains why generative AI differs from tools students are used to, and uses real-world examples from industry professionals to build credibility and interest.
This structured introduction lays the groundwork for more interactive classroom work when students begin editing and analyzing AI-generated legal content. Students will explore where the tool succeeded, where it failed and why.
“We teach students to think critically,” O’Leary said. “There needs to be an understanding of why AI missed a counterargument or produced a junk rule paragraph.”
These exercises help students learn that AI can support brainstorming and outlining but isn’t yet reliable for final drafting or legal analysis.
Suffolk Law is one of several law schools finding creative ways to bring AI into the classroom — without losing sight of the basics. Whether it’s through required 1L courses, hands-on tools or new certificate programs, the goal is to help students think critically and stay ready for what’s next.
Proactive online learning
Case Western Reserve University School of Law has also taken a proactive step to ensure that all its students are equipped to meet the challenge. In partnership with Wickard.ai, the school recently launched a comprehensive AI training program, making it a mandatory component for the entire first-year class.
“We knew AI was going to change things in legal education and in lawyering,” said Jennifer Cupar, professor of lawyering skills and director of the school’s Legal Writing, Leadership, Experiential Learning, Advocacy, and Professionalism program. “By working with Wickard.ai, we were able to offer training to the entire 1L class and extend the opportunity to the rest of the law school community.”
The program included pre-class assignments, live instruction, guest speakers and hands-on exercises. Students practiced crafting prompts and experimenting with various AI platforms. The goal was to familiarize students with tools such as ChatGPT and encourage a thoughtful, critical approach to their use in legal settings.
Oliver Roberts, CEO and co-founder of Wickard.ai, led the sessions and emphasized the importance of responsible use.
While CWRU Law, like many law schools, has general prohibitions against AI use in drafting assignments, faculty are encouraged to allow exceptions and to guide students in exploring AI’s capabilities responsibly.
“This is a practice-readiness issue,” Cupar said. “Just like Westlaw and Lexis changed legal research, AI is going to be part of legal work going forward. Our students need to understand it now.”
Balanced approach
Starting with the Class of 2025, Washington University School of Law is embedding generative AI instruction into its first-year Legal Research curriculum. The goal is to ensure that every 1L student gains fluency in both traditional legal research methods and emerging AI tools.
Delivered as a yearlong, one-credit course, the revamped curriculum maintains a strong emphasis on core legal research fundamentals, including court hierarchy, the distinction between binding and persuasive authority, primary and secondary sources and effective strategies for researching legislative and regulatory history.
WashU Law is integrating AI as a tool to be used critically and effectively, not as a replacement for human legal reasoning.
Students receive hands-on training in legal-specific generative AI platforms and develop the skills needed to evaluate AI-generated results, detect hallucinated or inaccurate content, and compare outcomes with traditional research methods.
“WashU Law incorporates AI while maintaining the basics of legal research,” said Peter Hook, associate dean. “By teaching the basics, we teach the skills necessary to evaluate whether AI-produced legal research results are any good.”
Stefanie Lindquist, dean of WashU Law, said this balanced approach preserves the rigor and depth that legal employers value.
“The addition of AI instruction further sharpens that edge by equipping students with the ability to responsibly and strategically apply new technologies in a professional context,” Lindquist said.
Forward-thinking vision
Drake University Law School has launched a new AI Law Certificate Program for J.D. students.
The program is a response to the growing need for legal professionals who understand both the promise and complexity of AI.
Designed for completion during a student’s second and third years, the certificate program emphasizes interdisciplinary collaboration, drawing on expertise from across Drake Law School’s campus, including computer science, art and the Institute for Justice Reform & Innovation.
Students will engage with advanced topics such as machine vision and trademark law, quantum computing and cybersecurity, and the broader ethical and regulatory challenges posed by AI.
Roscoe Jones, Jr., dean of Drake Law School, said the AI Law Certificate empowers students to lead at the intersection of law and technology, whether in private practice, government, nonprofit, policymaking or academia.
“Artificial Intelligence is not just changing industries; it’s reshaping governance, ethics and the very framework of legal systems,” he said.
Simulated, but realistic
Suffolk Law has also launched an online platform that allows students to practice negotiation skills with AI bots programmed to simulate the behavior of seasoned attorneys.
“They’re not scripted. They’re human-like,” O’Leary said. “Sometimes polite, sometimes bananas. It mimics real negotiation.”
These interactive experiences, in either text or voice mode, let students practice handling the messiness of legal dialogue, an experience that is hard to replicate with static casebooks or classroom hypotheticals.
Unlike overly accommodating AI assistants, these bots shift tactics and strategies, mirroring the adaptive nature of real-world legal negotiators.
Another tool on the platform supports oral argument prep. Created by Suffolk Law’s legal writing team in partnership with the school’s litigation lab, the AI mock judge engages students in real-time argument rehearsals, asking follow-up questions and testing their case theories.
“It’s especially helpful for students who don’t get much out of reading their outline alone,” O’Leary said. “It makes the lights go on.”
O’Leary also emphasizes the importance of academic integrity. Suffolk Law has a default policy that prohibits use of generative AI on assignments unless a professor explicitly allows it. Still, she said the policy is evolving.
“You can’t ignore the equity issues,” she said, pointing to how students often get help from lawyers in the family or paid tutors. “To prohibit [AI] entirely is starting to feel unrealistic.”
Microsoft pushes billions at AI education for the masses • The Register
After committing more than $13 billion in strategic investments to OpenAI, Microsoft is splashing out billions more to get people using the technology.
On Wednesday, Redmond announced a $4 billion donation of cash and technology to schools and non-profits over the next five years. It’s branding this philanthropic mission as Microsoft Elevate, which is billed as “providing people and organizations with AI skills and tools to thrive in an AI-powered economy.” It will also start the AI Economy Institute (AIEI), a so-called corporate think tank stocked with academics that will be publishing research on how the workforce needs to adapt to AI tech.
The bulk of the money will go toward AI and cloud credits for K-12 schools and community colleges, and Redmond claims 20 million people will “earn an in-demand AI skilling credential” under the scheme, although Microsoft’s record on such vendor-backed certifications is hardly spotless.
“Working in close coordination with other groups across Microsoft, including LinkedIn and GitHub, Microsoft Elevate will deliver AI education and skilling at scale,” said Brad Smith, president and vice chair of Microsoft Corporation, in a blog post. “And it will work as an advocate for public policies around the world to advance AI education and training for others.”
It’s not an entirely new scheme – Redmond already had its Microsoft Philanthropies and Tech for Social Impact charitable organizations, but they are now merging into Elevate. Smith noted Microsoft has already teamed up with North Rhine-Westphalia in Germany to train students on AI, and says similar partnerships across the US education system will follow.
Microsoft is also looking to recruit teachers to the cause.
On Tuesday, Microsoft, along with Anthropic and OpenAI, said it was starting the National Academy for AI Instruction with the American Federation of Teachers to train teachers in AI skills and to pass them on to the next generation. The scheme has received $23 million in funding from the tech giants spread over five years, and aims to train 400,000 teachers at training centers across the US and online.
“AI holds tremendous promise but huge challenges—and it’s our job as educators to make sure AI serves our students and society, not the other way around,” said AFT President Randi Weingarten in a canned statement.
“The direct connection between a teacher and their kids can never be replaced by new technologies, but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced.”
Meanwhile, the AIEI will sponsor and convene researchers to produce publications, including policy briefs and research reports, on applying AI skills in the workforce, leveraging a global network of academic partners.
Hopefully they can do a better job of it than Redmond’s own staff. After 9,000 layoffs from Microsoft earlier this month, largely in the Xbox division, Matt Turnbull, an executive producer at Xbox Game Studios Publishing, went viral with a spectacularly tone-deaf LinkedIn post (now removed) to former staff members offering AI prompts “to help reduce the emotional and cognitive load that comes with job loss.” ®
We Have Proof AI Is Improving CX
Most of us want to see real benefits from AI, and for businesses, the contact center has long been one of the best use cases. The challenges are very real, and there is lots of incentive to find better solutions. With customer service now being framed as CX, the problem set becomes strategic for the business, so it’s bigger than just the contact center.
As such, the stakes are higher now, and technology investments are no longer about making incremental improvements. CX leaders – and business leaders – need to be ready to re-think everything they do in the name of serving customers. The more rooted the contact center is in legacy technology, the more transformational the change needs to be. With the right approach, the outcomes can meet this brief, especially in terms of elevating CX, and making customers feel more valued than ever before.
With all the hype, contact centers are being led to believe that AI is the right – and perhaps only – approach for them to follow. But, how can technology decision-makers know for sure? There are real results in the market, and to illustrate, I have some takeaways from a recent vendor event.
NiCE Interactions 2025 – Making it Real
All CX vendors aspire to deliver that right approach, but with AI evolving so fast, it’s difficult to tell who is getting real results. Any vendor can show tangible, AI-driven outcomes around standard KPIs such as time to answer or handle times, but these metrics are largely about automation.
While valuable, it’s not transformational, and what CX leaders should really be looking for are business-level outcomes that reflect a more strategic approach to AI. Automation is part of that, but AI needs to also address business drivers such as customer retention, marketing efficacy, agent empowerment, operational efficiency, compliance, data security, etc.
When thinking along those lines, the bar becomes higher for enterprises when partnering with CX vendors. They need a richer sense of what’s real, not just for the incremental benefits, but also the bigger picture where AI helps CX align with their business strategy.
That may be asking a lot, but I saw solid evidence of it at the recent NiCE Interactions 2025 event in Las Vegas. Aside from showcasing the major strides made with AI and the CXone platform since last year’s Interactions event, this was the first time most of us saw new-ish CEO Scott Russell. The company also just completed a branding refresh to reflect its “NiCE world” product promise, which behind the scenes is largely powered by AI.
While agentic AI is all the rage right now, it’s just one of many touch points along the CX spectrum for AI. Before moving on to customer successes, it’s worth noting how extensively AI is embedded throughout NiCE’s CXone platform, as this is a big part of making that product promise real.
A few examples of these AI CX touchpoints include Topic AI, where unstructured transcripts are turned into structured data to enrich their LLMs; using Copilot to augment agent performance; Agent Builder to automate workflows; and Mpower Desk to make all tasks visible on one screen in real time, integrating front and back-office operations on a single platform.
Real Results from Real Customers
Impressive as all this is, the best proof points came directly from the customers themselves. Over the two days of sessions, we heard from six Tier-1 customers, each of whom explained how AI aligned with the company’s broader business priorities and initiatives while also driving better CX outcomes.
Tangible CX outcomes were cited, but just as important, we heard how these AI capabilities are helping them understand and meet the expectations of today’s customers – something all of them were struggling to do before. Here are two select examples, and note how they are from very different types of businesses; NiCE’s capabilities are not specific to a particular vertical, meaning that all CX leaders should be thinking along these lines for AI.
Arun Chandra, Disney
The first customer success story was from Arun Chandra, SVP of CX at Disney. I would argue that Disney sets the bar for how successfully brands tell stories, and the company’s narrative here was about customer journey being a form of storytelling. Chandra talked about the importance of upholding the brand in everything they do, and being the best at everything they do. As such, when it came to using AI for CX, Disney needed a partner with the best technology, the best AI safeguards, and the ability to do both at scale.
Knowing their AI deployment with NiCE would be safe, Disney has been able to deploy a mix of human and virtual agents for seamless CX to deliver a more modern form of customer service.
In terms of supporting the Disney brand, this new approach for using AI with CX also aligns well with Disney being a leading adopter of cinematic technology for movie making. While no AI metrics were shared here, the impact on a strategic level is real, and is a great example for how some customers are looking for more than tangible outcomes when choosing a CX partner for AI.
Brendan Mulryan, H&R Block
As VP of CX, Mulryan explained how H&R Block’s approach to customer service could benefit from modernization. With a nationwide network of retail locations supporting millions of customers, a consumer-facing financial services company is an ideal use case for AI. Not only must call routing be intelligent enough to direct inquiries to the nearest location, but customer support must also scale up for tax-season traffic peaks unlike those of almost any other type of business.
Most of H&R Block’s inquiries are telephony-centric, simple tasks like setting appointments to come in to meet with a tax preparer. While not looking to reinvent the customer journey, H&R Block’s core need was to uplevel their IVA, especially during tax season. With NiCE Autopilot, the company was able to automate these inquiries, along with providing an SMS offramp in cases where customers preferred to use messaging instead of voice.
Not only does this improvement make the process of tax filing more efficient for everyone, but better CX also strengthens customer loyalty. In terms of operational efficiency, Mulryan reported a 63% containment rate with NiCE. He did not share the previous figure, but it was clearly an improvement, with almost two-thirds of all calls now fully automated.
That alone might be enough to justify deploying AI, but as with Disney – and other customer success stories – the strategic drivers were major considerations in choosing a technology partner. For H&R Block, this would be partnering with a vendor that could support their Next 2030 plan, and how improving CX is more than just serving the customer well for this year’s return.
The bigger goal is to “empower financial freedom for the client through trust and technology,” where the focus is on the lifetime value of each customer. Technology is key to building trust in any business, but especially here, when dealing with highly sensitive personal financial data. Mulryan cited a data point to support that high level of trust: 78% of customers intend to return after the NiCE deployment. That’s another good metric to show the real impact of AI on CX.
On a more strategic level, he talked about the need for AI to derive new insights from customer interactions to allow both agents and tax preparers to provide more personalized forms of service. This drives new value, not just for identifying new areas to provide additional services during tax time, but throughout the entire year. As such, similar to Disney, H&R Block had specific objectives, as well as transformational aspirations for CX, making this more than just an exercise in modernizing self-service.
The Takeaway for Enterprise Technology Leaders
Across these customer success stories – along with others during the breakout sessions – NiCE is clearly delivering real results with AI. As with any showcase event for customers, prospects and partners, there was also AI hype. Yes, the hype is real, but so are the results, and for the time being, the two go hand-in-hand with vendor messaging.
At Interactions, attendees did get a taste of some real benchmarks with AI, so it would be a mistake for CX leaders to hold off on AI until there’s more proof, or until the hype dies down. To that point, I would cite Brendan Mulryan’s parting message about how the “cost of inaction” is high, especially with AI changing so quickly.
Along with that, he noted the need to rethink notions of ROI with these new technologies. The “right approach” for CX leaders is about achieving transformational outcomes with AI, and not just looking for operational efficiencies. Performance metrics do validate AI as being real, but so do the transformational outcomes that go beyond the numbers. That was a common theme across all the customer success stories, and when considering partners for AI and CX, the reality check needs to be a mix of both.