Tools & Platforms

AI enters the classroom as law schools prep students for a tech-driven practice

When it comes to using artificial intelligence in legal education and beyond, the key is thoughtful integration.

“Think of it like a sandwich,” said Dyane O’Leary, professor at Suffolk University Law School. “The student must be the bread on both sides. What the student puts in, and how the output is assessed, matters more than the tool in the middle.”

Suffolk Law is taking a forward-thinking approach to integrating generative AI into legal education, starting with a required AI course for all first-year students that equips them to use, understand and critique AI as future lawyers.

O’Leary, a long-time advocate for legal technology, said there is a need to balance foundational skills with exposure to cutting-edge tools.

“Some schools are ignoring both ends of the AI sandwich,” she said. “Others don’t have the resources to do much at the upper level.”

Professor Dyane O’Leary, director of Suffolk University Law School’s Legal Innovation & Technology Center, teaches a generative AI course in which students examine the ethics of AI in the legal context and, after hands-on experimentation, assess the strengths and weaknesses of various AI tools for a range of legal tasks.

One major initiative at Suffolk Law is the partnership with Hotshot, a video-based learning platform used by top law firms, corporate lawyers and litigators.

“The Hotshot content is a series of asynchronous modules tailored for 1Ls,” O’Leary said. “The goal is not for our students to become tech experts but to understand the use and implications of AI in the legal profession.”

The Hotshot material provides a practical introduction to large language models, explains why generative AI differs from tools students are used to, and uses real-world examples from industry professionals to build credibility and interest.

This structured introduction lays the groundwork for more interactive classroom work in which students edit and analyze AI-generated legal content, exploring where the tool succeeded, where it failed and why.

“We teach students to think critically,” O’Leary said. “There needs to be an understanding of why AI missed a counterargument or produced a junk rule paragraph.”

These exercises help students learn that AI can support brainstorming and outlining but isn’t yet reliable for final drafting or legal analysis.

Suffolk Law is one of several law schools finding creative ways to bring AI into the classroom — without losing sight of the basics. Whether it’s through required 1L courses, hands-on tools or new certificate programs, the goal is to help students think critically and stay ready for what’s next.

Proactive online learning

Case Western Reserve University School of Law has also taken a proactive step to ensure that all its students are equipped to meet the challenge. In partnership with Wickard.ai, the school recently launched a comprehensive AI training program, making it a mandatory component for the entire first-year class.

“We knew AI was going to change things in legal education and in lawyering,” said Jennifer Cupar, professor of lawyering skills and director of the school’s Legal Writing, Leadership, Experiential Learning, Advocacy, and Professionalism program. “By working with Wickard.ai, we were able to offer training to the entire 1L class and extend the opportunity to the rest of the law school community.”

The program included pre-class assignments, live instruction, guest speakers and hands-on exercises. Students practiced crafting prompts and experimenting with various AI platforms. The goal was to familiarize students with tools such as ChatGPT and encourage a thoughtful, critical approach to their use in legal settings.

Oliver Roberts, CEO and co-founder of Wickard.ai, led the sessions and emphasized the importance of responsible use.

While CWRU Law, like many law schools, has general prohibitions against AI use in drafting assignments, faculty are encouraged to allow exceptions and to guide students in exploring AI’s capabilities responsibly.

“This is a practice-readiness issue,” Cupar said. “Just like Westlaw and Lexis changed legal research, AI is going to be part of legal work going forward. Our students need to understand it now.”

Balanced approach

Starting with the Class of 2025, Washington University School of Law is embedding generative AI instruction into its first-year Legal Research curriculum. The goal is to ensure that every 1L student gains fluency in both traditional legal research methods and emerging AI tools.

Delivered as a yearlong, one-credit course, the revamped curriculum maintains a strong emphasis on core legal research fundamentals, including court hierarchy, the distinction between binding and persuasive authority, primary and secondary sources and effective strategies for researching legislative and regulatory history.

WashU Law is integrating AI as a tool to be used critically and effectively, not as a replacement for human legal reasoning.

Students receive hands-on training in legal-specific generative AI platforms and develop the skills needed to evaluate AI-generated results, detect hallucinated or inaccurate content, and compare outcomes with traditional research methods.

“WashU Law incorporates AI while maintaining the basics of legal research,” said Peter Hook, associate dean. “By teaching the basics, we teach the skills necessary to evaluate whether AI-produced legal research results are any good.”

Stefanie Lindquist, dean of WashU Law, said this balanced approach preserves the rigor and depth that legal employers value.

“The addition of AI instruction further sharpens that edge by equipping students with the ability to responsibly and strategically apply new technologies in a professional context,” Lindquist said.

Forward-thinking vision

Drake University Law School has launched a new AI Law Certificate Program for J.D. students.

The program is a response to the growing need for legal professionals who understand both the promise and complexity of AI.

Designed for completion during a student’s second and third years, the certificate program emphasizes interdisciplinary collaboration, drawing on expertise from across Drake Law School’s campus, including computer science, art and the Institute for Justice Reform & Innovation.

Students will engage with advanced topics such as machine vision and trademark law, quantum computing and cybersecurity, and the broader ethical and regulatory challenges posed by AI.

Roscoe Jones, Jr., dean of Drake Law School, said the AI Law Certificate empowers students to lead at the intersection of law and technology, whether in private practice, government, nonprofit, policymaking or academia.

“Artificial Intelligence is not just changing industries; it’s reshaping governance, ethics and the very framework of legal systems,” he said. 

Simulated, but realistic

Suffolk Law has also launched an online platform that allows students to practice negotiation skills with AI bots programmed to simulate the behavior of seasoned attorneys.

“They’re not scripted. They’re human-like,” O’Leary said. “Sometimes polite, sometimes bananas. It mimics real negotiation.”

These interactive experiences, in either text or voice mode, allow students to practice handling the messiness of legal dialogue, an experience that is hard to replicate with static casebooks or classroom hypotheticals.

Unlike overly accommodating AI assistants, these bots shift tactics and strategies, mirroring the adaptive nature of real-world legal negotiators.

Another tool on the platform supports oral argument prep. Created by Suffolk Law’s legal writing team in partnership with the school’s litigation lab, the AI mock judge engages students in real-time argument rehearsals, asking follow-up questions and testing their case theories.

“It’s especially helpful for students who don’t get much out of reading their outline alone,” O’Leary said. “It makes the lights go on.”

O’Leary also emphasizes the importance of academic integrity. Suffolk Law has a default policy that prohibits use of generative AI on assignments unless a professor explicitly allows it. Still, she said the policy is evolving.

“You can’t ignore the equity issues,” she said, pointing to how students often get help from lawyers in the family or paid tutors. “To prohibit [AI] entirely is starting to feel unrealistic.”







Apple’s Top AI Engineer Leaves for Meta with Jaw-Dropping Pay


Key Highlights:

  • Apple’s top AI leader, Ruoming Pang, is leaving for Meta with a multi-million dollar salary offer.
  • Pang led a large team working on AI models powering Siri and other Apple features.
  • Several other Apple AI engineers are expected to follow him to Meta or other companies.
  • Meta is investing heavily in AI and aggressively hiring top talent to compete with rivals.

Apple has lost one of its top artificial intelligence (AI) executives, Ruoming Pang, who led the company’s important AI models team. He is now joining Meta, the company behind Facebook, Instagram, and WhatsApp.

Ruoming Pang was a key engineer managing Apple’s foundation models, which power many AI features on Apple devices, such as Siri and Apple Intelligence. He joined Apple in 2021 from Google’s parent company, Alphabet. Meta has now offered him a salary package worth tens of millions of dollars per year, which led to his decision to leave Apple.

This move highlights the fierce competition among big tech companies like Apple, Meta, Google, and OpenAI to hire the best AI talent. Meta has been aggressively recruiting AI experts recently, including people from OpenAI and other startups, to build advanced AI systems they call “superintelligence.”

At Apple, Ruoming Pang was in charge of a team of about 100 people working on large language models — the technology behind many AI-powered apps. Recently, Apple announced it would open these models for third-party app developers to create new AI-based iPhone and iPad apps.

However, Apple’s AI team has been facing challenges and internal changes. Some engineers are reportedly planning to leave, and the company is considering using AI technology from outside companies like OpenAI or Anthropic to improve Siri, its voice assistant.

Apple’s AI efforts have also seen leadership changes, with some teams moving away from Ruoming Pang’s group. Now, Zhifeng Chen will lead the foundation models team, with a new structure of managers to help run the work.

Apple is still investing heavily in AI, with top executives like Craig Federighi and Mike Rockwell focusing on new AI features. But losing a top leader like Pang shows how difficult it is for Apple to keep pace in the fiercely competitive, fast-moving AI field.

Meta, led by CEO Mark Zuckerberg, is spending billions on AI and making it a top priority. Zuckerberg has personally been involved in recruiting top AI engineers to build the future of AI at Meta.

In summary, Apple’s loss of Ruoming Pang to Meta is a major sign of the ongoing “war for AI talent” among tech giants. It remains to be seen how Apple will respond and strengthen its AI efforts going forward.







Analysis: Renewables missing out on AI investment boom despite fuelling the technology – Business Green






Major Threat or Just the Next Tech Thing?

Story Highlights

  • U.S. adults divided over whether AI poses a novel technology threat
  • Majority do foresee AI taking important tasks away from humans
  • Most say they will avoid embracing AI as long as possible

WASHINGTON, D.C. — As artificial intelligence transitions from abstraction to reality, U.S. adults are evenly divided on its implications for humankind. Forty-nine percent say AI is “just the latest in a long line of technological advancements that humans will learn to use to improve their lives and society,” while an equal proportion say it is “very different from the technological advancements that came before, and threatens to harm humans and society.”

Despite this split assessment, a clear majority (59%) say AI will reduce the need for humans to perform important or creative tasks, while just 38% believe it will mostly handle mundane tasks, freeing humans to do higher-impact work.

And perhaps reflecting AI’s potential to diminish human contributions, 64% plan to resist using it in their own lives for as long as possible rather than quickly embracing it (35%).


Majorities Expect AI to Eclipse the Telephone, Internet in Changing Society

Americans may not be convinced that AI poses a threat to humanity, but majorities foresee it having a bigger impact on society than did several major technological advancements of the past century.

Two-thirds (66%) say AI will surpass robotics in societal influence, and more than half say it will exceed the impact of the internet (56%), the computer (57%) and the smartphone (59%). Just over half (52%) think AI will have more impact than the telephone did when it was introduced.


Familiarity Breeds Comfort?

Americans’ perceptions of the impact AI will have on society don’t differ much by gender, age or other characteristics. Most demographic groups are closely split over whether AI is just the next technological thing or a novel threat. But attitudes vary significantly by people’s exposure to AI.

Seventy-one percent of daily users of generative AI (programs like ChatGPT and Microsoft Copilot that can create new content, such as text, images and music) say AI is just another technological advancement. By contrast, only 35% of those who never use generative AI agree.

This 36-percentage-point gap is far larger than the corresponding differences between users and nonusers of other AI applications. There is a 27-point difference between users and nonusers of virtual assistants (like Amazon Alexa and Apple Siri) in the view that AI will benefit humans, and roughly 20-point differences between users and nonusers of personalized content (such as apps that make movie and product recommendations) and of smart devices (like robotic vacuums and fitness trackers).


Personalized Content Now Routine; Generative AI Still Novel

ChatGPT reportedly became the fastest-growing app ever after its public launch in November 2022. However, adoption of generative AI among U.S. adults is still sparse relative to other types of AI. Fewer than a third of U.S. adults currently report using generative AI tools daily or weekly. About a quarter use them less frequently, while 41% don’t use them at all.

At the same time, more than four in 10 adults say they use voice recognition/virtual assistants (45%) or smart devices (41%) at least weekly. And nearly two-thirds (65%) report frequent use of personalized content.


Demographic Gaps Greatest for Generative AI Adoption

The broad adoption of personalized content is reflected in the relative uniformity of its use across demographic groups. The same is true for virtual assistants and smart devices, except that — possibly reflecting their expense — the use of smart devices is greater among upper- than middle- and lower-income groups and, relatedly, among college-educated and employed adults. Smart devices are also the one technology used more often by women (44%) than men (37%).

On the other hand, there are sizable differences by age, education, employment and gender in the use of generative AI.

  • The rate of using generative AI daily or weekly is highest among 18- to 29-year-olds (43%) and lowest among seniors (19%).
  • There is an eight-point difference by gender, with more men (36%) than women (28%) using it. However, the gender gap is greater among adults 50 and older than among those 18 to 49.
  • Employed adults (37%) are nearly twice as likely as nonworking adults (20%) to be regularly using generative AI.


Bottom Line

While Americans are split over whether AI is a routine step in the evolution of technology or a unique threat, most expect it to diminish the need for human creativity and are hesitant to fully adopt it personally. For now, positive views of AI are closely linked with people’s experience with it, rather than their personal demographics. The implication is that as usage expands, acceptance may follow.






