Tools & Platforms
AI enters the classroom as law schools prep students for a tech-driven practice
When it comes to using artificial intelligence in legal education and beyond, the key is thoughtful integration.
“Think of it like a sandwich,” said Dyane O’Leary, professor at Suffolk University Law School. “The student must be the bread on both sides. What the student puts in, and how the output is assessed, matters more than the tool in the middle.”
Suffolk Law is taking a forward-thinking approach to integrating generative AI into legal education, starting with a required AI course for all first-year students that equips them to use, understand and critique AI as future lawyers.
O’Leary, a long-time advocate for legal technology, said there is a need to balance foundational skills with exposure to cutting-edge tools.
“Some schools are ignoring both ends of the AI sandwich,” she said. “Others don’t have the resources to do much at the upper level.”
One major initiative at Suffolk Law is the partnership with Hotshot, a video-based learning platform used by top law firms, corporate lawyers and litigators.
“The Hotshot content is a series of asynchronous modules tailored for 1Ls,” O’Leary said. “The goal is not for our students to become tech experts but to understand the use and implications of AI in the legal profession.”
The Hotshot material provides a practical introduction to large language models, explains why generative AI differs from tools students are used to, and uses real-world examples from industry professionals to build credibility and interest.
This structured introduction lays the groundwork for more interactive classroom work when students begin editing and analyzing AI-generated legal content. Students will explore where the tool succeeded, where it failed and why.
“We teach students to think critically,” O’Leary said. “There needs to be an understanding of why AI missed a counterargument or produced a junk rule paragraph.”
These exercises help students learn that AI can support brainstorming and outlining but isn’t yet reliable for final drafting or legal analysis.
Suffolk Law is one of several law schools finding creative ways to bring AI into the classroom — without losing sight of the basics. Whether it’s through required 1L courses, hands-on tools or new certificate programs, the goal is to help students think critically and stay ready for what’s next.
Proactive online learning
Case Western Reserve University School of Law has also taken a proactive step to ensure that all its students are equipped to meet the challenge. In partnership with Wickard.ai, the school recently launched a comprehensive AI training program, making it a mandatory component for the entire first-year class.
“We knew AI was going to change things in legal education and in lawyering,” said Jennifer Cupar, professor of lawyering skills and director of the school’s Legal Writing, Leadership, Experiential Learning, Advocacy, and Professionalism program. “By working with Wickard.ai, we were able to offer training to the entire 1L class and extend the opportunity to the rest of the law school community.”
The program included pre-class assignments, live instruction, guest speakers and hands-on exercises. Students practiced crafting prompts and experimenting with various AI platforms. The goal was to familiarize students with tools such as ChatGPT and encourage a thoughtful, critical approach to their use in legal settings.
Oliver Roberts, CEO and co-founder of Wickard.ai, led the sessions and emphasized the importance of responsible use.
While CWRU Law, like many law schools, has general prohibitions against AI use in drafting assignments, faculty are encouraged to allow exceptions and to guide students in exploring AI’s capabilities responsibly.
“This is a practice-readiness issue,” Cupar said. “Just like Westlaw and Lexis changed legal research, AI is going to be part of legal work going forward. Our students need to understand it now.”
Balanced approach
Starting with the Class of 2025, Washington University School of Law is embedding generative AI instruction into its first-year Legal Research curriculum. The goal is to ensure that every 1L student gains fluency in both traditional legal research methods and emerging AI tools.
Delivered as a yearlong, one-credit course, the revamped curriculum maintains a strong emphasis on core legal research fundamentals, including court hierarchy, the distinction between binding and persuasive authority, primary and secondary sources and effective strategies for researching legislative and regulatory history.
WashU Law is integrating AI as a tool to be used critically and effectively, not as a replacement for human legal reasoning.
Students receive hands-on training in legal-specific generative AI platforms and develop the skills needed to evaluate AI-generated results, detect hallucinated or inaccurate content, and compare outcomes with traditional research methods.
“WashU Law incorporates AI while maintaining the basics of legal research,” said Peter Hook, associate dean. “By teaching the basics, we teach the skills necessary to evaluate whether AI-produced legal research results are any good.”
Stefanie Lindquist, dean of WashU Law, said this balanced approach preserves the rigor and depth that legal employers value.
“The addition of AI instruction further sharpens that edge by equipping students with the ability to responsibly and strategically apply new technologies in a professional context,” Lindquist said.
Forward-thinking vision
Drake University Law School has launched a new AI Law Certificate Program for J.D. students.
The program is a response to the growing need for legal professionals who understand both the promise and complexity of AI.
Designed for completion during a student’s second and third years, the certificate program emphasizes interdisciplinary collaboration, drawing on expertise from across Drake Law School’s campus, including computer science, art and the Institute for Justice Reform & Innovation.
Students will engage with advanced topics such as machine vision and trademark law, quantum computing and cybersecurity, and the broader ethical and regulatory challenges posed by AI.
Roscoe Jones, Jr., dean of Drake Law School, said the AI Law Certificate empowers students to lead at the intersection of law and technology, whether in private practice, government, nonprofit, policymaking or academia.
“Artificial Intelligence is not just changing industries; it’s reshaping governance, ethics and the very framework of legal systems,” he said.
Simulated, but realistic
Suffolk Law has also launched an online platform that allows students to practice negotiation skills with AI bots programmed to simulate the behavior of seasoned attorneys.
“They’re not scripted. They’re human-like,” O’Leary said. “Sometimes polite, sometimes bananas. It mimics real negotiation.”
These interactive experiences, in either text or voice mode, let students practice handling the messiness of legal dialogue, an experience that is hard to replicate with static casebooks or classroom hypotheticals.
Unlike overly accommodating AI assistants, these bots shift tactics and strategies, mirroring the adaptive nature of real-world legal negotiators.
Another tool on the platform supports oral argument prep. Created by Suffolk Law’s legal writing team in partnership with the school’s litigation lab, the AI mock judge engages students in real-time argument rehearsals, asking follow-up questions and testing their case theories.
“It’s especially helpful for students who don’t get much out of reading their outline alone,” O’Leary said. “It makes the lights go on.”
O’Leary also emphasizes the importance of academic integrity. Suffolk Law has a default policy that prohibits use of generative AI on assignments unless a professor explicitly allows it. Still, she said the policy is evolving.
“You can’t ignore the equity issues,” she said, pointing to how students often get help from lawyers in the family or paid tutors. “To prohibit [AI] entirely is starting to feel unrealistic.”
Employers struggle to identify real candidates
India’s job sector is undergoing a major transformation, with freshers’ excessive dependence on artificial intelligence becoming a complex challenge for recruiters in the country. The AI era has become a double-edged sword for companies: while productivity has improved, over-reliance on AI has eroded employees’ critical thinking, originality and problem-solving skills.
Last month, the US-based Massachusetts Institute of Technology (MIT) published striking findings about people who use OpenAI’s ChatGPT heavily in their daily routines. The study concluded that ChatGPT users showed lower brain engagement and consistently “underperformed” at the neural, linguistic and behavioural levels. Notably, Mary Meeker’s research on AI usage trends found that India tops the chart for ChatGPT mobile app users globally, at 14 percent.
Mita Brahma, HR head at NIIT, said that employees’ over-dependence on AI is a massive threat now looming over recruiters in the job sector. “Employees’ foundational cognitive and collaborative skills are not developed due to AI dependencies,” she said. “This can lead to tech-dependent superficial capabilities that don’t translate into real-world performance.”
Arindam Mukherjee, co-founder of the skilling platform NextLeap, said he has observed a surge in fake resumes that are ATS-compliant and do not give a true picture of the candidate’s real skills.
“AI agents can now apply for jobs on your behalf. AI resume builders can make your resume look like you are the best candidate, AI tools can complete the take-home assignment in minutes, and AI interview co-pilots can run in the background, assisting you in your virtual interview.”
Anil Ethanur, co-founder of specialist staffing firm Xpheno, underscored that in the AI era enterprises face not just ‘wrong hires’ but also ‘wrong drops’. Ethanur said the AI ecosystem produces many ‘false positive’ candidates who are disguised as ‘ideal fit’ employees. “The noise of and from AI-enhanced resumes is a significant dilution of the quality of recruitment processes and also causes cost-time-&-resource wastage for employers,” he said. AI tools have also been noted to cause ‘false negatives’, where good-fit candidates are wrongly knocked out as low fits. “The chances of enterprises incurring higher costs of ‘wrong hires’ are much higher in the current stage of the AI era,” he added.
Pranay Kale, chief revenue and growth officer at foundit, said that AI tools like ChatGPT, GitHub Copilot and AI-enhanced resume builders have become second nature to younger job seekers. “The line between AI-assisted performance and actual capability is becoming increasingly blurred,” Kale said.
While AI has spread across industries and functions, experts told Storyboard18 that sectors where creativity and judgment are central should be cautious when onboarding new employees, particularly those with zero to five years of experience. For instance, fields where content creation is a key task, such as research and development, publishing, media, advertising and journalism, should select candidates carefully, Brahma said.
“In these fields, an overdependence on generative AI tools like ChatGPT without domain depth can lead to poor judgment, flawed insights, or even compliance risks. Hence, hiring in these sectors must include rigorous domain-specific assessments, ethical reasoning tests, and real-world simulations,” she said.
According to TeamLease’s Shantanu Rooj, industries that rely heavily on analytical thinking, ethical reasoning, and real-time problem-solving must be more deliberate and rigorous during hiring. Sectors such as consulting, financial services, legal advisory, and research demand professionals who can interpret nuance, deal with ambiguity, and make judgment calls based on context, all areas where AI currently falls short. Rooj added that the education sector could also take a hit if teacher recruitment is not done carefully. “Teachers and professors who are overly dependent on AI tools risk diluting the learning experience rather than enriching it.”
Experts unanimously agreed that the hiring process should measure independent cognition, contextual reasoning, and original problem-solving skills that AI alone cannot supply when hiring a professional.
Dr Sangeeta Chhabra, co-founder and executive director of AceCloud, added: “Leaders must go beyond assessing technical expertise and focus on attributes such as problem solving, adaptability, and the ability to collaborate effectively with intelligent systems to filter the right talent.”
Ankit Aggarwal, founder and CEO of Unstop, suggested that founders look beyond resumes and give students real-time problems from different brands to solve, helping them showcase their ideas and problem-solving abilities.
Aggarwal said that “hackathons, coding challenges, case study competitions, quizzes” can help in testing candidates’ real skills.
‘Dangers of over-reliance on AI’
According to Kale, automation bias could contribute to structural unemployment and skill atrophy in certain sectors. Kale said AI may erode critical thinking, problem-solving, and creativity, especially among early-career professionals. “If individuals lean too heavily on AI to automate outputs or make decisions without understanding the ‘why’ behind them, we risk developing a workforce that is skilled in using tools but lacks foundational cognitive depth,” he argued.
In contrast, Ethanur said that AI addiction will not lead to higher unemployment rates. He projected that a significant change in the job market will be driven by the mainstream arrival of AI in low to mid-cognitive functions. “The phase when this redefinition happens on a large scale will have to coincide with the arrival of sufficient AI-enabled and AI-dependent talent pools into mainstream employment”.
Rooj upheld that the next decade will not be defined by AI replacing people but by people who can meaningfully work with AI. For instance, roles like “prompt engineering, AI oversight, ethical data governance, and human-AI interface management” will gain traction.
“AI should empower, not diminish, the human edge, and it’s up to all of us to ensure we strike that balance,” Chhabra noted.
NCS launches S$130M AI transformation initiative across Asia Pacific focused on Intelligentisation, Internationalisation, and Inspiration
Unveils Sunshine.AI suite, forges six major technology partnerships, and builds an AI-enabled workforce of over 10,000 to catalyse transformation
SINGAPORE, July 10, 2025 /PRNewswire/ — In an era where AI is becoming fundamental to drive transformation, NCS today announced a S$130 million investment over three years to lead change across Asia Pacific (APAC).
At its annual flagship Impact Forum, attended by more than 1,200 leaders and technology practitioners from APAC, NCS outlined a vision where technology transcends borders, best practices are shared across diverse markets, and AI serves to advance communities rather than replace human capability.
NCS launched Sunshine.AI, a suite of AI tools and accelerators that transform how organisations develop intelligent solutions. NCS also announced strategic partnerships with leading global technology players and strengthened collaboration with research institutions, building a dynamic community of AI practitioners that elevates NCS’ AI capabilities and regional leadership.
“With AI reshaping industries as it becomes more accessible than before, we’re partnering government agencies and enterprises to help them harness the best of AI not just for efficiency gains but to advance communities,” said NCS CEO Ng Kuo Pin.
“Expanding our APAC footprint and doubling down on collaborations with technology leaders are still important in a bifurcated world. The investment we are making over the next three years and our blueprint anchored by three pillars – Intelligentisation, Internationalisation and Inspiration – will better enable our people and clients to create new business outcomes and build a resilient, innovative future with AI,” he added.
Intelligentisation: More than Digitalisation – A Blueprint for AI-Powered Transformation
Intelligentisation is a structured, holistic approach to embedding intelligence into the core of business processes, government workflows, and human experiences. This means designing AI systems not as standalone tools, but as integral components of decision-making, service delivery and operational flow. Central to this are NCS-proprietary tools, accelerators and methodologies.
From assessment to design to implementation, the NCS Sunshine suite of tools is tailored for developers, IT operations teams and corporate users. It comprises:
- Sunshine.Coder, an AI coding assistant that supports language conversion, test generation and code analysis
- Sunshine.Operations, an AIOps platform designed to automate incident triage, system log analysis and operational task flows
- Sunshine.Productivity, a suite of tools that enhances day-to-day tasks such as summarisation, content retrieval and secure document handling
Chinese AI stocks to extend DeepSeek-driven run as Beijing counts on growth boost
“AI will probably become a key driver for China’s modernisation,” said Yao Pei, an analyst at Huachuang Securities, in a report this month. “There are lots of catalysts for AI, and AI is expected to penetrate into every industry,” notably electronics, computing and media, Yao said.
Unlike the US, which had an edge in AI computing, China was focused on efficiency – emphasising revenue generated by AI-enabled offerings and cost savings achieved through high productivity, the US investment bank said.