Tech Philosophy and AI Opportunity – Stratechery by Ben Thompson

One of the most paradoxical aspects of AI is that while it is hailed as the route to abundance, the most important financial outcomes have been about scarcity. The first and most obvious example has been Nvidia, whose valuation has skyrocketed while demand for its chips continues to outpace supply.

Another scarce resource that has come to the forefront over the last few months is AI talent; the people who are actually building and scaling the models are suddenly being paid more than professional athletes, and it makes sense:

  • The potential financial upside from “winning” in AI is enormous
  • Outputs are somewhat measurable
  • The work-to-be-done is the same across the various companies bidding for talent

It’s that last point that is fairly unique in tech history. While great programmers have always been in high demand, and there have been periods of intense competition in specific product spaces, over the past few decades tech companies have been franchises, wherein their market niches have been fairly differentiated: Google and search, Amazon and e-commerce, Meta and social media, Microsoft and business applications, Apple and devices, etc. This reality meant that the company mattered more than any one person, putting a cap on individual contributor salaries.

AI, at least to this point, is different: in the long run it seems likely that there will be dominant product companies in various niches, but as long as the game is foundational models, then everyone is in fact playing the same game, which elevates the bargaining power of the best players. It follows, then, that the team they play for is the team that pays the most, through some combination of money and mission; by extension, the teams that are destined to lose are the ones who can’t or won’t offer enough of either.

Apple’s Reluctance

It’s that last point I’m interested in; I’m not in a position to judge the value of any of the players changing teams, but the teams are worth examining. Consider Meta and Apple and the latest free agent signing; from Bloomberg:

Apple Inc.’s top executive in charge of artificial intelligence models is leaving for Meta Platforms Inc., another setback in the iPhone maker’s struggling AI efforts. Ruoming Pang, a distinguished engineer and manager in charge of the company’s Apple foundation models team, is departing, according to people with knowledge of the matter. Pang, who joined Apple from Alphabet Inc. in 2021, is the latest big hire for Meta’s new superintelligence group, said the people, who declined to be named discussing unannounced personnel moves.

To secure Pang, Meta offered a package worth tens of millions of dollars per year, the people said. Meta Chief Executive Officer Mark Zuckerberg has been on a hiring spree, bringing on major AI leaders including Scale AI’s Alexandr Wang, startup founder Daniel Gross and former GitHub CEO Nat Friedman with high compensation. Meta has also hired Yuanzhi Li, a researcher from OpenAI, and Anton Bakhtin, who worked on Claude at Anthropic PBC, according to other people with knowledge of the matter. Last month, it hired a slew of other OpenAI researchers. Meta, later on Monday, confirmed it is hiring Pang. Apple, Pang, OpenAI and Anthropic didn’t respond to requests for comment.

That Apple is losing AI researchers is a surprise only in that they had researchers worth hiring; after all, this is the company that had already implicitly signaled its AI reluctance in terms of that other scarce resource: Nvidia chips. Again from Bloomberg:

Former Chief Financial Officer Luca Maestri’s conservative stance on buying GPUs, the specialized circuits essential to AI, hasn’t aged well either. Under Cook, Apple has used its market dominance and cash hoard to shape global supply chains for everything from semiconductors to the glass for smartphone screens. But demand for GPUs ended up overwhelming supply, and the company’s decision to buy them slowly — which was in line with its usual practice for emerging technologies it isn’t fully sold on — ended up backfiring. Apple watched as rivals such as Amazon and Microsoft Corp. bought much of the world’s supply. Fewer GPUs meant Apple’s AI models were trained all the more slowly. “You can’t magically summon up more GPUs when the competitors have already snapped them all up,” says someone on the AI team.

It may seem puzzling that the company that in its 2024 fiscal year generated $118 billion in free cash flow would be so cheap, but Apple’s reluctance makes sense from two perspectives.

First, the potential impact of AI on Apple’s business prospects, at least in the short term, is fairly small: we still need devices on which to access AI, and Apple continues to own the high end of devices (there is, of course, long-term concern about AI obviating the need for a smartphone, or meaningfully differentiating an alternative platform like Android). That significantly reduces the financial motivation for Apple to outspend other companies on both GPUs and researchers.

Second, AI, at least in the more fantastical visions painted by companies like Anthropic, is arguably counter to Apple’s entire ethos as a company.

Tech’s Two Philosophies

It was AI, at least the pre-LLM version of it, that inspired me in 2018 to write about Tech’s Two Philosophies; one was represented by Google and Facebook (now Meta):

In Google’s view, computers help you get things done — and save you time — by doing things for you. Duplex was the most impressive example — a computer talking on the phone for you — but the general concept applied to many of Google’s other demonstrations, particularly those predicated on AI: Google Photos will not only sort and tag your photos, but now propose specific edits; Google News will find your news for you, and Maps will find you new restaurants and shops in your neighborhood. And, appropriately enough, the keynote closed with a presentation from Waymo, which will drive you…

Zuckerberg, as so often seems to be the case with Facebook, comes across as a somewhat more fervent and definitely more creepy version of Google: not only does Facebook want to do things for you, it wants to do things its chief executive explicitly says would not be done otherwise. The Messianic fervor that seems to have overtaken Zuckerberg in the last year, though, simply means that Facebook has adopted a more extreme version of the same philosophy that guides Google: computers doing things for people.

The other philosophy was represented by Apple and Microsoft:

Earlier this week, while delivering Microsoft’s Build conference keynote, CEO Satya Nadella struck a very different tone…This is technology’s second philosophy, and it is orthogonal to the other: the expectation is not that the computer does your work for you, but rather that the computer enables you to do your work better and more efficiently. And, with this philosophy, comes a different take on responsibility. Pichai, in the opening of Google’s keynote, acknowledged that “we feel a deep sense of responsibility to get this right”, but inherent in that statement is the centrality of Google generally and the direct culpability of its managers. Nadella, on the other hand, insists that responsibility lies with the tech industry collectively, and all of us who seek to leverage it individually.

This second philosophy, that computers are an aid to humans, not their replacement, is the older of the two; its greatest proponent — prophet, if you will — was Microsoft’s greatest rival, and his analogy of choice was, coincidentally enough, about transportation as well. Not a car, but a bicycle:

I remember reading an article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet earth, how many kilocalories did they expend to get from point A to point B, and the condor came in at the top of the list, surpassed everything else, and humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.

But somebody there had the imagination to test the efficiency of a human riding a bicycle, and a human riding a bicycle blew away the condor all the way off the top of the list. And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes. And so for me a computer has always been a bicycle of the mind, something that takes us far beyond our inherent abilities. I think we’re just at the early stages of this tool, very early stages, and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes. I think that’s nothing compared to what’s coming in the next 100 years.

We are approximately forty years on from that clip, and Steve Jobs’ prediction that enormous changes were still to come is obviously prescient: mobile and the Internet have completely transformed the world, and AI is poised to make those impacts look like peanuts. What I’m interested in, in the context of this Article, however, is the interplay between business opportunity — or risk — and philosophy. Consider Apple’s position.

In this view the company’s conservatism makes sense: Apple doesn’t quite see the upside of AI for their business (and isn’t overly concerned about the downsides), and its bias towards tools means that AI apps on iPhones are sufficient; Apple might be an increasingly frustrating platform steward, but they are at their core a platform company, and apps on their platform are delivering Apple users AI tools.

This same framework also explains Meta’s aggressiveness. First, the opportunity is huge, as I documented last fall in Meta’s AI Abundance (and, for good measure, there is risk as well, as time — the ultimate scarcity for an advertising-based business — is spent using AI). Second, Meta’s philosophy is that computers do things for you.

Given that positioning, is it any surprise that Meta hired away Apple’s top AI talent?

I’m Feeling Lucky

Another way to think about how companies are approaching AI is through the late Professor Clayton Christensen’s discussion around sustaining versus disruptive innovation. From an Update last month after the news of Meta’s hiring spree first started making waves:

The other reason to believe in Meta versus Google comes down to the difference between disruptive and sustaining innovations. The late Professor Clayton Christensen described the difference in The Innovator’s Dilemma:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character. An important finding revealed in this book is that rarely have even the most radically difficult sustaining technologies precipitated the failure of leading firms.

Occasionally, however, disruptive technologies emerge: innovations that result in worse product performance, at least in the near-term. Ironically, in each of the instances studied in this book, it was disruptive technology that precipitated the leading firms’ failure. Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

The question of whether generative AI is a sustaining or disruptive innovation for Google remains uncertain two years after I raised it. Obviously Google has tremendous AI capabilities both in terms of infrastructure and research, and generative AI is a sustaining innovation for its display advertising business and its cloud business; at the same time, the long-term questions around search monetization remain as pertinent as ever.

Meta, however, does not have a search business to potentially disrupt, and it has a whole host of ways to leverage generative AI across its business; for Zuckerberg and company I think that AI is absolutely a sustaining technology, which is why it ultimately makes sense to spend whatever is necessary to get the company moving in the right direction.

The problem with this analysis is the Google part: how do you square the idea that AI is disruptive to Google with the fact that they are investing just as heavily as everyone else, and in fact started far earlier than everyone else? I think the answer goes back to Google’s founding, and the “I’m Feeling Lucky” button.

While that button is now gone from Google.com, I don’t think it was an accident that it persisted long after it had ceased to be usable (instant search results meant that by 2010 you didn’t even have a chance to click it); “I’m Feeling Lucky” was a statement of purpose. From 2016’s Google and the Limits of Strategy:

In yesterday’s keynote, Google CEO Sundar Pichai, after a recounting of tech history that emphasized the PC-Web-Mobile epochs I described in late 2014, declared that we are moving from a mobile-first world to an AI-first one; that was the context for the introduction of the Google Assistant.

It was a year prior to the aforementioned iOS 6 that Apple first introduced the idea of an assistant in the guise of Siri; for the first time you could (theoretically) compute by voice. It didn’t work very well at first (arguably it still doesn’t), but the implications for computing generally and Google specifically were profound: voice interaction both expanded where computing could be done, from situations in which you could devote your eyes and hands to your device to effectively everywhere, even as it constrained what you could do. An assistant has to be far more proactive than, for example, a search results page; it’s not enough to present possible answers: rather, an assistant needs to give the right answer.

This is a welcome shift for Google the technology; from the beginning the search engine has included an “I’m Feeling Lucky” button, so confident was Google founder Larry Page that the search engine could deliver you the exact result you wanted, and while yesterday’s Google Assistant demos were canned, the results, particularly when it came to contextual awareness, were far more impressive than the other assistants on the market. More broadly, few dispute that Google is a clear leader when it comes to the artificial intelligence and machine learning that underlie their assistant.

The problem — apparent even then — was the conflict with Google’s business model:

A business, though, is about more than technology, and Google has two significant shortcomings when it comes to assistants in particular. First, as I explained after this year’s Google I/O, the company has a go-to-market gap: assistants are only useful if they are available, which in the case of hundreds of millions of iOS users means downloading and using a separate app (or building the sort of experience that, like Facebook, users will willingly spend extensive amounts of time in).

Secondly, though, Google has a business-model problem: the “I’m Feeling Lucky” button guaranteed that the search in question would not make Google any money. After all, if a user doesn’t have to choose from search results, said user also doesn’t have the opportunity to click an ad, thus choosing the winner of the competition Google created between its advertisers for user attention. Google Assistant has the exact same problem: where do the ads go?

What I articulated in that Article was Google’s position in this framework.

AI is the ultimate manifestation of “I’m Feeling Lucky”; Google has been pursuing AI because that is why Page and Brin started the company in the first place; business models matter, but they aren’t dispositive, and while that may mean short-term difficulties for Google, it is a reason to be optimistic that the company will figure out AI anyways.

Microsoft, OpenAI, and Anthropic

Frameworks like this are useful, but not fully explanatory; I think this particular one goes a long way towards contextualizing the actions of Apple, Meta, and Google, but is much more speculative for some other relevant AI players. Consider Microsoft, which I would place on the tool side of the divide.

Microsoft doesn’t have any foundational models of note, but has invested heavily in OpenAI; its most important AI products are its various Copilots, which are indeed a bet on the “tool” philosophy. The question, as I laid out last year in Enterprise Philosophy and the First Wave of AI, is whether rank-and-file employees want Microsoft’s tools:

Notice, though, how this aligned with the Apple and Microsoft philosophy of building tools: tools are meant to be used, but they take volition to maximize their utility. This, I think, is a challenge when it comes to Copilot usage: even before Copilot came out employees with initiative were figuring out how to use other AI tools to do their work more effectively. The idea of Copilot is that you can have an even better AI tool — thanks to the fact it has integrated the information in the “Microsoft Graph” — and make it widely available to your workforce to make that workforce more productive.

To put it another way, the real challenge for Copilot is that it is a change management problem: it’s one thing to charge $30/month on a per-seat basis to make an amazing new way to work available to all of your employees; it’s another thing entirely — a much more difficult thing — to get all of your employees to change the way they work in order to benefit from your investment, and to make Copilot Pages the “new artifact for the AI age”, in line with the spreadsheet in the personal computer age.

This tension explains the anecdotes in this Bloomberg article last month:

OpenAI’s nascent strength in the enterprise market is giving its partner and biggest investor indigestion. Microsoft salespeople describe being caught flatfooted at a time when they’re under pressure to get Copilot into as many customers’ hands as possible. The behind-the-scenes dogfight is complicating an already fraught relationship between Microsoft and OpenAI…It’s unclear whether OpenAI’s momentum with corporations will continue, but the company recently said it has 3 million paying business users, a 50% jump from just a few months earlier. A Microsoft spokesperson said Copilot is used by 70% of the Fortune 500 and paid users have tripled compared with this time last year…

This story is based on conversations with more than two dozen customers and salespeople, many of them Microsoft employees. Most of these people asked not to be named in order to speak candidly about the competition between Microsoft and OpenAI. Both companies are essentially pitching the same thing: AI assistants that can handle onerous tasks — researching and writing; analyzing data — potentially letting office workers focus on thornier challenges. Since both chatbots are largely based on the same OpenAI models, Microsoft’s salesforce has struggled to differentiate Copilot from the much better-known ChatGPT, according to people familiar with the situation.

As long as AI usage relies on employee volition, ChatGPT has the advantage; what is interesting about this observation, however, is that it shows that OpenAI is actually in the same position as Microsoft.

This, by extension, explains why Anthropic is different; the other leading independent foundational model lab is clearly focused on agents, not chatbots, i.e., AI that does stuff for you instead of serving as a tool. Consider the contrast between Cursor and Claude Code: Cursor is an integrated development environment (IDE) that provides the best possible UI for AI-augmented programming; Claude Code, on the other hand, barely bothers with a UI at all. It runs in the terminal, which people put up with because it is the best at one-shotting outputs; an X thread on exactly this point was illuminating.

More generally, I wrote in an Update after the release of Claude 4, which was heavily focused on agentic workloads:

This, by extension, means that Anthropic’s goal is what I wrote about in last fall’s Enterprise Philosophy and the First Wave of AI:

Computing didn’t start with the personal computer, but rather with the replacement of the back office. Or, to put it in rather more dire terms, the initial value in computing wasn’t created by helping Boomers do their job more efficiently, but rather by replacing entire swathes of them completely…Agents aren’t copilots; they are replacements. They do work in place of humans — think call centers and the like, to start — and they have all of the advantages of software: always available, and scalable up-and-down with demand…

Benioff isn’t talking about making employees more productive, but rather companies; the verb that applies to employees is “augmented”, which sounds much nicer than “replaced”; the ultimate goal is stated as well: business results. That right there is tech’s third philosophy: improving the bottom line for large enterprises.

Notice how well this framing applies to the mainframe wave of computing: accounting and ERP software made companies more productive and drove positive business results; the employees that were “augmented” were managers who got far more accurate reports much more quickly, while the employees who used to do that work were replaced. Critically, the decision about whether or not to make this change did not depend on rank-and-file employees changing how they worked, but for executives to decide to take the plunge.

This strikes me as a very worthwhile goal, at least from a business perspective. OpenAI is busy owning the consumer space, while Google and its best-in-class infrastructure and leading models struggles with product; Anthropic’s task is to build the best agent product in the world, including not just state-of-the-art models but all of the deterministic computing scaffolding that actually makes them replacement-level workers. After all, Anthropic’s API pricing may look expensive relative to Google, but it looks very cheap relative to a human salary.
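To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python; every number in it (the per-token prices, the agent’s token budget, the working days) is a hypothetical assumption for illustration, not Anthropic’s actual pricing or any measured workload:

    # Back-of-the-envelope: annual API cost of an always-on agent vs. a salary.
    # All figures are illustrative assumptions, not actual Anthropic pricing.
    PRICE_PER_M_INPUT = 15.00    # assumed dollars per million input tokens
    PRICE_PER_M_OUTPUT = 75.00   # assumed dollars per million output tokens

    DAILY_INPUT_TOKENS = 20_000_000   # assumed heavy agentic workload
    DAILY_OUTPUT_TOKENS = 2_000_000
    WORKING_DAYS = 250

    daily_cost = (
        DAILY_INPUT_TOKENS / 1_000_000 * PRICE_PER_M_INPUT
        + DAILY_OUTPUT_TOKENS / 1_000_000 * PRICE_PER_M_OUTPUT
    )
    annual_cost = daily_cost * WORKING_DAYS

    print(f"Assumed daily agent cost:  ${daily_cost:,.0f}")   # $450
    print(f"Assumed annual agent cost: ${annual_cost:,.0f}")  # $112,500

Even if those assumed prices are off by a multiple in either direction, the point survives: per-token pricing that looks steep next to a cheaper model looks trivial next to the salary of the worker an agent would replace.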

That means that Anthropic shares the upper-right quadrant with Meta.

Again, this is just one framework; there are others. Moreover, the boundaries are fuzzy. OpenAI is working on agentic workloads, for example, and the hyperscalers all benefit from more AI usage, whether user- or agent-driven; Google, meanwhile, is rapidly evolving Search to incorporate generative AI.

At the same time, to go back to the talent question, I don’t think it’s a surprise that Meta appears to be picking off more researchers from OpenAI than from Anthropic: my suspicion is that, to the extent mission is a motivator, an AI researcher is more likely to be enticed by the idea of computers doing everything than by computers merely augmenting humans. And, by extension, the incumbent tool-makers may have no choice but to partner with the true believers.




Polimorphic Raises $18.6M as It Beefs Up Public-Sector AI

The latest bet on public-sector AI involves Polimorphic, which has raised $18.6 million in a Series A funding round led by General Catalyst.

The round also included M13 and Shine.

The company raised $5.6 million in a seed round in late 2023.


New York-based Polimorphic sells such products as artificial intelligence-backed chatbots and search tools, voice AI for calls, constituent relationship management (CRM) and workflow software, and permitting and licensing tech.

The new capital will go toward tripling the company’s sales and engineering staff and building more AI product features.

For instance, that includes the continued development of the voice AI offering, which can now work with live data — a bonus when it comes to utility billing — and even informs callers to animal services which pets might be up for adoption, CEO and co-founder Parth Shah told Government Technology in describing his vision for such tech.

The company also wants to bring more AI to CRM and workflow software to help catch errors on applications and other paperwork earlier than before, Shah said.

“We are more than just a chatbot,” he said.

Challenges of public-sector AI include making sure that public agencies truly understand the technology and are “not just slapping AI on what you already do,” Shah said.

As he sees it, working with governments in that way has helped Polimorphic nearly double its customer count every six months. More than 200 public-sector departments at the city, county and state levels use the company’s products, he said — and such growth is among the reasons the company attracted this new round of investment.

The company’s general sales pitch is increasingly familiar to public-sector tech buyers: Software and AI can help agencies deal with “repetitive, manual tasks, including answering the same questions by phone and email,” according to a statement, and help people find civic and bureaucratic information more quickly.

For instance, the company says it has helped customers reduce voicemails by up to 90 percent, with walk-in requests cut by 75 percent. Polimorphic clients include the city of Pacifica, Calif.; Tooele County, Utah; Polk County, N.C.; and the town of Palm Beach, Fla.

The fresh funding also will help the company expand in its top markets, which include Wisconsin, New Jersey, North Carolina, Texas, Florida and California.

The company’s investors are familiar to the gov tech industry. Earlier this year, for example, General Catalyst led an $80 million Series C funding round for Prepared, a public safety tech supplier focused on bringing more assistive AI capabilities to emergency dispatch.

“Polimorphic has the potential to become the next modern system of record for local and state government. Historically, it’s been difficult to drive adoption of these foundational platforms beyond traditional ERP and accounting in the public sector,” said Sreyas Misra, partner at General Catalyst, in the statement. “AI is the jet fuel that accelerates this adoption.”

Thad Rueter writes about the business of government technology. He covered local and state governments for newspapers in the Chicago area and Florida, as well as e-commerce, digital payments and related topics for various publications. He lives in Wisconsin.






AI enters the classroom as law schools prep students for a tech-driven practice

When it comes to using artificial intelligence in legal education and beyond, the key is thoughtful integration.

“Think of it like a sandwich,” said Dyane O’Leary, professor at Suffolk University Law School. “The student must be the bread on both sides. What the student puts in, and how the output is assessed, matters more than the tool in the middle.”

Suffolk Law is taking a forward-thinking approach to integrating generative AI into legal education, starting with a required AI course for all first-year students that equips them to use, understand and critique AI as future lawyers.

O’Leary, a long-time advocate for legal technology, said there is a need to balance foundational skills with exposure to cutting-edge tools.

“Some schools are ignoring both ends of the AI sandwich,” she said. “Others don’t have the resources to do much at the upper level.”

Professor Dyane O’Leary, director of Suffolk University Law School’s Legal Innovation & Technology Center, teaches a generative AI course in which students assess the ethics of AI in the legal context and, after experimentation, assess the strengths and weaknesses of various AI tools for a range of legal tasks.

One major initiative at Suffolk Law is the partnership with Hotshot, a video-based learning platform used by top law firms, corporate lawyers and litigators.

“The Hotshot content is a series of asynchronous modules tailored for 1Ls,” O’Leary said. “The goal is not for our students to become tech experts but to understand the usage and implications of AI in the legal profession.”

The Hotshot material provides a practical introduction to large language models, explains why generative AI differs from tools students are used to, and uses real-world examples from industry professionals to build credibility and interest.

This structured introduction lays the groundwork for more interactive classroom work when students begin editing and analyzing AI-generated legal content. Students will explore where the tool succeeded, where it failed and why.

“We teach students to think critically,” O’Leary said. “There needs to be an understanding of why AI missed a counterargument or produced a junk rule paragraph.”

These exercises help students learn that AI can support brainstorming and outlining but isn’t yet reliable for final drafting or legal analysis.

Suffolk Law is one of several law schools finding creative ways to bring AI into the classroom — without losing sight of the basics. Whether it’s through required 1L courses, hands-on tools or new certificate programs, the goal is to help students think critically and stay ready for what’s next.

Proactive online learning

Case Western Reserve University School of Law has also taken a proactive step to ensure that all its students are equipped to meet the challenge. In partnership with Wickard.ai, the school recently launched a comprehensive AI training program, making it a mandatory component for the entire first-year class.

“We knew AI was going to change things in legal education and in lawyering,” said Jennifer Cupar, professor of lawyering skills and director of the school’s Legal Writing, Leadership, Experiential Learning, Advocacy, and Professionalism program. “By working with Wickard.ai, we were able to offer training to the entire 1L class and extend the opportunity to the rest of the law school community.”

The program included pre-class assignments, live instruction, guest speakers and hands-on exercises. Students practiced crafting prompts and experimenting with various AI platforms. The goal was to familiarize students with tools such as ChatGPT and encourage a thoughtful, critical approach to their use in legal settings.

Oliver Roberts, CEO and co-founder of Wickard.ai, led the sessions and emphasized the importance of responsible use.

While CWRU Law, like many law schools, has general prohibitions against AI use in drafting assignments, faculty are encouraged to allow exceptions and to guide students in exploring AI’s capabilities responsibly.

“This is a practice-readiness issue,” Cupar said. “Just like Westlaw and Lexis changed legal research, AI is going to be part of legal work going forward. Our students need to understand it now.”

Balanced approach

Starting with the Class of 2025, Washington University School of Law is embedding generative AI instruction into its first-year Legal Research curriculum. The goal is to ensure that every 1L student gains fluency in both traditional legal research methods and emerging AI tools.

Delivered as a yearlong, one-credit course, the revamped curriculum maintains a strong emphasis on core legal research fundamentals, including court hierarchy, the distinction between binding and persuasive authority, primary and secondary sources and effective strategies for researching legislative and regulatory history.

WashU Law is integrating AI as a tool to be used critically and effectively, not as a replacement for human legal reasoning.

Students receive hands-on training in legal-specific generative AI platforms and develop the skills needed to evaluate AI-generated results, detect hallucinated or inaccurate content, and compare outcomes with traditional research methods.

“WashU Law incorporates AI while maintaining the basics of legal research,” said Peter Hook, associate dean. “By teaching the basics, we teach the skills necessary to evaluate whether AI-produced legal research results are any good.”

Stefanie Lindquist, dean of WashU Law, said this balanced approach preserves the rigor and depth that legal employers value.

“The addition of AI instruction further sharpens that edge by equipping students with the ability to responsibly and strategically apply new technologies in a professional context,” Lindquist said.

Forward-thinking vision

Drake University Law School has launched a new AI Law Certificate Program for J.D. students.

The program is a response to the growing need for legal professionals who understand both the promise and complexity of AI.

Designed for completion during a student’s second and third years, the certificate program emphasizes interdisciplinary collaboration, drawing on expertise from across Drake Law School’s campus, including computer science, art and the Institute for Justice Reform & Innovation.

Students will engage with advanced topics such as machine vision and trademark law, quantum computing and cybersecurity, and the broader ethical and regulatory challenges posed by AI.

Roscoe Jones, Jr., dean of Drake Law School, said the AI Law Certificate empowers students to lead at the intersection of law and technology, whether in private practice, government, nonprofit, policymaking or academia.

“Artificial Intelligence is not just changing industries; it’s reshaping governance, ethics and the very framework of legal systems,” he said. 

Simulated, but realistic

Suffolk Law has also launched an online platform that allows students to practice negotiation skills with AI bots programmed to simulate the behavior of seasoned attorneys.

“They’re not scripted. They’re human-like,” O’Leary said. “Sometimes polite, sometimes bananas. It mimics real negotiation.”

These interactive experiences in either text or voice mode allow students to practice handling the messiness of legal dialogue, which is an experience hard to replicate with static casebooks or classroom hypotheticals.

Unlike overly accommodating AI assistants, these bots shift tactics and strategies, mirroring the adaptive nature of real-world legal negotiators.

Another tool on the platform supports oral argument prep. Created by Suffolk Law’s legal writing team in partnership with the school’s litigation lab, the AI mock judge engages students in real-time argument rehearsals, asking follow-up questions and testing their case theories.

“It’s especially helpful for students who don’t get much out of reading their outline alone,” O’Leary said. “It makes the lights go on.”

O’Leary also emphasizes the importance of academic integrity. Suffolk Law has a default policy that prohibits use of generative AI on assignments unless a professor explicitly allows it. Still, she said the policy is evolving.

“You can’t ignore the equity issues,” she said, pointing to how students often get help from lawyers in the family or paid tutors. “To prohibit [AI] entirely is starting to feel unrealistic.”






Microsoft pushes billions at AI education for the masses • The Register

After committing more than $13 billion in strategic investments to OpenAI, Microsoft is splashing out billions more to get people using the technology.

On Wednesday, Redmond announced a $4 billion donation of cash and technology to schools and non-profits over the next five years. It’s branding this philanthropic mission as Microsoft Elevate, which is billed as “providing people and organizations with AI skills and tools to thrive in an AI-powered economy.” It will also start the AI Economy Institute (AIEI), a so-called corporate think tank stocked with academics that will be publishing research on how the workforce needs to adapt to AI tech.

The bulk of the money will go toward AI and cloud credits for K-12 schools and community colleges, and Redmond claims 20 million people will “earn an in-demand AI skilling credential” under the scheme, although Microsoft’s record on such vendor-backed certifications is hardly spotless.

“Working in close coordination with other groups across Microsoft, including LinkedIn and GitHub, Microsoft Elevate will deliver AI education and skilling at scale,” said Brad Smith, president and vice chair of Microsoft Corporation, in a blog post. “And it will work as an advocate for public policies around the world to advance AI education and training for others.”

It’s not an entirely new scheme – Redmond already had its Microsoft Philanthropies and Tech for Social Impact charitable organizations, but they are now merging into Elevate. Smith noted Microsoft has already teamed up with North Rhine-Westphalia in Germany to train students on AI, and says similar partnerships across the US education system will follow.

Microsoft is also looking to recruit teachers to the cause.

On Tuesday, Microsoft, along with Anthropic and OpenAI, said it was starting the National Academy for AI Instruction with the American Federation of Teachers to train teachers in AI skills and to pass them on to the next generation. The scheme has received $23 million in funding from the tech giants spread over five years, and aims to train 400,000 teachers at training centers across the US and online.

“AI holds tremendous promise but huge challenges—and it’s our job as educators to make sure AI serves our students and society, not the other way around,” said AFT President Randi Weingarten in a canned statement.

“The direct connection between a teacher and their kids can never be replaced by new technologies, but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced.”

Meanwhile, the AIEI will sponsor and convene researchers to produce publications, including policy briefs and research reports, on applying AI skills in the workforce, leveraging a global network of academic partners.

Hopefully they can do a better job of it than Redmond’s own staff. After 9,000 layoffs from Microsoft earlier this month, largely in the Xbox division, Matt Turnbull, an executive producer at Xbox Game Studios Publishing, went viral with a spectacularly tone-deaf LinkedIn post (now removed) to former staff members offering AI prompts “to help reduce the emotional and cognitive load that comes with job loss.” ®


