
Ethics & Policy

ABA ethics rules related to Generative AI


To ensure ethical AI use, lawyers should look to today’s ethics rules 

Before we dive into specific rules, it’s important to note that ethical use of generative AI is predicated on the understanding that this technology is a legal assistant, not a lawyer.

Lawyers must exercise the same caution with AI-generated work as they would with work produced by a junior associate or paralegal. In each case, it’s essential to use independent judgment to review and finalize the work product.

Ethical use of generative AI also assumes that:

  • The AI is developed responsibly;
  • The user understands how the AI works, including what it can and cannot do; and
  • The user is always in control of the technology and accountable for its use.

The responsibility for using AI ethically falls on both the legal professionals who employ it and the developers who create the technology. Developers should take steps to educate users on how the AI works, and users must be intentional about learning the AI’s capabilities and understanding their ongoing commitment to using AI ethically and responsibly.

To that end, we explore how today’s rules of professional conduct — the ABA’s Model Rules of Professional Conduct, specifically — apply to lawyers’ use of legal AI.

Jump to ↓

  • Responsibilities
  • Competence
  • Candor toward the tribunal
  • Communication
  • Confidentiality of information
  • Recent changes to the Model Rules

Rules 5.1 and 5.3 — Responsibility

Under Rule 5.1, Responsibilities of a Partner or Supervisory Lawyer, and Rule 5.3, Responsibilities Regarding Nonlawyer Assistance, of the ABA’s Model Rules of Professional Conduct (RPC), lawyers are required to oversee both the lawyers and the nonlawyers who help them provide legal services and to ensure that their conduct complies with the RPC.

Notably, Rule 5.3’s language covers responsibilities regarding nonlawyer “assistance,” rather than “assistants,” a critical change to the Rule’s title made in 2012. The effect of this change was to expand the ethical obligation to non-human assistance, including the work generated by technology such as legal AI that’s used in the provision of legal services.

The bottom line is that non-human legal assistance is within the scope of the ABA’s rules, and you must supervise an AI legal assistant just as you would any other legal assistant.

Rule 1.1 — Competence

A lawyer’s duty to be technologically competent is recognized in Rule 1.1 of the ABA’s RPC, which requires lawyers to provide competent representation to a client. The duty of technological competence is specifically set forth in Comment 8 to the rule, which states that to maintain the knowledge and skill necessary for competent representation, a lawyer should “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology,” and do so by way of continuing education.

Note that the expectation that lawyers keep up with new technology (such as AI) is built into the language “keep abreast of changes.” Comment 8, which was part of the ABA’s 2012 amendments to the RPC, was added in light of cloud computing and technology such as smartphones and tablets, which were becoming increasingly widespread in law practice.

Since then, the ABA has addressed AI directly. In 2019, it adopted Resolution 112, urging courts and lawyers to address ethical and legal issues related to AI use, including “bias, explainability, and transparency of automated decisions made by AI” and the “controls and oversight of AI and the vendors that provide AI.”

The takeaway is that the duty of technological competence requires an understanding of relevant technology—and in today’s world, that includes AI. Efforts should be made to engage in learning opportunities, such as webinars and CLEs, to ensure you understand the “benefits and risks” associated with AI.

Rule 3.3 — Candor toward the tribunal

Rule 3.3 sets forth the special duties of lawyers as officers of the court, including the obligation to “avoid conduct that undermines the integrity of the adjudicative process.” Comment 2 to Rule 3.3 states lawyers “must not allow the tribunal to be misled by false statements of law or fact or evidence that the lawyer knows to be false,” while Comment 3 provides that lawyers are “responsible for pleadings and other documents prepared for litigation.”

An example of failing to follow these rules when using general-use generative AI in practice can be found in Mata v. Avianca, more widely known as the “ChatGPT lawyer” incident. In short, plaintiff’s counsel filed a brief in federal court (the Southern District of New York, no less) filled with citations to non-existent case law. When confronted by the judge, the lawyer explained he’d used ChatGPT to draft the brief, and claimed he was unaware the AI could hallucinate cases (despite the disclaimer directly beneath the chat box).

The judge didn’t take kindly to the lawyer’s laying blame on ChatGPT. It’s clear from the court’s decision that misunderstanding technology isn’t a defense for misusing technology, and that the lawyer was still obligated to verify the cases cited in documents he filed with the court.

There are several ways to avoid this situation. First and foremost, don’t rely on general-use AI such as ChatGPT, which doesn’t draw from a reliable source of law. Use legal-specific AI instead, because it is grounded in a reliable, up-to-date source of information. For example, CoCounsel draws from Westlaw’s database of case law, statutes, and regulations. It also shows its work by providing links to the cases it cites, making its output easy to verify.

Second, you should understand the risks of the AI before using it (see Rule 1.1 regarding technological competence, above) and must check the veracity of the AI’s output as required by Rules 5.1 and 5.3. Finally, such debacles can be avoided by disclosing AI use to the court.

Rule 1.4 — Communication

It’s a good idea to disclose AI use to clients, too. Comment 1 to Rule 1.4 on client communications states: “Reasonable communication between the lawyer and the client is necessary for the client effectively to participate in the representation.” Comment 3 provides that the Rule “requires the lawyer to reasonably consult with the client about the means to be used to accomplish the client’s objectives.”

How should these rules be applied in practice? If you’re using AI in the provision of legal services to your clients, explain your use to them. Be transparent with clients about how you’ll use AI—and be ready to explain how it works, and address any privacy and security concerns.

Additionally, there are several ways to disclose your AI use. One option is to do so in fee agreements or retention letters to clients.

Some firms, however, treat AI as just another form of technology and don’t single it out; in most terms and conditions, privacy policies, and engagement letters, “technology” serves as an umbrella term rather than an introduction to an app-by-app list. Even so, any firm that takes this approach should still be ready to give a thorough answer to anyone who asks about its AI use.

Rule 1.6 — Confidentiality of information

Comment 2 to Rule 1.6 states that lawyers must not reveal information relating to the representation of a client unless they have the client’s informed consent, while Comment 18 requires lawyers to:

“act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure by the lawyer or other persons who are participating in the representation of the client.”

This rule comes into play not only when using AI, but also when selecting AI. Generative AI is built with a network of technology and partnerships, such as cloud storage and third-party data processing agreements. To that end, look for legal AI that is private, secure, and built by experienced developers—such as CoCounsel, which is carefully engineered to eliminate security and data privacy risks.

Recent changes to the Model Rules

Please refer to the ABA website for the most recent changes to the Model Rules.

Legal AI is meant to serve as a legal assistant, not as a substitute for a lawyer, and lawyers should look to existing ethics rules to help guide their use. By choosing reliable, specific-use AI and using it responsibly, lawyers can tap into this powerful technology to improve their practice and better serve their clients.





Ethics & Policy

AI and ethics – what is originality? Maybe we’re just not that special when it comes to creativity?


I don’t trust AI, but I use it all the time.

Let’s face it, that’s a sentiment that many of us can buy into if we’re honest about it. It comes from Paul Mallaghan, Head of Creative Strategy at We Are Tilt, a creative transformation content and campaign agency whose clients include the likes of Diageo, KPMG and Barclays.

Taking part in a panel debate on AI ethics at the recent Evolve conference in Brighton, UK, he made another highly pertinent point when he said of people in general:

We know that we are quite susceptible to confident bullshitters. Basically, that is what ChatGPT [is] right now. There’s something that reminds me of the illusory truth effect, where if you hear something a few times, or you hear it said confidently, then you are much more likely to believe it, regardless of the source. I might refer to a certain President who uses that technique fairly regularly, but I think we’re so susceptible to that that we are quite vulnerable.

And, yes, it’s you he’s talking about:

I mean all of us, no matter how intelligent we think we are or how much smarter than the machines we think we are. When I think about trust – and I’m coming at this very much from the perspective of someone who runs a creative agency – we’re not involved in building a Large Language Model (LLM); we’re involved in using it, understanding it, and thinking about what the implications are if we get this wrong. What does it mean to be creative in the world of LLMs?

Genuine

Being genuine is vital, he argues, as is being human: where does Human Intelligence come into the picture, particularly in relation to creativity? His argument:

There’s a certain parasitic quality to what’s being created. We make films, we’re designers, we’re creators, we’re all those sort of things in the company that I run. We have had to just face the fact that we’re using tools that have hoovered up the work of others and then regenerate it and spit it out. There is an ethical dilemma that we face every day when we use those tools.

His firm has come to the conclusion that it has to be responsible for imposing its own guidelines here to some degree, because there’s not a lot happening elsewhere:

To some extent, we are always ahead of regulation, because the nature of being creative is that you’re always going to be experimenting and trying things, and you want to see what the next big thing is. It’s actually very exciting. So that’s all cool, but we’ve realized that if we want to try and do this ethically, we have to establish some of our own ground rules, even if they’re really basic. Like, let’s try and not prompt with the name of an illustrator that we know, because that’s stealing their intellectual property, or the labor of their creative brains.

I’m not a regulatory expert by any means, but I can say that a lot of the clients we work with, to be fair to them, are also trying to get ahead of where I think we are probably at government level, and they’re creating their own frameworks, their own trust frameworks, to try and address some of these things. Everyone is starting to ask questions, and you don’t want to be the person that’s accidentally created a system where everything is then suable because of what you’ve made or what you’ve generated.

Originality

That’s not necessarily an easy ask, of course. What, for example, do we mean by originality? Mallaghan suggests:

Anyone who’s ever tried to create anything knows you’re trying to break patterns. You’re trying to find or re-mix or mash up something that hasn’t happened before. To some extent, that is a good thing, given that really we’re talking about pattern-matching tools. So generally speaking, it’s used in every part of the creative process now. Most agencies, certainly the big ones, certainly anyone that’s working on a lot of marketing stuff, they’re using it to try and drive efficiencies and get incredible margins. They’re going to be in the race to the bottom.

But originality is hard to quantify. I think that actually it doesn’t happen as much as people think anyway, that originality. When you look at ChatGPT or any of these tools, there’s a lot of interesting new tools that are out there that purport to help you in the quest to come up with ideas, and they can be useful. Quite often, we’ll use them to sift out the crappy ideas, because if ChatGPT or an AI tool can come up with it, it’s probably something that’s happened before, something you probably don’t want to use.

More Human Intelligence is needed, it seems:

What I think any creative needs to understand now is you’re going to have to be extremely interesting, and you’re going to have to push even more humanity into what you do, or you’re going to be easily replaced by these tools that probably shouldn’t be doing all the fun stuff that we want to do. [In terms of ethical questions] there’s a bunch, including the copyright thing, but there’s partly just [questions] around purpose and fun. Like, why do we even do this stuff? Why do we do it? There’s a whole industry that exists for people with wonderful brains, and there’s lots of different types of industries [where you] see different types of brains. But why are we trying to do away with something that allows people to get up in the morning and have a reason to live? That is a big question.

My second ethical thing is, what do we do with the next generation who don’t learn craft and quality, and they don’t go through the same hurdles? They may find ways to use [AI] in ways that we can’t imagine, because that’s what young people do, and I have faith in that. But I also think, how are you going to learn the language that helps you interface with, say, a video model, and know what a camera does, and how to ask for the right things, how to tell a story, and what’s right? All that is an ethical issue, like we might be taking that away from an entire generation.

And there’s one last ‘tough love’ question to be posed:

What if we’re not special? Basically, what if all the patterns that are part of us aren’t that special? The only reason I bring that up is that I think that in every career, you associate your identity with what you do. Maybe we shouldn’t, maybe that’s a bad thing, but I know that creatives really associate with what they do. Their identity is tied up in what it is that they actually do, whether they’re an illustrator or whatever. It is a proper existential crisis to look at it and go, ‘Oh, the thing that I thought was special can be regurgitated pretty easily’… It’s a terrifying thing to stare into the Gorgon and look back at it and think, ‘Where are we going with this?’ By the way, I do think we’re special, but maybe we’re not as special as we think we are. A lot of these patterns can be matched.

My take

This was a candid worldview that raised a number of tough questions – and questions are often so much more interesting than answers, aren’t they? The subject of creativity and copyright has been handled at length on diginomica by Chris Middleton, and I think Mallaghan’s comments pretty much chime with most of that.

I was particularly taken by the point about the impact on the younger generation of having at their fingertips AI tools that can ‘do everything, until they can’t’. I recall being horrified a good few years ago when doing a shift in a newsroom of a major tech title and noticing that the flow of copy had suddenly dried up. ‘Where are the stories?’, I shouted. Back came the reply, ‘Oh, the Internet’s gone down.’ ‘Then pick up the phone and call people, find some stories,’ I snapped. A sad, baffled young face looked back at me and asked, ‘Who should we call?’. Now, apart from suddenly feeling about 103, I was shaken by the fact that as soon as the umbilical cord of the Internet was cut, everyone was rendered helpless.

Take that idea and multiply it a billion-fold when it comes to AI dependency, and the future looks scary. Human Intelligence matters.




Ethics & Policy

Experts gather to discuss ethics, AI and the future of publishing


Representatives of the founding members sign the memorandum of cooperation at the launch of the Association for International Publishing Education during the 3rd International Conference on Publishing Education in Beijing. (Photo: China Daily)

Publishing stands at a pivotal juncture, said Jeremy North, president of Global Book Business at Taylor & Francis Group, addressing delegates at the 3rd International Conference on Publishing Education in Beijing. Digital intelligence is fundamentally transforming the sector — and this revolution will inevitably create “AI winners and losers”.

True winners, he argued, will be those who embrace AI not as a replacement for human insight but as a tool that strengthens publishing’s core mission: connecting people through knowledge. The key is balance, North said, using AI to enhance creativity without diminishing human judgment or critical thinking.

This vision set the tone for the event where the Association for International Publishing Education was officially launched — the world’s first global alliance dedicated to advancing publishing education through international collaboration.

Unveiled at the conference cohosted by the Beijing Institute of Graphic Communication and the Publishers Association of China, the AIPE brings together nearly 50 member organizations with a mission to foster joint research, training, and innovation in publishing education.

Tian Zhongli, president of BIGC, stressed the need to anchor publishing education in ethics and humanistic values and reaffirmed BIGC’s commitment to building a global talent platform through AIPE.

BIGC will deepen academic-industry collaboration through AIPE to provide a premium platform for nurturing high-level, holistic, and internationally competent publishing talent, he added.

Zhang Xin, secretary of the CPC Committee at BIGC, emphasized that AIPE is expected to help globalize Chinese publishing scholarship, contribute new ideas to the industry, and cultivate a new generation of publishing professionals for the digital era.

Themed “Mutual Learning and Cooperation: New Ecology of International Publishing Education in the Digital Intelligence Era”, the conference also tackled a wide range of challenges and opportunities brought on by AI — from ethical concerns and content ownership to protecting human creativity and rethinking publishing values in higher education.

Wu Shulin, president of the Publishers Association of China, cautioned that while AI brings major opportunities, “we must not overlook the ethical and security problems it introduces”.

Catriona Stevenson, deputy CEO of the UK Publishers Association, echoed this sentiment. She highlighted how British publishers are adopting AI to amplify human creativity and productivity, while calling for global cooperation to protect intellectual property and combat AI tool infringement.

The conference aims to explore innovative pathways for the publishing industry and education reform, discuss emerging technological trends, advance higher education philosophies and talent development models, promote global academic exchange and collaboration, and empower knowledge production and dissemination through publishing education in the digital intelligence era.

