Ethics & Policy

How to instil AI ethics in your organisation


The rapid adoption of artificial intelligence (AI) is creating new risks for businesses. As companies hand over more autonomy to computer systems, they may inadvertently violate their ethical standards and the rights of customers and others.

In response, companies have scrambled to set new checks and balances on their AI use. Academics are debating the ethical and safe usage of AI, and consultancies increasingly offer advice on the topic. And major companies from Microsoft to Unilever have established AI ethics programmes.

“The kinds of situations that we see are legal, reputational, and ethical,” said Michael Brent, Ph.D., the Colorado-based director of responsible AI at Boston Consulting Group. “If you use AI in a way that harms people, it can damage your brand, it can damage how employees perceive themselves.”

Brent focuses on reviewing proposed uses of AI to identify and mitigate potential harms, which can range from violating people’s data privacy to acting on AI-generated decisions that are biased against disadvantaged and vulnerable groups.

Multiple models to address

With the increasing pace of adoption, the question of AI ethics is no longer merely a philosophical one.

Until recently, “it was the … Wild West as they say”, said Mfon Akpan, CGMA, DBA, an assistant professor of accounting at Methodist University in the US, who recently co-wrote a paper on ethical accounting and AI.

Now, however, companies are deploying several strategies to ensure ethical AI usage.

“Over the last 12 to 24 months, the true insurgence of [large language model] capabilities into the marketplace has been this critical moment of companies taking [responsible AI] a lot more seriously,” said Sasha Pailet Koff, CPA, CGMA, consultant and former senior supply chain executive at Dell and Johnson & Johnson.

“You see dedicated AI teams that are responsible for reviewing AI projects. You may see ethical committees that are clearing different types of AI cases against legal, technical, and societal norms,” said Pailet Koff, who is based in New Jersey in the US and now leads the digital transformation consultancy So Help Me Understand.

Leading organisations are also enlisting third-party experts and crafting AI governance frameworks that provide comprehensive instructions for assessing questions of fairness, transparency, accountability, and privacy.

Organisations that are deploying AI on a large scale should even consider elevating AI ethics to the C-suite with a chief AI ethics officer, Brent advised. (See the sidebar “Rise of the Chief AI Ethics Officer”.)

Rajeev Chopra, FCMA, CGMA, a consultant who previously worked in the airline industry, said that responsible AI requires data scientists, legal expertise, and executive leadership.

“Absolutely encourage AI, machine learning, all the latest technologies, but create a good corporate governance structure and make sure that you are seen as a role model for implementing the new technologies,” said Chopra, who is based in India.

Anna Huskowska, ACMA, CGMA, is a divisional head of central planning at Etex, a construction materials company based in Belgium. She said companies should aim to diffuse AI ethics throughout the organisation.

“The idea is [also] to transfer the knowledge to different parts of the organisation,” she said. “You want to consider how it impacts the business and the client.”

What can go wrong?

The experts interviewed for this FM article identified several ethical risks that can come with AI — and shared strategies that companies may use to address them.

Bias

AI models can make decisions that reinforce damaging social biases.

For example, Sweden’s social insurance agency has come under fire over allegations that its machine-learning system disproportionately flagged applications from women, foreigners, low-income earners, and people without university degrees for further benefit fraud investigation.

Meanwhile, some AI recruiting tools are accused of propagating bias and rejecting qualified candidates. A recent University of Washington study found three large language models exhibited racial, gender, and intersectional bias in how they ranked CVs.

Combating algorithmic bias requires mathematical and technological expertise. Brent’s team at BCG, for example, uses a battery of statistical tests to determine whether AI products are exhibiting bias.

“You need the technical expertise. You need a reliable person who can tell you what … is going on,” Brent said.
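The article does not detail which statistical tests such a battery includes, but one common starting point is a significance test on decision rates across demographic groups. Below is a minimal sketch in Python, assuming the system’s decisions can be tallied as approve/reject counts per group; the group labels and counts are hypothetical.

```python
# A minimal sketch of one bias check: a chi-squared test of whether an AI
# system's approval rates differ significantly across demographic groups.
# The counts below are hypothetical illustrations, not real data.
from scipy.stats import chi2_contingency

# Rows: demographic groups; columns: [approved, rejected] decisions
decisions = [
    [480, 120],  # group A
    [350, 250],  # group B
]

chi2, p_value, dof, expected = chi2_contingency(decisions)
if p_value < 0.05:
    print(f"Approval rates differ significantly across groups (p = {p_value:.4f}); investigate further")
else:
    print(f"No statistically significant difference detected (p = {p_value:.4f})")
```

A significant result is a flag for closer review rather than proof of unfairness; legitimate explanatory factors still have to be examined by people with the relevant expertise.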

Transparency and accountability

The use of “black box” algorithms can exacerbate ethical issues. Generative and predictive AI technologies may not explain why and how they reach conclusions, making it harder to assess whether results are biased, flawed, or false.

That lack of transparency creates the risk that accountants and others may violate their duty to do their work with care, transparency, and accountability.

“One way of managing that risk is to make sure that there is this transparency, [a] push on transparency, on the companies that are providing these AI models,” Huskowska said.

Pailet Koff agreed that transparency and accountability are key. “Who’s responsible for the AI? And when it does potentially make an incorrect and harmful decision, who’s responsible for owning up to that?” she said.

Data privacy

Data privacy and security are both practical and ethical concerns for those managing AI and other tech deployments.

Digital ethicists — and European lawmakers, amongst others — have recognised that people have a right to privacy. Collecting, sharing, and using data without explicit consent may be a legal and ethical breach.

“Are organisations inadvertently exposing sensitive customer information or potentially employee data?” Pailet Koff asked.

Leaked data may create security risks for users, allowing unwanted third parties to access their information or letting generative AI models use and learn from their data without their consent.

“The ethical concern obviously is your privacy,” Chopra said. Companies increasingly are using AI-powered tools to combat fraud and other risks. But those tools may require collecting and analysing large volumes of customer information — for example, analysing customers’ patterns can help to detect potentially fraudulent charges on their accounts.

Companies must be vigilant to protect the data they’re using in these efforts, limiting access and ensuring it’s not leaked into public view. Additionally, they must be cautious when using third-party services to combat fraud and other risks; sharing customer data with those companies may raise ethical and security risks for customers.

Steps to address ethical risks

Brent and other experts identified four key steps for assessing and addressing AI ethics risks.

Categorise use cases according to a risk taxonomy

The EU’s Artificial Intelligence Act sets out several categories of AI risk, from “minimal” to “unacceptable”.

For the purposes of the law, “unacceptable” risk includes practices such as social scoring, certain uses of facial recognition, and the manipulation of people. The EU AI Act applies directly to businesses operating in the EU, and also to those outside it if they have a role in an AI value chain that touches the EU.

Brent said that risk categorisation is a wise first step — helping companies to apply a standardised metric and identify areas of concern.
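As a rough illustration of how a review team might operationalise such a taxonomy, the sketch below encodes the EU AI Act’s four tiers and triages proposed use cases against an internal mapping. The example use cases and the default-to-high rule are assumptions made for illustration, not legal guidance.

```python
# A minimal sketch of a risk taxonomy inspired by the EU AI Act's tiers.
# The use-case mapping is a hypothetical internal register, not legal advice.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

USE_CASE_TIERS = {
    "spam filtering": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "cv screening for recruitment": RiskTier.HIGH,
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
}

def triage(use_case: str) -> RiskTier:
    """Default unknown use cases to HIGH so they get a full review rather than slipping through."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)

print(triage("CV screening for recruitment"))  # RiskTier.HIGH
```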

Assess use cases

Next, examine potential negative impacts by conducting brainstorming and planning exercises. Ask participants to identify the project’s potential effects on different groups of people, or ask them to map out worst-case scenarios. Additionally, conduct research into comparable uses by other companies, and consult with experts.

AI risk can also be assessed quantitatively with mathematical measures that can indicate whether algorithms are displaying demographic bias.

“Look for mathematical evidence of bias,” Brent advised. “You can take the qualitative and the quantitative measures and identify the performance of an AI system, and then try to build the mitigations.”
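Two widely used measures of this kind are the demographic parity difference and the disparate impact ratio, often checked against the “four-fifths rule”. The sketch below shows how they might be computed from a system’s binary decisions; the outcomes and the 0.8 threshold are illustrative conventions rather than universal standards.

```python
# A minimal sketch of two quantitative bias measures: demographic parity
# difference and the disparate impact ratio. Outcomes here are hypothetical.
def selection_rate(decisions: list[int]) -> float:
    """Share of positive (1) decisions for one group."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # reference group outcomes (hypothetical)
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # protected group outcomes (hypothetical)

rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)

print(f"Demographic parity difference: {rate_a - rate_b:.2f}")
print(f"Disparate impact ratio: {rate_b / rate_a:.2f}")
if rate_b / rate_a < 0.8:  # the "four-fifths rule" convention
    print("Ratio falls below 0.8; flag the system for closer review")
```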

Design mitigations

Though not all ethical risks can be counteracted, some can be mitigated through technical adjustments, legal and contractual changes, transparency, and training.

For example, adjusting a model’s design and its supporting data can combat algorithmic bias. The model can also be required to document its decision-making and analysis more transparently, which may expose bias.
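One common data-side adjustment is reweighing training examples so that each group contributes comparable weight to model training. The sketch below is a simplified illustration of that idea using hypothetical records; toolkits such as IBM’s AIF360 implement more rigorous variants that also account for label distribution.

```python
# A simplified sketch of reweighing: give each group the same total weight so
# the under-represented group isn't drowned out during training.
# The records below are hypothetical.
from collections import Counter

samples = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "A", "label": 1},
    {"group": "B", "label": 0},
    {"group": "B", "label": 1},
]

counts = Counter(s["group"] for s in samples)   # {'A': 4, 'B': 2}
total, n_groups = len(samples), len(counts)

for s in samples:
    s["weight"] = total / (n_groups * counts[s["group"]])

for s in samples:
    print(s["group"], s["weight"])  # A samples -> 0.75, B samples -> 1.5
```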

A company contracting with an AI provider can also write the contract in a way that requires the provider to protect against perceived risks.

Test, evaluate, and provide documentation

Ultimately, the AI deployment team must polish and prepare the project for its user.

Besides testing that the mitigations work and evaluating the system’s performance, this phase involves delivering documentation and training for the end user. Simply handing over the keys to an AI product may result in unethical and unwise uses. This final step can help ensure that others in the business know how, why, and when the tool should be used.

The power of culture

Companies are developing formal approaches and deploying technological solutions to address AI ethics. But new management structures and technological fixes only go so far. Ultimately, a company must prepare its people for AI.

With the rush to embrace AI, Pailet Koff emphasised the importance of vetting the background and expertise of anyone working with the technology.

“How are you thinking about the vetting of the individuals that you’re putting on the team and the independence of the code?” she asked.

Additionally, companies must watch out for casual misuse of the technology. Even if executives have placed limits on generative AI in the workplace, employees may still freely access consumer products like ChatGPT. This “ghost usage” opens up the possibility of a data breach or the use of an opaque AI model.

“It’s there, it’s free, and people want to be able to be more productive — even without that proper guidance or understanding,” Akpan said. “Understand what your employees are doing. I would assume they’re using it, so how do you talk to them about it?”

AI usage also can raise other cultural issues. For example, if an employee has found a way to cut their workload by several hours a day, how should management respond?

“Is that encouraged or discouraged?” Akpan asked. “Once you have the open dialogue, that information can flow freely across the organisation.”

Leaders should think carefully about the culture of their organisation and how that culture can be adapted to encourage beneficial use of AI, Pailet Koff said.

“Every family has their own rules,” she said. “How are you actually using your organisational norms to promote ethical use of these tools?”

The human impact and larger questions

The implications of AI’s growing usage go far beyond a single project.

Huskowska’s team already uses predictive analytics and is experimenting with generative AI for supply chain forecasting and optimisation. She’s excited by how AI can potentially expand a small team’s reach.

But she also worries about the next generation. Huskowska started her career with a job in cost analysis — a job that taught her a lot, but which now is a prime target for automation.

She wonders how new workers will learn fundamentals when AI has taken over basic tasks.

“It’s just hard to imagine, and I think it’s a risk that we don’t talk about much in terms of how we learn our job as financial managers,” Huskowska said.

Chopra and others raised similar concerns.

“People are losing their jobs, job displacement is happening,” Chopra said. “How are you going to address that?”

Companies should consider how to offer career opportunities for people from diverse backgrounds and how they’ll ensure that anyone in the organisation can develop skills related to the new technology.

“If you want to implement AI as an organisation, how do you make sure that you give equal opportunities for people to learn?” Huskowska asked.

In the bigger picture, countless questions about the ethics of AI remain unresolved in the courts and in public opinion, Brent said.

Who exactly is responsible for the output of an AI model? How autonomous should these systems become? How should people be taught to interact with AI? Do AI’s returns justify its current heavy usage of water and energy?

Those questions may go beyond the direct scope of a finance leader, but in the face of dramatic technology change, everyone needs to consider how people will benefit from — or be harmed by — AI. Ultimately, Chopra said, it’s about keeping people in the picture.

“There are so many areas where this has to be governed very, very carefully,” he said, “and the best way is you must pair human intelligence with AI.”


Rise of the Chief AI Ethics Officer

Some companies are tasking new leadership positions with ensuring the responsible and ethical use of AI and other technology.

For example, IBM has an AI Ethics Board led by its chief privacy and trust officer. Salesforce has an Office of Ethical and Humane Use. Boston Consulting Group has a global team dedicated to reviewing and analysing proposed uses of AI through an ethical lens.

Any company that is establishing positions such as chief information officer, data privacy officer, or chief engineer should also consider creating a specific ethics leadership position, suggested Michael Brent, Ph.D., a director with the BCG team.

“My team helps BCG and our clients identify those risks and mitigate them to the extent that’s possible,” he said. “I’m in the business of avoiding ethical nightmares.”

AI ethics, Brent added, should not simply be lumped into related fields.

“A chief AI ethics officer should not be a risk and compliance officer, should not be a lawyer. It should be someone trained specifically,” he said. “They have to understand specifically what are the technical risks [and] the social, legal, and cultural risks.”

AI ethics teams can help establish standardised processes to guard against ethical risks. Teams should identify and categorise risks, develop mitigations, and ensure users are properly trained.

Of course, the delegation of AI ethics responsibilities will depend on a company’s size, the scale of its AI usage, and other factors. Dedicated AI teams are more common in organisations “further along in their maturity efforts”, said Sasha Pailet Koff, CPA, CGMA, consultant and former senior supply chain executive at Dell and Johnson & Johnson.

Companies may also rely on ethics committees or third-party experts for such responsibilities.

Overall, Pailet Koff said, organisations are increasingly embracing governance frameworks that establish processes, assign responsibilities, and define guidelines for the use of AI. Meanwhile, she said, individuals can educate themselves through guidance and training offered by groups like The Alan Turing Institute and the Partnership on AI.


Andrew Kenney is a freelance writer based in the US. To comment on this article or to suggest an idea for another article, contact Oliver Rowe at Oliver.Rowe@aicpa-cima.com.


LEARNING RESOURCES

Ethics in the World of AI: An Accountant’s Guide to Managing the Risks

This two-hour training session discusses the current uses of AI in business, including nine risk areas, and provides practical suggestions to address these risks effectively.

COURSE

Ethics Without Fear for Accounting and Finance Professionals

This fast-paced and interactive presentation will help you keep your ethical skills sharpened to reduce your fear and raise your courage as you make tough decisions in real time.

COURSE


MEMBER RESOURCES

Articles

“What Gen AI Means for Executive Decision-Making,” FM magazine, 9 October 2024

“What CFOs Need to Know About Gen AI Risk,” FM magazine, 19 August 2024




Ethics & Policy

AI and ethics – what is originality? Maybe we’re just not that special when it comes to creativity?


I don’t trust AI, but I use it all the time.

Let’s face it, that’s a sentiment that many of us can buy into if we’re honest about it. It comes from Paul Mallaghan, Head of Creative Strategy at We Are Tilt, a creative transformation content and campaign agency whose clients include the likes of Diageo, KPMG and Barclays.

Taking part in a panel debate on AI ethics at the recent Evolve conference in Brighton, UK, he made another highly pertinent point when he said of people in general:

We know that we are quite susceptible to confident bullshitters. Basically, that is what ChatGPT [is] right now. There’s something [that] reminds me of the illusory truth effect, where if you hear something a few times, or you hear it said confidently, then you are much more likely to believe it, regardless of the source. I might refer to a certain President who uses that technique fairly regularly, but I think we’re so susceptible to that that we are quite vulnerable.

And, yes, it’s you he’s talking about:

I mean all of us, no matter how intelligent we think we are or how much smarter than the machines we think we are. When I think about trust – and I’m coming at this very much from the perspective of someone who runs a creative agency – we’re not involved in building a Large Language Model (LLM); we’re involved in using it, understanding it, and thinking about what the implications [are] if we get this wrong. What does it mean to be creative in the world of LLMs?

Genuine

Being genuine is vital, he argues, as is being human. Where does Human Intelligence come into the picture, particularly in relation to creativity? His argument:

There’s a certain parasitic quality to what’s being created. We make films, we’re designers, we’re creators, we’re all those sort of things in the company that I run. We have had to just face the fact that we’re using tools that have hoovered up the work of others and then regenerate it and spit it out. There is an ethical dilemma that we face every day when we use those tools.

His firm has concluded that it has to take responsibility for imposing its own guidelines, at least to some degree, because there’s not a lot happening elsewhere:

To some extent, we are always ahead of regulation, because the nature of being creative is that you’re always going to be experimenting and trying things, and you want to see what the next big thing is. It’s actually very exciting. So that’s all cool, but we’ve realized that if we want to try and do this ethically, we have to establish some of our own ground rules, even if they’re really basic. Like, let’s try and not prompt with the name of an illustrator that we know, because that’s stealing their intellectual property, or the labor of their creative brains.

I’m not a regulatory expert by any means, but I can say that a lot of the clients we work with, to be fair to them, are also trying to get ahead of where I think we are probably at government level, and they’re creating their own frameworks, their own trust frameworks, to try and address some of these things. Everyone is starting to ask questions, and you don’t want to be the person that’s accidentally created a system where everything is then suable because of what you’ve made or what you’ve generated.

Originality

That’s not necessarily an easy ask, of course. What, for example, do we mean by originality? Mallaghan suggests:

Anyone who’s ever tried to create anything knows you’re trying to break patterns. You’re trying to find or re-mix or mash up something that hasn’t happened before. To some extent, that is a good thing that really we’re talking about pattern matching tools. So generally speaking, it’s used in every part of the creative process now. Most agencies, certainly the big ones, certainly anyone that’s working on a lot of marketing stuff, they’re using it to try and drive efficiencies and get incredible margins. They’re going to be on the race to the bottom.

But originality is hard to quantify. I think that actually it doesn’t happen as much as people think anyway, that originality. When you look at ChatGPT or any of these tools, there’s a lot of interesting new tools that are out there that purport to help you in the quest to come up with ideas, and they can be useful. Quite often, we’ll use them to sift out the crappy ideas, because if ChatGPT or an AI tool can come up with it, it’s probably something that’s happened before, something you probably don’t want to use.

More Human Intelligence is needed, it seems:

What I think any creative needs to understand now is you’re going to have to be extremely interesting, and you’re going to have to push even more humanity into what you do, or you’re going to be easily replaced by these tools that probably shouldn’t be doing all the fun stuff that we want to do. [In terms of ethical questions] there’s a bunch, including the copyright thing, but there’s partly just [questions] around purpose and fun. Like, why do we even do this stuff? Why do we do it? There’s a whole industry that exists for people with wonderful brains, and there’s lots of different types of industries [where you] see different types of brains. But why are we trying to do away with something that allows people to get up in the morning and have a reason to live? That is a big question.

My second ethical thing is, what do we do with the next generation who don’t learn craft and quality, and they don’t go through the same hurdles? They may find ways to use [AI] in ways that we can’t imagine, because that’s what young people do, and I have faith in that. But I also think, how are you going to learn the language that helps you interface with, say, a video model, and know what a camera does, and how to ask for the right things, how to tell a story, and what’s right? All that is an ethical issue, like we might be taking that away from an entire generation.

And there’s one last ‘tough love’ question to be posed:

What if we’re not special? Basically, what if all the patterns that are part of us aren’t that special? The only reason I bring that up is that I think that in every career, you associate your identity with what you do. Maybe we shouldn’t, maybe that’s a bad thing, but I know that creatives really associate with what they do. Their identity is tied up in what it is that they actually do, whether they’re an illustrator or whatever. It is a proper existential crisis to look at it and go, ‘Oh, the thing that I thought was special can be regurgitated pretty easily’… It’s a terrifying thing to stare into the Gorgon and look back at it and think, ‘Where are we going with this?’. By the way, I do think we’re special, but maybe we’re not as special as we think we are. A lot of these patterns can be matched.

My take

This was a candid worldview that raised a number of tough questions – and questions are often so much more interesting than answers, aren’t they? The subject of creativity and copyright has been handled at length on diginomica by Chris Middleton, and I think Mallaghan’s comments pretty much chime with most of that.

I was particularly taken by the point about the impact on the younger generation of having at their fingertips AI tools that can ‘do everything, until they can’t’. I recall being horrified a good few years ago when doing a shift in a newsroom of a major tech title and noticing that the flow of copy had suddenly dried up. ‘Where are the stories?’, I shouted. Back came the reply, ‘Oh, the Internet’s gone down’. ‘Then pick up the phone and call people, find some stories,’ I snapped. A sad, baffled young face looked back at me and asked, ‘Who should we call?’. Now apart from suddenly feeling about 103, I was shaken by the fact that as soon as the umbilical cord of the Internet was cut, everyone was rendered helpless.

Take that idea and multiply it a billion-fold when it comes to AI dependency, and the future looks scary. Human Intelligence matters.




Ethics & Policy

Experts gather to discuss ethics, AI and the future of publishing


Representatives of the founding members sign the memorandum of cooperation at the launch of the Association for International Publishing Education during the 3rd International Conference on Publishing Education in Beijing. CHINA DAILY

Publishing stands at a pivotal juncture, said Jeremy North, president of Global Book Business at Taylor & Francis Group, addressing delegates at the 3rd International Conference on Publishing Education in Beijing. Digital intelligence is fundamentally transforming the sector — and this revolution will inevitably create “AI winners and losers”.

True winners, he argued, will be those who embrace AI not as a replacement for human insight but as a tool that strengthens publishing’s core mission: connecting people through knowledge. The key is balance, North said, using AI to enhance creativity without diminishing human judgment or critical thinking.

This vision set the tone for the event where the Association for International Publishing Education was officially launched — the world’s first global alliance dedicated to advancing publishing education through international collaboration.

Unveiled at the conference cohosted by the Beijing Institute of Graphic Communication and the Publishers Association of China, the AIPE brings together nearly 50 member organizations with a mission to foster joint research, training, and innovation in publishing education.

Tian Zhongli, president of BIGC, stressed the need to anchor publishing education in ethics and humanistic values and reaffirmed BIGC’s commitment to building a global talent platform through AIPE.

BIGC will deepen academic-industry collaboration through AIPE to provide a premium platform for nurturing high-level, holistic, and internationally competent publishing talent, he added.

Zhang Xin, secretary of the CPC Committee at BIGC, emphasized that AIPE is expected to help globalize Chinese publishing scholarship, contribute new ideas to the industry, and cultivate a new generation of publishing professionals for the digital era.

Themed “Mutual Learning and Cooperation: New Ecology of International Publishing Education in the Digital Intelligence Era”, the conference also tackled a wide range of challenges and opportunities brought on by AI — from ethical concerns and content ownership to protecting human creativity and rethinking publishing values in higher education.

Wu Shulin, president of the Publishers Association of China, cautioned that while AI brings major opportunities, “we must not overlook the ethical and security problems it introduces”.

Catriona Stevenson, deputy CEO of the UK Publishers Association, echoed this sentiment. She highlighted how British publishers are adopting AI to amplify human creativity and productivity, while calling for global cooperation to protect intellectual property and combat AI tool infringement.

The conference aims to explore innovative pathways for the publishing industry and education reform, discuss emerging technological trends, advance higher education philosophies and talent development models, promote global academic exchange and collaboration, and empower knowledge production and dissemination through publishing education in the digital intelligence era.
