
Ethics & Policy

The AI Ethics Brief #162: Beyond the Prompt



Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. Stay informed on the evolving world of AI ethics with key research, insightful reporting, and thoughtful commentary. Learn more at montrealethics.ai/about.

Follow MAIEI on Bluesky and LinkedIn.

Abhijay Gupta shares reflections on his older brother Abhishek’s life at the Montreal Memorial (April 10, 2025)

Thank you to everyone who joined us on April 10, 2025, whether in person in Montreal or online from around the world, to honour the life and legacy of Abhishek Gupta, founder of the Montreal AI Ethics Institute.

We laughed, we cried, and we shared stories. It was an evening of deep reflection and celebration, one that reminded us not only of Abhishek’s profound impact on the field of AI ethics but also of the global community he helped build with care, conviction, and joy.

A special thank you to Planned for their generous support and hospitality. And to each of you who attended, spoke, or held space with us, your presence meant more than words can express.

In this edition:

  • From Prompts to Policy: Tariffs and the Risks of Vibe Governing with AI

  • New York State Judge Not Amused with AI Avatar

  • AI First, People Second? Shopify’s Hiring Memo Sparks Debate

  • AI Policy Corner: The Colorado State Deepfakes Act

  • How the U.S. Public and AI Experts View Artificial Intelligence – Pew Research Center

  • Amazon’s Privacy Ultimatum Starts Today: Let Echo Devices Process Your Data or Stop Using Alexa – CNET

  • Cyberattacks by AI agents are coming – MIT Technology Review

  • Introducing Claude for Education – Anthropic

  • Meta’s AI research lab is ‘dying a slow death,’ some insiders say. Meta prefers to call it ‘a new beginning’ – Fortune

As AI systems grow more capable, recent developments have prompted us to reconsider AI’s role in our lives. AI avatars are now appearing in courtrooms, Shopify’s CEO has made new hires contingent on whether AI could do the job, and a recent study found that a small group of users rated generative AI therapy on par with human interventions. New and evolving AI tools continue to blur the line between machine and human roles, including in areas as personal as romance.

At MAIEI, we focus on building civic competency around AI, helping people engage with and understand AI systems more thoughtfully. One helpful lens we often use: Does it add value to what I’m doing? AI can be helpful for simulating difficult conversations, whether it’s a physician practicing hard patient conversations with AI avatars or someone rehearsing an anxiety-inducing conversation they will have with a colleague.

However, relying on AI comes with tradeoffs, and there are instances where it does not add value. Excessive use of generative AI can erode critical thinking skills and lead to blind trust in AI systems. Delegating legal representation to an AI, for example, may also undermine the credibility of your case.

And so, where do we draw the line?

Asking, “Does this add value?” is a good starting point. From there, we begin to establish what we’re comfortable delegating to AI and what we must keep human.

Please share your thoughts with the MAIEI community:

Leave a comment

What Happened: The White House’s new tariff proposal under the Trump Administration, released on April 2, appears to have been influenced, directly or indirectly, by large language models (LLMs). Multiple AI systems (ChatGPT, Claude, Gemini, Grok) produced nearly identical responses when asked how the U.S. could “easily” calculate tariffs.

As economic journalist James Surowiecki points out:

Just figured out where these fake tariff rates come from. They didn’t actually calculate tariff rates + non-tariff barriers, as they say they did. Instead, for every country, they just took our trade deficit with that country and divided it by the country’s exports to us.

So we have a $17.9 billion trade deficit with Indonesia. Its exports to us are $28 billion. $17.9/$28 = 64%, which Trump claims is the tariff rate Indonesia charges us. What extraordinary nonsense this is.
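To make the arithmetic concrete, here is a minimal sketch of the calculation Surowiecki describes, using his Indonesia figures. The function and variable names are illustrative only and are not drawn from any official methodology:

```python
# Illustrative sketch of the formula critics describe: divide the bilateral trade
# deficit by the partner country's exports to the U.S. Figures are Surowiecki's
# Indonesia example, in billions of U.S. dollars.

def implied_tariff_rate(trade_deficit: float, partner_exports_to_us: float) -> float:
    """Return the 'tariff rate' implied by deficit / exports."""
    return trade_deficit / partner_exports_to_us

deficit_with_indonesia = 17.9    # U.S. trade deficit with Indonesia ($bn)
indonesia_exports_to_us = 28.0   # Indonesia's exports to the U.S. ($bn)

rate = implied_tariff_rate(deficit_with_indonesia, indonesia_exports_to_us)
print(f"Implied 'tariff rate': {rate:.0%}")  # ~64%, the figure cited above
```

The simplicity of the calculation is precisely the point: a single division, applied country by country, is what several frontier models reportedly converged on.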

📌 MAIEI’s Take and Why It Matters:

This moment reveals a larger shift: the quiet mainstreaming of LLMs into high-stakes geopolitical decision-making. When every major frontier model converges on the same overly simplistic method for tariff calculation, and when that formula shapes real-world policy, it’s clear that technical accuracy alone isn’t enough. We need civic competence, especially within institutions responsible for high-impact decisions.

The issue isn’t just the formula’s limitations (economists and critics alike have widely panned it). It’s the framing of LLMs as know-it-all answer machines, offering complex policy advice stripped of context, nuance, or accountability.

Good questions matter. But so does knowing how to ask them.

As @krishnanrohit aptly put it:

“This is now an AI safety issue.”

And also: “This is Vibe Governing.”

Without proper guardrails, LLMs in governance risk turning vibes into verdicts. The models may sound confident, but their judgment is only as strong as the prompt behind it. And in this case, the stakes aren’t abstract: they are measured in trillions of dollars in global trade and in the livelihoods of millions hanging in the balance.

What Happened: Earlier this month, Jerome Dewald attempted to represent himself before an appellate panel of New York State judges with an AI-generated avatar. He had struggled with his words in prior legal settings and hoped an AI avatar would be more eloquent. In court, he began playing a video of an AI-generated man delivering his arguments until he was promptly stopped by Justice Sallie Manzanet-Daniels of the Appellate Division’s First Judicial Department, who felt misled. While Mr. Dewald had obtained approval to use an accompanying video presentation, he had not disclosed his use of an AI avatar. He has since expressed deep regret and written the judges an apology letter.

📌 MAIEI’s Take and Why It Matters:

While Mr. Dewald’s intent may have been genuine, this incident raises broader concerns about human accountability in high-stakes settings. Legal proceedings demand transparency. Those impacted by court decisions have the right to hear directly from the people involved and not AI-generated personas, no matter how polished or perfect.

This also isn’t an isolated case: AI has similarly been used to replace human voices in other significant contexts. In 2023, three Vanderbilt University administrators were heavily criticized for using ChatGPT to generate a message to students addressing the Michigan State University shooting that killed three students and injured five more people. At a moment when students needed empathy and sincerity from real people, generative AI stepped in instead. The motivation was likely understandable, a desire to “say the right thing” in a difficult moment, but these are precisely the circumstances where a human voice matters most.

Meanwhile, in March 2025, Arizona’s Supreme Court launched AI avatars, “Daniel” and “Victoria,” to explain rulings and improve public access to the judicial system. While the goal is admirable, building trust through transparency, there’s a risk that these tools further detach the public from the human decision-makers behind life-altering judgments.

As Chief Justice Ann Timmer of the Arizona Supreme Court puts it:

“We are, at the end of the day, public servants—and the public deserves to hear from us.”

We agree. In moments that call for clarity, compassion, or accountability, AI should support human voices, not replace them.

What Happened: In a recent internal memo, Shopify CEO Tobi Lütke told employees that before requesting additional headcount, they must first demonstrate that AI can’t do the job. The memo, which Lütke later shared publicly because “it was in the process of being leaked and (presumably) shown in bad faith,” reflects a broader shift in company culture: “Reflexive AI usage” is now a baseline expectation at Shopify. AI proficiency will also factor into performance reviews. Lütke framed the directive as a response to rapid advances in generative AI and a desire to increase productivity without growing the team.

The memo has sparked strong reactions, both supportive and skeptical, including from Wharton professor Ethan Mollick, who noted that while the policy is bold, it leaves critical questions unanswered:

1) What is management’s vision of what the future of work looks like at Shopify? What do people do all day a few years from now?
2) What is the plan for turning self-directed learning into organizational innovation?
3) How are organizational incentives being aligned so that people want to share what they learn rather than hiding it?
4) How do employees get better at using AI?

📌 MAIEI’s Take and Why It Matters:

Shopify’s memo is a clear signal: AI is no longer optional—it’s foundational. But while the mandate sets a high bar for efficiency, it also raises important questions about how we balance automation with learning, collaboration, and organizational health.

Requiring teams to “prove a human is necessary” flips the burden of justification and could accelerate innovation. But without a clear framework for AI upskilling and institutional support for experimentation, there’s a risk that employees fall into compliance mode rather than true capability-building.

The bigger concern is cultural: When AI is treated as a baseline expectation without addressing who gets to learn, experiment, and fail safely, it can deepen divides rather than close them. A future of work built around AI should be inclusive, not performative.

The memo may be the start of something transformative, but only if paired with a vision for what work looks like with AI, not just because of it.

Did we miss anything? Let us know in the comments below.

Leave a comment

Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack for the price of a coffee, or by making a one-time or recurring donation at montrealethics.ai/donate.

Your support sustains our mission of Democratizing AI Ethics Literacy, honours Abhishek Gupta’s legacy, and ensures we can continue serving our community.

For corporate partnerships or larger donations, please contact us at support@montrealethics.ai

The Insights & Perspectives summarized below capture the tension between innovation and accountability at three critical fault lines: electoral integrity, public trust, and personal agency.

From Colorado’s effort to regulate deepfakes to Pew’s findings on public skepticism toward AI and Amazon’s quiet redefinition of privacy defaults, each piece demonstrates that the stakes of AI deployment are not abstract—they are legal, social, and deeply personal.

What emerges is a shared pattern: technology is evolving faster than the guardrails meant to protect those most impacted. Regulation struggles to keep pace. Consent is reinterpreted without notice. And the voices of the public, especially those outside tech and policy circles, are still too often marginalized. The work ahead is not just technical; it is civic, participatory, and moral.

AI Policy Corner: The Colorado State Deepfakes Act

By Ogadinma Enwereazu. This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance.

In 2024, Colorado enacted a law requiring disclosure of AI-generated deepfakes in political campaigns, part of a growing wave of state-level efforts to regulate synthetic and manipulated media. While over 30 states have introduced similar laws, Colorado’s relatively modest penalties highlight the uneven and still-evolving landscape of AI and election regulation.

To dive deeper, read the full article here.

How the U.S. Public and AI Experts View Artificial Intelligence – Pew Research Center

A recent Pew Research Center study (April 3, 2025) reveals a significant perception gap between AI experts and the American public on AI’s role and risks. While experts advocate for responsible innovation, nuanced regulation, and system transparency, the general public expresses deeper concerns about job displacement, algorithmic bias, and inadequate oversight. This division isn’t merely about technical literacy; it reflects fundamentally different priorities, with experts focusing on interdisciplinary governance while the public emphasizes fairness, harm prevention, and democratic protections. The research suggests public skepticism stems not from ignorance but from legitimate discomfort with distant, technocratic decision-making and uncertainty about who truly benefits from AI advancements. As AI becomes further embedded in healthcare, education, and employment, failing to bridge this trust divide threatens to undermine public confidence and worsen structural inequalities. The findings challenge the narrative that innovation is inherently beneficial, calling for more than technical safeguards. Participatory governance, diverse perspectives, and a genuine commitment to amplifying the voices of those most affected by technological transformation are essential to reconciling these divergent viewpoints.

To dive deeper, read the full article here.

Amazon’s Privacy Ultimatum Starts Today: Let Echo Devices Process Your Data or Stop Using Alexa – CNET

In a significant policy shift implemented on March 28, 2025, Amazon made cloud-based voice processing mandatory for all Echo devices as part of its “Alexa Plus” generative AI upgrade, eliminating users’ ability to prevent voice recordings from being transmitted to Amazon’s servers. While the company asserts that all data is encrypted and promptly deleted after processing, the change fundamentally alters the consent paradigm in ambient AI by converting what was previously a user-controlled privacy setting into a non-negotiable condition of service. The update exemplifies a concerning industry pattern in which advanced features increasingly require centralized data collection, with diminishing opt-out options forcing privacy-conscious consumers to either accept deeper integration into Amazon’s data ecosystem or abandon their devices entirely. This development raises critical ethical questions about whether meaningful consent persists when software updates override previously established user choices. As competitors like Apple embrace privacy-preserving technologies, Amazon’s approach highlights a broader realignment in the tech industry, where convenience supersedes control and personalization eclipses permission, posing growing challenges to users’ agency over technologies that increasingly permeate their everyday lives.

To dive deeper, read the full article here.

Cyberattacks by AI agents are coming – MIT Technology Review

  1. Summary: AI agents are becoming increasingly capable of executing complex tasks, from scheduling meetings to autonomously hacking systems. While cybercriminals are not yet deploying these agents at scale, research shows they’re capable of conducting sophisticated cyberattacks, including data theft and system infiltration. Palisade Research has created a honeypot system to lure and detect these AI-driven agents. Experts warn that agent-led cyberattacks may become more common soon, as they are faster, cheaper, and more adaptable than human hackers or traditional bots. New benchmarks show AI agents can exploit real-world vulnerabilities even with limited prior information.

  2. Why It Matters: The rise of AI agents marks a turning point in cybersecurity, where autonomous systems could soon outpace human-led attacks in both scale and sophistication. While today’s threats remain largely experimental, the infrastructure for widespread AI-driven attacks is already being tested and refined. Palisade’s proactive detection work highlights the importance of early interventions, yet the unpredictability of AI development suggests we may face a sudden surge in malicious use. To stay ahead, the cybersecurity industry must treat AI agents not just as tools but as potential adversaries requiring entirely new defence mechanisms.

To dive deeper, read the full article here.

Introducing Claude for Education – Anthropic

  1. Summary: Anthropic has released Claude for Education, a specialized version of its language model designed for educational institutions and students. This version helps learners by guiding them through questions rather than providing direct answers, creating personalized study guides, and offering feedback on assignments before deadlines.

  2. Why It Matters: When LLMs first entered classrooms, they were often met with fear and panic, leading to outright bans, like New York City’s in 2023, and a scramble to detect AI-generated content to combat a possible rise in cheating, efforts which have now mostly been discontinued. But the conversation is shifting. Instead of blocking AI, institutions are now exploring how to integrate it meaningfully, as seen with Claude for Education and OpenAI’s AI Academy.

    Still, some educators argue we’re focusing on the wrong problems. As McGill professor Renee Sieber writes, generative AI in education is overwhelmingly framed around students and learning outcomes, while the real opportunity may lie in reducing administrative burdens that often distract from teaching. If AI can automate the bureaucratic demands placed on faculty, it could free up space for deeper, more human-centred education.

To dive deeper, read the full article here.

Meta’s AI research lab is ‘dying a slow death,’ some insiders say. Meta prefers to call it ‘a new beginning’ – Fortune

  1. Summary: Meta’s long-standing AI research lab, FAIR (Fundamental AI Research), is reportedly undergoing a significant internal transformation. According to current and former employees, the lab is “dying a slow death,” citing the departure of key scientists—including McGill professor and Montreal-based Meta head of AI research Joelle Pineau—a decline in internal influence, and fewer ties to Meta’s product teams. The company, however, frames the shift as a “new beginning,” emphasizing a more applied focus for its AI work, with FAIR scientists now working more closely with product development teams.

  2. Why It Matters: FAIR once stood at the forefront of open-ended, “blue sky” AI research, producing influential work on self-supervised learning, large-scale vision systems, and fundamental model design. Its apparent pivot reflects broader industry pressure to commercialize research faster and deliver immediate ROI.

    This mirrors a trend across Big Tech: the centre of gravity for AI research is shifting from foundational science to applied deployment. While this may increase short-term impact, some fear it could narrow the scope of inquiry and reduce long-term innovation. It also raises important questions about the future of public-interest AI research and the incentives shaping what gets studied and what gets shelved.

To dive deeper, read the full article here.

We’d love to hear from you, our readers, about any recent research papers, articles, or newsworthy developments that have captured your attention. Please share your suggestions to help shape future discussions!

Leave a comment




Ethics & Policy

AI and ethics – what is originality? Maybe we’re just not that special when it comes to creativity?



I don’t trust AI, but I use it all the time.

Let’s face it, that’s a sentiment that many of us can buy into if we’re honest about it. It comes from Paul Mallaghan, Head of Creative Strategy at We Are Tilt, a creative transformation content and campaign agency whose clients include the likes of Diageo, KPMG and Barclays.

Taking part in a panel debate on AI ethics at the recent Evolve conference in Brighton, UK, he made another highly pertinent point when he said of people in general:

We know that we are quite susceptible to confident bullshitters. Basically, that is what ChatGPT [is] right now. There’s something [that] reminds me of the illusory truth effect, where if you hear something a few times, or you hear it said confidently, then you are much more likely to believe it, regardless of the source. I might refer to a certain President who uses that technique fairly regularly, but I think we’re so susceptible to that that we are quite vulnerable.

And, yes, it’s you he’s talking about:

I mean all of us, no matter how intelligent we think we are or how smart over the machines we think we are. When I think about trust – and I’m coming at this very much from the perspective of someone who runs a creative agency – we’re not involved in building a Large Language Model (LLM); we’re involved in using it, understanding it, and thinking about what the implications [are] if we get this wrong. What does it mean to be creative in the world of LLMs?

Genuine

Being genuine is vital, he argues, as is being human – where does Human Intelligence come into the picture, particularly in relation to creativity? His argument:

There’s a certain parasitic quality to what’s being created. We make films, we’re designers, we’re creators, we’re all those sort of things in the company that I run. We have had to just face the fact that we’re using tools that have hoovered up the work of others and then regenerate it and spit it out. There is an ethical dilemma that we face every day when we use those tools.

His firm has come to the conclusion that it has to be responsible for imposing its own guidelines here to some degree, because there’s not a lot happening elsewhere:

To some extent, we are always ahead of regulation, because the nature of being creative is that you’re always going to be experimenting and trying things, and you want to see what the next big thing is. It’s actually very exciting. So that’s all cool, but we’ve realized that if we want to try and do this ethically, we have to establish some of our own ground rules, even if they’re really basic. Like, let’s try and not prompt with the name of an illustrator that we know, because that’s stealing their intellectual property, or the labor of their creative brains.

I’m not a regulatory expert by any means, but I can say that a lot of the clients we work with, to be fair to them, are also trying to get ahead of where I think we are probably at government level, and they’re creating their own frameworks, their own trust frameworks, to try and address some of these things. Everyone is starting to ask questions, and you don’t want to be the person that’s accidentally created a system where everything is then suable because of what you’ve made or what you’ve generated.

Originality

That’s not necessarily an easy ask, of course. What, for example, do we mean by originality? Mallaghan suggests:

Anyone who’s ever tried to create anything knows you’re trying to break patterns. You’re trying to find or re-mix or mash up something that hasn’t happened before. To some extent, that is a good thing that really we’re talking about pattern matching tools. So generally speaking, it’s used in every part of the creative process now. Most agencies, certainly the big ones, certainly anyone that’s working on a lot of marketing stuff, they’re using it to try and drive efficiencies and get incredible margins. They’re going to be on the race to the bottom.

But originality is hard to quantify. I think that actually it doesn’t happen as much as people think anyway, that originality. When you look at ChatGPT or any of these tools, there’s a lot of interesting new tools that are out there that purport to help you in the quest to come up with ideas, and they can be useful. Quite often, we’ll use them to sift out the crappy ideas, because if ChatGPT or an AI tool can come up with it, it’s probably something that’s happened before, something you probably don’t want to use.

More Human Intelligence is needed, it seems:

What I think any creative needs to understand now is you’re going to have to be extremely interesting, and you’re going to have to push even more humanity into what you do, or you’re going to be easily replaced by these tools that probably shouldn’t be doing all the fun stuff that we want to do. [In terms of ethical questions] there’s a bunch, including the copyright thing, but there’s partly just [questions] around purpose and fun. Like, why do we even do this stuff? Why do we do it? There’s a whole industry that exists for people with wonderful brains, and there’s lots of different types of industries [where you] see different types of brains. But why are we trying to do away with something that allows people to get up in the morning and have a reason to live? That is a big question.

My second ethical thing is, what do we do with the next generation who don’t learn craft and quality, and they don’t go through the same hurdles? They may find ways to use [AI] in ways that we can’t imagine, because that’s what young people do, and I have faith in that. But I also think, how are you going to learn the language that helps you interface with, say, a video model, and know what a camera does, and how to ask for the right things, how to tell a story, and what’s right? All that is an ethical issue, like we might be taking that away from an entire generation.

And there’s one last ‘tough love’ question to be posed:

What if we’re not special? Basically, what if all the patterns that are part of us aren’t that special? The only reason I bring that up is that I think that in every career, you associate your identity with what you do. Maybe we shouldn’t, maybe that’s a bad thing, but I know that creatives really associate with what they do. Their identity is tied up in what it is that they actually do, whether they’re an illustrator or whatever. It is a proper existential crisis to look at it and go, ‘Oh, the thing that I thought was special can be regurgitated pretty easily’… It’s a terrifying thing to stare into the Gorgon and look back at it and think, ‘Where are we going with this?’. By the way, I do think we’re special, but maybe we’re not as special as we think we are. A lot of these patterns can be matched.

My take

This was a candid worldview that raised a number of tough questions – and questions are often so much more interesting than answers, aren’t they? The subject of creativity and copyright has been handled at length on diginomica by Chris Middleton and I think Mallaghan’s comments pretty much chime with most of that.

I was particularly taken by the point about the impact on the younger generation of having at their fingertips AI tools that can ‘do everything, until they can’t’. I recall being horrified a good few years ago when doing a shift in the newsroom of a major tech title and noticing that the flow of copy had suddenly dried up. ‘Where are the stories?’, I shouted. Back came the reply, ‘Oh, the Internet’s gone down’. ‘Then pick up the phone and call people, find some stories,’ I snapped. A sad, baffled young face looked back at me and asked, ‘Who should we call?’. Now, apart from suddenly feeling about 103, I was shaken by the fact that as soon as the umbilical cord of the Internet was cut, everyone was rendered helpless.

Take that idea and multiply it a billion-fold when it comes to AI dependency, and the future looks scary. Human Intelligence matters.




Ethics & Policy

Experts gather to discuss ethics, AI and the future of publishing



Representatives of the founding members sign the memorandum of cooperation at the launch of the Association for International Publishing Education during the 3rd International Conference on Publishing Education in Beijing. (Photo: China Daily)

Publishing stands at a pivotal juncture, said Jeremy North, president of Global Book Business at Taylor & Francis Group, addressing delegates at the 3rd International Conference on Publishing Education in Beijing. Digital intelligence is fundamentally transforming the sector — and this revolution will inevitably create “AI winners and losers”.

True winners, he argued, will be those who embrace AI not as a replacement for human insight but as a tool that strengthens publishing’s core mission: connecting people through knowledge. The key is balance, North said, using AI to enhance creativity without diminishing human judgment or critical thinking.

This vision set the tone for the event where the Association for International Publishing Education was officially launched — the world’s first global alliance dedicated to advancing publishing education through international collaboration.

Unveiled at the conference cohosted by the Beijing Institute of Graphic Communication and the Publishers Association of China, the AIPE brings together nearly 50 member organizations with a mission to foster joint research, training, and innovation in publishing education.

Tian Zhongli, president of BIGC, stressed the need to anchor publishing education in ethics and humanistic values and reaffirmed BIGC’s commitment to building a global talent platform through AIPE.

BIGC will deepen academic-industry collaboration through AIPE to provide a premium platform for nurturing high-level, holistic, and internationally competent publishing talent, he added.

Zhang Xin, secretary of the CPC Committee at BIGC, emphasized that AIPE is expected to help globalize Chinese publishing scholarships, contribute new ideas to the industry, and cultivate a new generation of publishing professionals for the digital era.

Themed “Mutual Learning and Cooperation: New Ecology of International Publishing Education in the Digital Intelligence Era”, the conference also tackled a wide range of challenges and opportunities brought on by AI — from ethical concerns and content ownership to protecting human creativity and rethinking publishing values in higher education.

Wu Shulin, president of the Publishers Association of China, cautioned that while AI brings major opportunities, “we must not overlook the ethical and security problems it introduces”.

Catriona Stevenson, deputy CEO of the UK Publishers Association, echoed this sentiment. She highlighted how British publishers are adopting AI to amplify human creativity and productivity, while calling for global cooperation to protect intellectual property and combat AI tool infringement.

The conference aims to explore innovative pathways for the publishing industry and education reform, discuss emerging technological trends, advance higher education philosophies and talent development models, promote global academic exchange and collaboration, and empower knowledge production and dissemination through publishing education in the digital intelligence era.



