A lot has changed since we created AI ethics guidelines for newsrooms. Here’s what you need to know now

ST. PETERSBURG, FL (June 26, 2025) — More than a year ago, the Poynter Institute published a “starter kit” for newsrooms to create their own ethics policies for using artificial intelligence in their journalism. AI use in newsrooms has grown swiftly since then — and gotten more complex — and the team behind the starter kit has just published a new update, adding more information for visual journalism and for those developing products in newsrooms.

“One of the biggest things we heard over the last year was, the editorial guidelines are great, but what do your visual teams do? What is allowed there?” said Poynter faculty member and MediaWise director Alex Mahadevan, who helps lead Poynter’s AI work. “And people building these products — chatbots and such — what are their ethical obligations?”

The new update includes a section to walk newsrooms through the considerations for using AI in visual work, and cautions: “The use of visual generative AI tools exposes newsrooms to the most risk in regards to audience trust, and should be discussed in depth at your newsroom.” The guidelines favor human coverage as the preferred option whenever possible, accuracy over aesthetics, and no manipulation of real people or events, among other values.

The toolkit doesn’t prescribe how a newsroom should use the technology, in visual AI or in other areas. Some newsrooms might ban AI use in any image creation; some might allow it under certain circumstances and with disclosures. The toolkit helps newsrooms create a formal ethics policy based on their comfort level with particular uses and decide how they’ll communicate that policy to their audience.

“This is to help newsrooms,” Mahadevan said. “It really starts with thinking about what your values are and building the policies around that.”

Much of the new material was devised and workshopped during Poynter’s second Summit on AI, Ethics and Journalism, which took place in New York City in April, this time in partnership with The Associated Press. 

Poynter published its first guidelines in March 2024, and refined them further following its first AI Summit in St. Petersburg in June 2024. Much has changed in AI use in journalism since then, Poynter leaders said.

“Before we published these guidelines, most of the newsrooms that I talk to were avoiding AI, thinking that no good could come from it,” said Kelly McBride, Poynter senior vice president and chair of the Craig Newmark Center for Ethics and Leadership. “Many newsrooms are now doing tiny experiments, and many more are contemplating adding AI to their workflows.”

“A lot more people are using AI tools and recognizing that AI is not something we can run from. Again, there is such a demand for access to guardrails. A lot of news organizations have gone from these ad hoc experimentations to ‘how are we going to build this into our CMS?’” Mahadevan said. “The phrase I keep hearing is operationalizing AI — how are we building AI into our workflow in a way that is valuable to our readers?”

Another important update is about news products, again emphasizing the values of transparency, fairness, human oversight and audience trust. Specific sections talk about guarding against biases inherent in AI and avoiding creating a reinforcing viewpoint bubble.

“One of the things we talked about a lot is what are we going … to encourage newsrooms to do to avoid echo chambers if they are personalizing content?” Mahadevan said. “We put in these guidelines as a suggestion — the idea of breaking the bubble — so we have something in there about making sure when you design personalization, you do it to expose audiences to a broad range of stories and viewpoints.”

The authors acknowledged that the starter kit is a long document because it asks newsrooms to thoughtfully consider and prepare for many situations, but working through it is easier than getting into trouble for lack of planning.

“It takes a small group of journalists a couple hours to fill in the guidelines, run it up the flagpole to news executives, and then issue a memo to the entire staff,” McBride said. “So it’s not completely plug and play, but it’s worth the time investment, because journalism ethics work best when there is local autonomy and buy-in.”

To help news organizations share their ethics policies with their audiences, Poynter has also created a short, to-the-point, public-facing document that gives news consumers the basics in a digestible format. News organizations can post it on their websites to explain how they use AI and how they communicate that use, and link to their full ethics policy.

Poynter’s work with AI and AI ethics has also grown significantly in the last year, including expanding its AI training options and adding AI as a category of its new Consulting & Coaching offerings tailored to specific newsrooms’ needs. Mahadevan, McBride and faculty member Tony Elkins continue to teach and present on AI issues at conferences and meetings across the world, and Poynter experts are in-demand media sources on AI topics. 

In May, Poynter’s MediaWise media literacy program launched the Talking About AI Newsroom Toolkit to help journalists explain how they are using AI to their audiences. That project was completed in partnership with The Associated Press and funded by Microsoft. 

Media Contact

Jennifer Orsi
Vice President, Publishing and Local News Initiatives
The Poynter Institute
Jorsi@poynter.org

About The Poynter Institute

The Poynter Institute is a global nonprofit working to address society’s most pressing issues by teaching journalists and journalism, covering the media and the complexities facing the industry, convening and community building, improving the capacity and sustainability of news organizations and fostering trust and reliability of information. The Institute is the gold standard in journalistic excellence and is dedicated to the preservation and advancement of press freedom in democracies worldwide. Through Poynter, journalists, newsrooms, businesses, big tech corporations and citizens convene to find solutions that promote trust and transparency in news and stoke meaningful public discourse. The world’s top journalists and emerging media leaders rely on the Institute to learn new skills, adopt best practices, better serve audiences, scale operations and improve the quality of the universally shared information ecosystem.

The Craig Newmark Center for Ethics and Leadership, the International Fact-Checking Network (IFCN), MediaWise and PolitiFact are all members of the Poynter organization.

Support for Poynter and our entities upholds the integrity of the free press and the U.S. First Amendment and builds public confidence in journalism and media — essential for healthy democracies. Learn more at poynter.org.



Experts gather to discuss ethics, AI and the future of publishing

Representatives of the founding members sign the memorandum of cooperation at the launch of the Association for International Publishing Education during the 3rd International Conference on Publishing Education in Beijing. CHINA DAILY

Publishing stands at a pivotal juncture, said Jeremy North, president of Global Book Business at Taylor & Francis Group, addressing delegates at the 3rd International Conference on Publishing Education in Beijing. Digital intelligence is fundamentally transforming the sector — and this revolution will inevitably create “AI winners and losers”.

True winners, he argued, will be those who embrace AI not as a replacement for human insight but as a tool that strengthens publishing’s core mission: connecting people through knowledge. The key is balance, North said, using AI to enhance creativity without diminishing human judgment or critical thinking.

This vision set the tone for the event where the Association for International Publishing Education was officially launched — the world’s first global alliance dedicated to advancing publishing education through international collaboration.

Unveiled at the conference cohosted by the Beijing Institute of Graphic Communication and the Publishers Association of China, the AIPE brings together nearly 50 member organizations with a mission to foster joint research, training, and innovation in publishing education.

Tian Zhongli, president of BIGC, stressed the need to anchor publishing education in ethics and humanistic values and reaffirmed BIGC’s commitment to building a global talent platform through AIPE.

BIGC will deepen academic-industry collaboration through AIPE to provide a premium platform for nurturing high-level, holistic, and internationally competent publishing talent, he added.

Zhang Xin, secretary of the CPC Committee at BIGC, emphasized that AIPE is expected to help globalize Chinese publishing scholarship, contribute new ideas to the industry, and cultivate a new generation of publishing professionals for the digital era.

Themed “Mutual Learning and Cooperation: New Ecology of International Publishing Education in the Digital Intelligence Era”, the conference also tackled a wide range of challenges and opportunities brought on by AI — from ethical concerns and content ownership to protecting human creativity and rethinking publishing values in higher education.

Wu Shulin, president of the Publishers Association of China, cautioned that while AI brings major opportunities, “we must not overlook the ethical and security problems it introduces”.

Catriona Stevenson, deputy CEO of the UK Publishers Association, echoed this sentiment. She highlighted how British publishers are adopting AI to amplify human creativity and productivity, while calling for global cooperation to protect intellectual property and combat copyright infringement by AI tools.

The conference aimed to explore innovative pathways for publishing industry and education reform, discuss emerging technological trends, advance higher education philosophies and talent development models, promote global academic exchange and collaboration, and empower knowledge production and dissemination through publishing education in the digital intelligence era.

Lavender’s Role in Targeting Civilians in Gaza

The world today is war-torn, from Russia’s attacks on Ukraine to Israel’s devastation in Palestine and now in Iran, putting all of West Asia in jeopardy.

The geometry of war has completely changed, from the Blitzkrieg (lightning war) of World War II to the sophisticated, technologically driven missiles of today’s armed conflicts. The most recent wars are being driven by the use of artificial intelligence (AI) to narrow down potential targets.

Multiple pieces of evidence indicate that Israeli forces have deployed novel AI-driven targeting tools in Gaza. One system, nicknamed “Lavender”, is an AI-enabled database that assigns risk scores to Gazans based on patterns in their personal data (communications, social connections) to identify “suspected Hamas or Islamic Jihad operatives”. Lavender reportedly flagged up to 37,000 Palestinians as potential targets early in the war.

A second system, “Where is Daddy?”, uses mobile phone location tracking to notify operators when a marked individual is at home. The initial strikes guided by these automatically generated target lists hit individuals in their private homes on the pretext of targeting terrorists, yet innocent women and young children also lost their lives in the attacks. The technology was developed as a replacement for the human acumen and strategy traditionally used to identify and target suspects.

According to a Human Rights Watch report (2024), around 70 per cent of those killed were women and children. A United Nations agency has also verified the details of 8,119 victims killed in Gaza from November 2023 to April 2024; its report showed that 44 per cent of the victims were children and 26 per cent were women. Humans are left at the mercy of sophisticated technology that identifies suspected militants and targets them.

The use of AI-based tools like “Lavender” and “Where is Daddy?” by Israel in its war against Palestine raises serious questions about countries’ commitment to the international legal framework and the ethics of war. Such sophisticated AI-targeting tools put weaker nations at the mercy of powerful ones, which can use these technologies to inflict suffering on non-combatants.

International humanitarian law (IHL) and international human rights law (IHRL) play a critical yet complex role where AI is deployed in conflict situations such as the Israel-Palestine conflict. AI-based warfare of this kind violates the framework’s core principles of distinction, proportionality and precaution.

The AI systems do not inherently know who is a combatant. Investigations report that Lavender had an error rate on the order of 10 per cent and routinely flagged non-combatants (police, aid workers, people who merely shared a name with militants). The reported practice of pre-authorising dozens of civilian deaths per strike grossly violates the proportionality rule.

An attack is illegal if the incidental civilian loss is “excessive” in relation to the military gain. For example, one source noted that each kill-list target came with an allowed “collateral damage degree” (often 15 to 20 civilian deaths) regardless of the specific context. Allowing such broad civilian loss per target contradicts IHL’s core balancing test (ICRC Rule 14).

The AI-driven process has eliminated normal safeguards (verification, warnings, retargeting). IHRL continues to apply alongside IHL in armed conflict contexts. In particular, the right to life (ICCPR Article 6) obliges states to prevent arbitrary killing.

The International Court of Justice has held that while the right to life remains in force during war, an “arbitrary deprivation of life” must be assessed by reference to the laws of war. In practice, this means that IHL’s rules become the benchmark for whether killings are lawful.

However, even accepting lex specialis (the specific law overriding the general), the reported AI strikes raise grave human rights concerns, especially regarding the right to life (ICCPR Art. 6) and the right to privacy (ICCPR Art. 17).

The ethics of war, called jus in bello in legal parlance and based on the principles of proportionality (the anticipated moral cost of war) and differentiation (between combatants and non-combatants), have also been violated. Article 51(5) of Additional Protocol I (1977) to the Geneva Conventions provides that an attack is disproportionate, and thus indiscriminate, if it “may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated”.

The Israel Defense Forces have been using AI indiscriminately to select targets. Though aimed at militants, these strikes have extended to non-military targets as well, causing casualties among civilians and non-combatants. The methods used in a war are like a trigger: once pulled, they are extremely difficult to retract and reconcile. Such unethical actions create more fault lines, and any subsequent attempt at peace resolution or mediation becomes extremely difficult.

The documented features of systems like Lavender and Where is Daddy (automated kill lists, minimal human oversight, fixed civilian casualty “quotas” and the use of imprecise munitions against suspects in their homes) appear to contravene these legal and ethical principles.

Unless rigorously constrained, such tools risk turning warfare into the arbitrary slaughter of civilians, undermining the core humanitarian goals of IHL and the ethics of war. It is therefore extremely important to rein in the unregulated use of AI in perpetrating war crimes, as it undermines the legal and ethical considerations of humanity at large.


