Ethics & Policy

ALECSO issues English version of AI Ethics Code

Tunis: The Arab League Educational, Cultural and Scientific Organization (ALECSO) has issued the official English version of the ALECSO AI Ethics Code, which was adopted earlier this year.

The Code features fundamental principles, including the preservation of human dignity, justice, inclusiveness, environmental sustainability, the protection of cultural heritage, transparency, accountability, and technological sovereignty.

It also provides detailed guidelines for the use of AI in education, culture, and scientific research, with a focus on enhancing cooperation between Arab countries and aligning with international standards.

The Code represents a comprehensive ethical and regulatory framework for the responsible use of AI technologies in education, culture, and science in the Arab world.

It reflects the organization’s commitment to promoting human values, preserving cultural identity, and ensuring that AI technologies contribute to achieving sustainable development, while respecting privacy, human rights, and cultural diversity.

The Code was prepared in Arabic with the participation of experts from the organization’s member states and was translated into English by the Arab Center for Arabization, Translation, Authorship, and Publication in Damascus.

This allows a wider international audience of decision-makers, researchers, and technology stakeholders to learn about ALECSO’s vision and principles regarding ethical artificial intelligence. 




OpenAI Merges Teams to Boost ChatGPT Ethics and Cut Biases

In a move that underscores the evolving priorities within artificial intelligence development, OpenAI has announced a significant reorganization of its Model Behavior team, the group responsible for crafting the conversational styles and ethical guardrails of models like ChatGPT. According to an internal memo obtained by TechCrunch, this compact unit of about 14 researchers is being folded into the larger Post Training team, which focuses on refining AI models after their initial training phases. The shift, effective immediately, sees the team’s leader, Lilian Weng, transitioning to a new role within the company, while the group now reports to Max Schwarzer, head of Post Training.

This restructuring comes amid growing scrutiny over how AI systems interact with users, particularly in balancing helpfulness with honesty. The Model Behavior team has been instrumental in addressing issues like sycophancy—where models excessively affirm user opinions—and mitigating political biases in responses. Insiders suggest the integration aims to streamline these efforts, embedding personality shaping directly into the core refinement process rather than treating it as a separate silo.

Strategic Alignment in AI Development

OpenAI’s decision reflects broader industry trends toward more cohesive AI development pipelines, where behavioral tuning is not an afterthought but a foundational element. Recent user feedback on GPT-5, as highlighted in posts on X (formerly Twitter), has pointed to overly formal or detached interactions, prompting tweaks to make ChatGPT feel “warmer and friendlier” without veering into unwarranted flattery. For instance, OpenAI’s own announcements on the platform in August 2025 detailed the introduction of new chat personalities like Cynic, Robot, Listener, and Nerd, available as opt-in options in settings.

These changes build on earlier experiments, such as A/B testing different personality styles noted by users on X as far back as April 2025. Publications like WebProNews report that the reorganization is partly driven by GPT-5 feedback, emphasizing reductions in sycophantic tendencies and enhancements in engagement through advanced reasoning and safety features.

Implications for Ethical AI and User Experience

The merger could accelerate OpenAI’s ability to iterate on model behaviors, potentially leading to more context-aware interactions that better align with ethical standards. As detailed in a BitcoinWorld analysis, this realignment is crucial for influencing user experience and ethical frameworks, especially in sectors like cryptocurrency and blockchain where AI’s role is expanding. The team’s past work on models since GPT-4 has reduced harmful outputs by significant margins, with one X post claiming a 78% drop in certain biases, though such figures remain unverified by OpenAI.

Critics, however, worry that consolidating teams might dilute specialized focus on nuanced issues like bias management. Industry observers on X have debated the “sycophancy trap,” where tuning for truthfulness risks alienating casual users who prefer comforting responses, creating a game-theory dilemma for developers.

Leadership Shifts and Future Directions

Lilian Weng’s departure from the team leadership marks a notable transition; her expertise in AI safety has been pivotal, and her new project remains undisclosed. An OpenAI spokesperson confirmed to StartupNews.fyi that the move is designed to foster closer collaboration, positioning the company to lead in the evolution of human-AI dialogue.

Looking ahead, this reorganization signals OpenAI’s bet on integrated teams to handle the complexities of next-generation AI. With GPT-5 already incorporating subtle warmth adjustments based on internal tests, as per OpenAI’s X updates, the focus is on genuine, professional engagement that avoids pitfalls like ungrounded praise. For industry insiders, this could mean faster deployment of features that make AI feel more human-like, while upholding values of honesty and utility.

Broader Industry Ripple Effects

The changes at OpenAI are likely to influence competitors, as the quest for balanced AI personalities intensifies. Reports from NewsBytes and Bitget News emphasize how this restructuring enhances post-training interactions, potentially setting new benchmarks for AI ethics. User sentiment on X, including discussions of model selectors and capacity limits, suggests ongoing refinements will be key to retaining loyalty.

Ultimately, as OpenAI navigates these internal shifts, the emphasis on personality could redefine how we perceive and interact with AI, blending technical prowess with empathetic design in ways that resonate across applications from everyday queries to complex problem-solving.




$40 Million Series B Raised To Drive Ethical AI And Empower Publishers

ProRata, a company committed to building AI solutions that honor and reward the work of content creators, has announced the close of a $40 million Series B funding round. The round was led by Touring Capital, with participation from a growing network of investors who share ProRata’s vision for a more equitable and transparent AI ecosystem. This latest investment brings the company’s total funding to over $75 million since its founding just last year, and it marks a significant step forward in its mission to reshape how publishers engage with generative AI.

The company also announced the launch of Gist Answers, ProRata’s new AI-as-a-service platform designed to give publishers direct control over how AI interacts with their content. Gist Answers allows media organizations to embed custom AI search, summarization, and recommendation tools directly into their websites and digital properties. Rather than watching their content be scraped and repurposed without consent, publishers can now offer AI-powered experiences on their own terms—driving deeper engagement, longer user sessions, and more meaningful interactions with their audiences.

The platform has already attracted early-access partners representing over 100 publications, a testament to the growing demand for AI tools that respect editorial integrity and support sustainable business models. Gist Answers is designed to be flexible and intuitive, allowing publishers to tailor the AI experience to their brand’s voice and editorial standards. It’s not just about delivering answers—it’s about creating a richer, more interactive layer of discovery that keeps users engaged and informed.

Beyond direct integration, ProRata is also offering publishers the opportunity to license their content to inform Gist Answers across third-party destinations. More than 700 high-quality publications around the world have already joined this initiative, contributing to a growing network of licensed content that powers AI responses with verified, attributable information. This model is underpinned by ProRata’s proprietary content attribution technology, which ensures that every piece of content used by the AI is properly credited and compensated. In doing so, the company is building a framework where human creativity is not only preserved but actively rewarded in the AI economy.

Gist Answers is designed to work seamlessly with Gist Ads, ProRata’s innovative advertising platform that transforms AI-generated responses into premium ad inventory. By placing native, conversational ads adjacent to AI answers, Gist Ads creates a format that aligns with user intent and delivers strong performance for marketers. For publishers, this means new revenue streams that are directly tied to the value of their content and the engagement it drives.

ProRata’s approach stands in stark contrast to the extractive models that have dominated the early days of generative AI. The company was founded on the belief that the work of journalists, creators, and publishers is not just data to be mined—it’s a vital source of knowledge and insight that deserves recognition, protection, and compensation. By building systems that prioritize licensing over scraping, transparency over opacity, and partnership over exploitation, ProRata is proving that AI can be both powerful and principled.

How the funding will be used: With the Series B funding, ProRata plans to scale its team, expand its product offerings, and deepen its relationships with publishers and content creators around the world. The company is focused on building tools that are not only technologically advanced but also aligned with the values of the people who produce the content that fuels AI. As generative AI continues to evolve, ProRata is positioning itself as a trusted partner for publishers seeking to navigate this new landscape with confidence and integrity.

KEY QUOTES:

“Search has always shaped how people discover knowledge, but for too long publishers have been forced to give that power away. Gist Answers changes that dynamic, bringing AI search directly to their sites, where it deepens engagement, restores control, and opens entirely new paths for discovery.”

Bill Gross, CEO and founder of ProRata

“Generative AI is reshaping search and digital advertising, creating an opportunity for a new category of infrastructure to compensate content creators whose work powers the answers we are relying on daily. ProRata is addressing this inflection point with a market-neutral model designed to become the default platform for attribution and fair monetization across the ecosystem. We believe the shift toward AI-native search experiences will unlock greater value for advertisers, publishers, and consumers alike.”

Nagraj Kashyap, General Partner, Touring Capital

“As a publisher, our priority is making sure our journalism reaches audiences in trusted ways. By contributing our content to the Gist network, we know it’s being used ethically, with full credit, while also helping adopters of Gist Answers deliver accurate, high-quality responses to their readers.”

Nicholas Thompson, CEO of The Atlantic

“The role of publishers in the AI era is to ensure that trusted journalism remains central to how people search and learn. By partnering with ProRata, we’re showing how an established brand can embrace new technology like Gist Answers to deepen engagement and demonstrate the enduring value of quality journalism.”

Andrew Perlman, CEO of Recurrent, owner of Popular Science

“Search has always been critical to how our readers find and interact with content. With Gist Answers, our audience can engage directly with us and get trusted answers sourced from our reporting, strengthened by content from a vetted network of international media outlets. Engagement is higher, and we’re able to explore new revenue opportunities that simply didn’t exist before.”

Jeremy Gulban, CEO of CherryRoad Media

“We’re really excited to be partnering with ProRata. At Arena, we’re always looking for unique and innovative ways to better serve our audience, and Gist Answers allows us to adapt to new technology in an ethical way.”

Paul Edmondson, CEO of The Arena Group, owner of Parade and Athlon Sports




Michael Lissack’s New Book “Questioning Understanding”

Image: https://www.globalnewslines.com/uploads/2025/09/06b23a7a1cd3a9eec5188c16c0896a60.jpg
Photo Courtesy: Michael Lissack

“Understanding is not a destination we reach, but a spiral we climb: each new question changes the view, and each new view reveals questions we couldn’t see before.”

Michael Lissack, Executive Director of the Second Order Science Foundation, cybernetics expert, and professor at Tongji University, has released his new book, “Questioning Understanding [https://www.amazon.com/Questioning-Understanding-Michael-Lissack/dp/B0FC1S1LYL].” Now available, the book explores a fresh perspective on scientific inquiry by encouraging readers to reconsider the assumptions that shape how we understand the world.

A Thought-Provoking Approach to Scientific Inquiry

In “Questioning Understanding,” Lissack introduces the concept of second-order science, a framework that examines the uncritically examined presuppositions (UCEPs) that often underlie scientific practices. These assumptions, while sometimes essential for scientific work, may also constrain our ability to explore complex phenomena fully. Lissack suggests that by engaging with these assumptions critically, there could be potential for a deeper understanding of the scientific process and its role in advancing human knowledge.

The book features an innovative tête-bêche format, offering two entry points for readers: “Questioning → Understanding” or “Understanding → Questioning.” This structure reflects the dynamic relationship between knowledge and inquiry, highlighting how questioning and understanding are interconnected and reciprocal. By offering two different entry paths, Lissack emphasizes that the journey of scientific inquiry is not linear; it is a continuous process of revisiting previous assumptions and refining the lens through which we view the world.

The Battle Against Sloppy Science

Lissack’s work took on new urgency during the COVID-19 pandemic, when he witnessed an explosion of what he calls “slodderwetenschap” (Dutch for “sloppy science”), characterized by shortcuts, oversimplifications, and the proliferation of “truthies” (assertions that feel true regardless of their validity).

Working with colleague Brenden Meagher, Lissack identified how sloppy science undermines public trust through what he calls the “3Ts”: Truthies, TL;DR (oversimplification), and TCUSI (taking complex understanding for simple information). Their research revealed how “truthies spread rampantly during the pandemic, damaging public health communication” through “biased attention, confirmation bias, and confusion between surface information and deeper meanings.”

“COVID-19 demonstrated that good science seldom comes from taking shortcuts or relying on ‘truthies,'” Lissack notes.

“Good science, instead, demands that we continually ask what about a given factoid, label, category, or narrative affords its meaning, and then base further inquiry on the assumptions, contexts, and constraints so revealed.”

AI as the New Frontier of Questioning

As AI technologies, including Large Language Models (LLMs), continue to influence research and scientific methods, Lissack’s work has become increasingly relevant. In his book “Questioning Understanding”, Lissack presents a thoughtful examination of AI in scientific research, urging a responsible approach to its use. He discusses how AI tools may support scientific progress but also notes that their potential limitations can undermine the rigor of research if used uncritically.

“AI tools have the capacity to both support and challenge the quality of scientific inquiry, depending on how they are employed,” says Lissack.

“It is essential that we engage with AI systems as partners in discovery-through reflective dialogue-rather than relying on them as simple solutions to complex problems.”

He stresses that while AI can significantly accelerate research, it is still important for human researchers to remain critically engaged with the data and models produced, questioning the assumptions encoded within AI systems.

With over 2,130 citations on Google Scholar, Lissack’s work continues to shape discussions on how knowledge is created and applied in modern research. His innovative ideas have influenced numerous fields, from cybernetics to the integration of AI in scientific inquiry.

Recognition and Global Impact

Lissack’s contributions to the academic world have earned him significant recognition. He was named among “Wall Street’s 25 Smartest Players” by Worth Magazine and included in the “100 Americans Who Most Influenced How We Think About Money.” His efforts extend beyond personal recognition; he advocates for a research landscape that emphasizes integrity, critical thinking, and ethical foresight in the application of emerging technologies, ensuring that these tools foster scientific progress without compromising standards.

About “Questioning Understanding”

“Questioning Understanding” provides an in-depth exploration of the assumptions that guide scientific inquiry, urging readers to challenge their perspectives. Designed as a tête-bêche edition (two books in one, with dual covers and no single entry point), it asks readers to choose where to begin: “Questioning → Understanding” or “Understanding → Questioning.” This innovative format reflects the recursive relationship between inquiry and insight at the heart of his work.

As Michael explains: “Understanding is fluid… if understanding is a river, questions shape the canyon the river flows in.” The book demonstrates how our assumptions about knowledge creation itself shape what we can discover, making the case for what he calls “reflexive scientific practice”: science that consciously examines its own presuppositions.

Image: https://www.globalnewslines.com/uploads/2025/09/a01d49d4c742e01bea6bfeb0a16f3132.jpg
Photo Courtesy: Michael Lissack

About Michael Lissack

Michael Lissack is a globally recognized figure in second-order science, cybernetics, and AI ethics. He is the Executive Director of the Second Order Science Foundation and a Professor of Design and Innovation at Tongji University in Shanghai. Lissack has served as President of the American Society for Cybernetics and is widely acknowledged for his contributions to the field of complexity science and the promotion of rigorous, ethical research practices.

Building on foundational work in cybernetics and complexity science, Lissack developed the framework of Uncritically Examined Presuppositions (UCEPs): nine key dimensions, including context dependence, quantitative indexicality, and Fundierung dependence, that act as “enabling constraints” in scientific inquiry. These hidden assumptions simultaneously make scientific work possible and limit what can be observed or understood.

As Lissack explains: “Second order science examines variations in values assumed for these UCEPs and looks at the resulting impacts on related scientific claims. Second order science reveals hidden issues, problems, and assumptions which all too often escape the attention of the practicing scientist.”

Michael Lissack’s books are available through major retailers. Learn more about his work at lissack.com [https://www.lissack.com/] and the Second Order Science Foundation at secondorderscience.org [https://www.secondorderscience.org/].
Media Contact
Company Name: Digital Networking Agency
Email: Send Email [http://www.universalpressrelease.com/?pr=michael-lissacks-new-book-questioning-understanding-explores-the-future-of-scientific-inquiry-and-ai-ethics]
Phone: +1 571 233 9913
Country: United States
Website: https://www.digitalnetworkingagency.com/

Legal Disclaimer: Information contained on this page is provided by an independent third-party content provider. GetNews makes no warranties or responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you are affiliated with this article or have any complaints or copyright issues related to this article and would like it to be removed, please contact retract@swscontact.com

This release was published on openPR.


