Ethics & Policy
Ethics and understanding need to be part of AI – Sister Hosea Rupprecht, Daughter of St. Paul

It’s back-to-school time. As students of all ages return to the classroom, educators face rapid advancements in artificial intelligence (AI) technology that can help or hinder their students’ learning.
As educators and learners who follow the Lord, we want to use the AI tools that human intelligence has provided according to the values we hold dear. There are three main areas for students to remember when they feel the tug to use AI for their schoolwork.
1) Always use AI in an ethical manner. This means being honest and responsible when it comes to using AI for school. Know your school’s AI policy and follow it.
This is acting in a responsible manner. Just as schools have had policies about cheating, plagiarism or bullying for years, they now have policies about what is and is not acceptable when it comes to the use of AI tools.
Be transparent about your use of AI. For example, I knew what I wanted to say when I sat down to write this article, but I also asked Gemini (Google’s AI chatbot) for an outline of things students should keep in mind when using AI. I wanted to make sure I didn’t miss anything important. So, just as you cite sources from books, articles or websites when you use them, there are standard ways of citing AI-generated content in academic settings.
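One common pattern, based on APA style’s published guidance for generative AI, treats the AI’s developer as the author, so a citation for a Gemini query would look something like this: Google. (2025). Gemini [Large language model]. https://gemini.google.com. Check which format your school or style guide prefers before you submit.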
AI can never replace you. It may be a good starting point to help you get your work organized or even do some editing once you have a first draft, but no AI chatbot has your unique voice. It can’t replace your own thought process, how you analyze a problem or articulate your thoughts about a subject.
2) Good ways to use AI tools (if the policy permits)
My favorite use of AI is to outline. We’ve all known the intimidation of staring at a blank screen just waiting for the words and ideas to flow. AI is a good way to get a kickstart on your work. It can help you brainstorm, but putting your work together should be all yours.
AI can help you when you’re stumped. Perhaps a certain scientific concept you’re studying has your brain in knots. AI can help untangle the mystery. There are specific AI systems designed for education that can act as a tutor, leading you out of the intellectual muck by prompting you rather than simply providing an answer.
Once you have your project or paper done, AI can assist you in checking it over, making sure your grammar is up to par or giving you suggestions on how to improve your writing or strengthen your point of view.
When I used Gemini to make sure I didn’t miss any important tips for this article, this is what I saw at the bottom of the screen: “Gemini can make mistakes, so double-check it.” Other chatbots have the same kind of disclaimer. Take what AI generates based on your prompts with a grain of salt until you check it out. Does what it’s given you make sense? Are there facts that you need to verify?
AI tools are “trained” on (or pull from) vast amounts of data floating around online, and sometimes that data is incorrect. Always cross-reference what a chatbot tells you with trusted sources.
3) Realize that AI can’t do everything. AI systems have limitations. I was giving a talk about AI to a group of our sisters and gave them time to experiment. One sister asked a question using Magisterium AI, which pulls from the official documents of the church. She didn’t get a satisfactory answer because official church teaching can be vague when certain issues are too complex to be explored from every side in a papal encyclical, for example.
Know the limitations of the AI system you are using. AI can’t have insight or abstract thought the way we humans do; it can only simulate human analysis and reasoning.
Be protective of your data. Never put anything personal into a chatbot because that information becomes part of what the bot learns from. The same goes for confidential information. When in doubt, don’t give it to an AI chatbot!
The main thing for both students and teachers to remember is that AI tools are meant to help and enhance learning, not to bypass the educational experience and the growth in human understanding that comes with it.
The recent document from the Vatican addressing AI, “Antiqua et Nova,” states, “Education in the use of forms of artificial intelligence should aim above all at promoting critical thinking.”
“Users of all ages, but especially the young, need to develop a discerning approach to the use of data and content collected on the web or introduced by artificial intelligence systems,” it continues. “Schools, universities, and scientific societies are challenged to help students and professionals to grasp the social and ethical aspects of the development and use of technology.”
Let’s pray that students and teachers will be inspired by the Holy Spirit to always use AI from hearts and minds steeped in the faith of Jesus Christ.
Sister Hosea Rupprecht, a Daughter of St. Paul, is the associate director of the Pauline Center for Media Studies.
$40 Million Series B Raised To Drive Ethical AI And Empower Publishers

ProRata.ai, a company committed to building AI solutions that honor and reward the work of content creators, has announced the close of a $40 million Series B funding round. The round was led by Touring Capital, with participation from a growing network of investors who share ProRata’s vision for a more equitable and transparent AI ecosystem. This latest investment brings the company’s total funding to over $75 million since its founding just last year, and it marks a significant step forward in its mission to reshape how publishers engage with generative AI.
The company also announced the launch of Gist Answers, ProRata’s new AI-as-a-service platform designed to give publishers direct control over how AI interacts with their content. Gist Answers allows media organizations to embed custom AI search, summarization, and recommendation tools directly into their websites and digital properties. Rather than watching their content be scraped and repurposed without consent, publishers can now offer AI-powered experiences on their own terms—driving deeper engagement, longer user sessions, and more meaningful interactions with their audiences.
The platform has already attracted early-access partners representing over 100 publications, a testament to the growing demand for AI tools that respect editorial integrity and support sustainable business models. Gist Answers is designed to be flexible and intuitive, allowing publishers to tailor the AI experience to their brand’s voice and editorial standards. It’s not just about delivering answers—it’s about creating a richer, more interactive layer of discovery that keeps users engaged and informed.
Beyond direct integration, ProRata is also offering publishers the opportunity to license their content to inform Gist Answers across third-party destinations. More than 700 high-quality publications around the world have already joined this initiative, contributing to a growing network of licensed content that powers AI responses with verified, attributable information. This model is underpinned by ProRata’s proprietary content attribution technology, which ensures that every piece of content used by the AI is properly credited and compensated. In doing so, the company is building a framework where human creativity is not only preserved but actively rewarded in the AI economy.
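ProRata has not published the internals of its attribution technology, but the pro-rata principle behind it is easy to illustrate: revenue earned by an AI answer is split among sources in proportion to how much each contributed. The Python sketch below is a minimal, hypothetical illustration; the function, weights, and revenue figure are assumptions for exposition, not ProRata’s actual system.

```python
# Hypothetical sketch of pro-rata revenue attribution. ProRata's
# real scoring and payout logic are proprietary; this only shows
# the proportional-split idea.

def prorata_payouts(attribution_weights: dict[str, float],
                    answer_revenue: float) -> dict[str, float]:
    """Split one answer's revenue across sources, proportional
    to each source's attribution weight."""
    total = sum(attribution_weights.values())
    if total == 0:
        return {source: 0.0 for source in attribution_weights}
    return {source: answer_revenue * weight / total
            for source, weight in attribution_weights.items()}

# Example: an answer drew 60%, 30%, and 10% of its content from
# three outlets, and the adjacent ad impression earned $0.05.
weights = {"outlet_a": 0.6, "outlet_b": 0.3, "outlet_c": 0.1}
print(prorata_payouts(weights, answer_revenue=0.05))
# outlet_a ≈ $0.03, outlet_b ≈ $0.015, outlet_c ≈ $0.005
```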
Gist Answers is designed to work seamlessly with Gist Ads, ProRata’s innovative advertising platform that transforms AI-generated responses into premium ad inventory. By placing native, conversational ads adjacent to AI answers, Gist Ads creates a format that aligns with user intent and delivers strong performance for marketers. For publishers, this means new revenue streams that are directly tied to the value of their content and the engagement it drives.
ProRata’s approach stands in stark contrast to the extractive models that have dominated the early days of generative AI. The company was founded on the belief that the work of journalists, creators, and publishers is not just data to be mined—it’s a vital source of knowledge and insight that deserves recognition, protection, and compensation. By building systems that prioritize licensing over scraping, transparency over opacity, and partnership over exploitation, ProRata is proving that AI can be both powerful and principled.
How the funding will be used: With the Series B funding, ProRata plans to scale its team, expand its product offerings, and deepen its relationships with publishers and content creators around the world. The company is focused on building tools that are not only technologically advanced but also aligned with the values of the people who produce the content that fuels AI. As generative AI continues to evolve, ProRata is positioning itself as a trusted partner for publishers seeking to navigate this new landscape with confidence and integrity.
KEY QUOTES:
“Search has always shaped how people discover knowledge, but for too long publishers have been forced to give that power away. Gist Answers changes that dynamic, bringing AI search directly to their sites, where it deepens engagement, restores control, and opens entirely new paths for discovery.”
Bill Gross, CEO and founder of ProRata
“Generative AI is reshaping search and digital advertising, creating an opportunity for a new category of infrastructure to compensate content creators whose work powers the answers we are relying on daily. ProRata is addressing this inflection point with a market-neutral model designed to become the default platform for attribution and fair monetization across the ecosystem. We believe the shift toward AI-native search experiences will unlock greater value for advertisers, publishers, and consumers alike.”
Nagraj Kashyap, General Partner, Touring Capital
“As a publisher, our priority is making sure our journalism reaches audiences in trusted ways. By contributing our content to the Gist network, we know it’s being used ethically, with full credit, while also helping adopters of Gist Answers deliver accurate, high-quality responses to their readers.”
Nicholas Thompson, CEO of The Atlantic
“The role of publishers in the AI era is to ensure that trusted journalism remains central to how people search and learn. By partnering with ProRata, we’re showing how an established brand can embrace new technology like Gist Answers to deepen engagement and demonstrate the enduring value of quality journalism.”
Andrew Perlman, CEO of Recurrent, owner of Popular Science
“Search has always been critical to how our readers find and interact with content. With Gist Answers, our audience can engage directly with us and get trusted answers sourced from our reporting, strengthened by content from a vetted network of international media outlets. Engagement is higher, and we’re able to explore new revenue opportunities that simply didn’t exist before.”
Jeremy Gulban, CEO of CherryRoad Media
“We’re really excited to be partnering with ProRata. At Arena, we’re always looking for unique and innovative ways to better serve our audience, and Gist Answers allows us to adapt to new technology in an ethical way.”
Paul Edmondson, CEO of The Arena Group, owner of Parade and Athlon Sports
Michael Lissack’s New Book “Questioning Understanding” Explores the Future of Scientific Inquiry and AI Ethics

“Understanding is not a destination we reach, but a spiral we climb—each new question changes the view, and each new view reveals questions we couldn’t see before.”
Michael Lissack, Executive Director of the Second Order Science Foundation, cybernetics expert, and professor at Tongji University, has released his new book, “Questioning Understanding.” Now available, the book explores a fresh perspective on scientific inquiry by encouraging readers to reconsider the assumptions that shape how we understand the world.
A Thought-Provoking Approach to Scientific Inquiry
In “Questioning Understanding,” Lissack introduces the concept of second-order science, a framework that examines the uncritically examined presuppositions (UCEPs) that often underlie scientific practices. These assumptions, while sometimes essential for scientific work, may also constrain our ability to explore complex phenomena fully. Lissack suggests that by engaging with these assumptions critically, there could be potential for a deeper understanding of the scientific process and its role in advancing human knowledge.
The book features an innovative tête-bêche format, offering two entry points for readers: “Questioning → Understanding” or “Understanding → Questioning.” This structure reflects the dynamic relationship between knowledge and inquiry, aiming to highlight how questioning and understanding are interconnected and reciprocal. By offering two different entry paths, Lissack emphasizes that the journey of scientific inquiry is not linear. Instead, it’s a continuous process of revisiting previous assumptions and refining the lens through which we view the world.
The Battle Against Sloppy Science
Lissack’s work took on new urgency during the COVID-19 pandemic, when he witnessed an explosion of what he calls “slodderwetenschap”—Dutch for “sloppy science”—characterized by shortcuts, oversimplifications, and the proliferation of “truthies” (assertions that feel true regardless of their validity).
Working with colleague Brenden Meagher, Lissack identified how sloppy science undermines public trust through what he calls the “3Ts”—Truthies, TL;DR (oversimplification), and TCUSI (taking complex understanding for simple information). Their research revealed how “truthies spread rampantly during the pandemic, damaging public health communication” through “biased attention, confirmation bias, and confusion between surface information and deeper meanings.”
“COVID-19 demonstrated that good science seldom comes from taking shortcuts or relying on ‘truthies,'” Lissack notes.
“Good science, instead, demands that we continually ask what about a given factoid, label, category, or narrative affords its meaning—and then to base further inquiry on the assumptions, contexts, and constraints so revealed.”
AI as the New Frontier of Questioning
As AI technologies, including Large Language Models (LLMs), continue to influence research and scientific methods, Lissack’s work has become increasingly relevant. In “Questioning Understanding,” Lissack presents a thoughtful examination of AI in scientific research, urging a responsible approach to its use. He discusses how AI tools may support scientific progress but also notes that their potential limitations can undermine the rigor of research if used uncritically.
“AI tools have the capacity to both support and challenge the quality of scientific inquiry, depending on how they are employed,” says Lissack.
“It is essential that we engage with AI systems as partners in discovery—through reflective dialogue—rather than relying on them as simple solutions to complex problems.”
He stresses that while AI can significantly accelerate research, it is still important for human researchers to remain critically engaged with the data and models produced, questioning the assumptions encoded within AI systems.
With over 2,130 citations on Google Scholar, Lissack’s work continues to shape discussions on how knowledge is created and applied in modern research. His innovative ideas have influenced numerous fields, from cybernetics to the integration of AI in scientific inquiry.
Recognition and Global Impact
Lissack’s contributions to the academic world have earned him significant recognition. He was named among “Wall Street’s 25 Smartest Players” by Worth Magazine and included in the “100 Americans Who Most Influenced How We Think About Money.” His efforts extend beyond personal recognition; he advocates for a research landscape that emphasizes integrity, critical thinking, and ethical foresight in the application of emerging technologies, ensuring that these tools foster scientific progress without compromising standards.
About “Questioning Understanding”
“Questioning Understanding” provides an in-depth exploration of the assumptions that guide scientific inquiry, urging readers to challenge their perspectives. Designed as a tête-bêche edition—two books in one with dual covers and no single entry point—it forces readers to choose where to begin: “Questioning → Understanding” or “Understanding → Questioning.” This innovative format reflects the recursive relationship between inquiry and insight at the heart of his work.
As Michael explains: “Understanding is fluid… if understanding is a river, questions shape the canyon the river flows in.” The book demonstrates how our assumptions about knowledge creation itself shape what we can discover, making the case for what he calls “reflexive scientific practice”—science that consciously examines its own presuppositions.
About Michael Lissack
Michael Lissack is a globally recognized figure in second-order science, cybernetics, and AI ethics. He is the Executive Director of the Second Order Science Foundation and a Professor of Design and Innovation at Tongji University in Shanghai. Lissack has served as President of the American Society for Cybernetics and is widely acknowledged for his contributions to the field of complexity science and the promotion of rigorous, ethical research practices.
Building on foundational work in cybernetics and complexity science, Lissack developed the framework of UnCritically Examined Presuppositions (UCEPs)—nine key dimensions, including context dependence, quantitative indexicality, and fundierung dependence, that act as “enabling constraints” in scientific inquiry. These hidden assumptions simultaneously make scientific work possible while limiting what can be observed or understood.
As Lissack explains: “Second order science examines variations in values assumed for these UCEPs and looks at the resulting impacts on related scientific claims. Second order science reveals hidden issues, problems, and assumptions which all too often escape the attention of the practicing scientist.”
Michael Lissack’s books are available through major retailers. Learn more about his work at lissack.com and the Second Order Science Foundation at secondorderscience.org.
Media Contact
Company Name: Digital Networking Agency
Phone: +1 571 233 9913
Country: United States
Website: https://www.digitalnetworkingagency.com/
A Tipping Point in AI Ethics and Intellectual Property Markets

The recent $1.5 billion settlement between Anthropic and a coalition of book authors marks a watershed moment in the AI industry’s reckoning with intellectual property law and ethical data practices [1]. This landmark case, rooted in allegations that Anthropic trained its models using pirated books from sites like LibGen, has forced a reevaluation of how AI firms source training data—and what this means for investors seeking to capitalize on the next phase of AI innovation.
Legal Uncertainty and Ethical Clarity
Judge William Alsup’s June 2025 ruling clarified a critical distinction: while training AI on legally purchased books may qualify as transformative fair use, using pirated copies is “irredeemably infringing” [2]. This nuanced legal framework has created a dual challenge for AI developers. On one hand, it legitimizes the use of AI for creative purposes if data is lawfully acquired. On the other, it exposes companies to significant liability if their data pipelines lack transparency. For investors, this duality underscores the growing importance of ethical data sourcing as a competitive differentiator.
The settlement also highlights a broader industry trend: the rise of intermediaries facilitating data licensing. As noted by ApplyingAI, new platforms are emerging to streamline transactions between publishers and AI firms, reducing friction in a market that could see annual licensing costs reach $10 billion by 2030 [2]. This shift benefits companies with the infrastructure to navigate complex licensing ecosystems.
Strategic Investment Opportunities
The Anthropic case has accelerated demand for AI firms that prioritize ethical data practices. Several companies have already positioned themselves as leaders in this space:
- Apple (AAPL): The company’s on-device processing and differential privacy tools exemplify a user-centric approach to data ethics. Its recent AI ethics guidelines, emphasizing transparency and bias mitigation, align with regulatory expectations [1].
- Salesforce (CRM): Through its Einstein Trust Layer and academic collaborations, Salesforce is addressing bias in enterprise AI. Its expanded Office of Ethical and Humane Use of Technology signals a long-term commitment to responsible innovation [1].
- Amazon Web Services (AMZN): AWS’s SageMaker governance tools and external AI advisory council demonstrate a proactive stance on compliance. The platform’s role in enabling content policies for generative AI makes it a key player in the post-Anthropic landscape [1].
- Nvidia (NVDA): By leveraging synthetic datasets and energy-efficient GPU designs, Nvidia is addressing both ethical and environmental concerns. Its NeMo Guardrails tool further ensures compliance in AI applications [1]; a minimal usage sketch follows this list.
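To make the guardrails point concrete, here is a minimal sketch of how Nvidia’s open-source NeMo Guardrails library is typically wired up. The model choice and bare-bones YAML are illustrative assumptions; a real deployment layers actual policy rails (topic restrictions, moderation flows) on top of this skeleton.

```python
# Minimal sketch using Nvidia's open-source NeMo Guardrails
# (pip install nemoguardrails). The model and YAML below are
# placeholder assumptions, not a production policy; running it
# also requires an OPENAI_API_KEY in the environment.
from nemoguardrails import LLMRails, RailsConfig

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(yaml_content=YAML_CONFIG)
rails = LLMRails(config)

# User messages and model responses now pass through whatever
# rails the config defines before reaching the caller.
response = rails.generate(messages=[
    {"role": "user", "content": "Summarize our content policy."}
])
print(response["content"])
```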
These firms represent a “responsible AI” cohort that is likely to outperform peers as regulatory scrutiny intensifies. Smaller players, meanwhile, face a steeper path: startups with limited capital may struggle to secure licensing deals, creating opportunities for consolidation or innovation in alternative data generation techniques [2].
Market Risks and Regulatory Horizons
While the settlement provides some clarity, it also introduces uncertainty. As The Daily Record notes, the lack of a definitive court ruling on AI copyright means companies must navigate a “patchwork” of interpretations [3]. This ambiguity favors firms with deep legal and financial resources, such as OpenAI and Google DeepMind, which can afford to negotiate high-cost licensing agreements [2].
Investors should also monitor legislative developments. Current copyright laws, designed for a pre-AI era, are ill-equipped to address the complexities of machine learning. A 2025 report by the Brookings Institution estimates that 60% of AI-related regulations will emerge at the state level in the next two years, creating a fragmented compliance landscape [unavailable source].
The Path Forward
The Anthropic settlement is not an endpoint but a catalyst. It has forced the industry to confront a fundamental question: Can AI innovation coexist with robust intellectual property rights? For investors, the answer lies in supporting companies that embed ethical practices into their core operations.
As the market evolves, three trends will shape the next phase of AI investment:
1. Synthetic Data Generation: Firms like Nvidia and Anthropic are pioneering techniques to create training data without relying on copyrighted material (a toy sketch follows this list).
2. Collaborative Licensing Consortia: Platforms that aggregate licensed content for AI training—such as those emerging post-settlement—will reduce transaction costs.
3. Regulatory Arbitrage: Companies that proactively align with emerging standards (e.g., the EU AI Act) will gain first-mover advantages in global markets.
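Of the three, synthetic data generation is the easiest to make concrete. The toy Python sketch below composes training examples from templates plus owned or factual material instead of scraped copyrighted text; real pipelines at firms like Nvidia or Anthropic are far more sophisticated and typically use generator models, so treat this strictly as an illustration of the concept.

```python
# Toy sketch of synthetic training-data generation: composing
# examples from templates and in-house facts rather than scraped
# copyrighted text. Purely illustrative of the concept.
import random

FACTS = [
    ("water", "boils at 100 degrees Celsius at sea level"),
    ("light", "travels at about 299,792 km per second"),
]
TEMPLATES = [
    "Q: What is true of {subject}? A: It {fact}.",
    "Complete the sentence: {subject} {fact}.",
]

def synthesize(n: int, seed: int = 0) -> list[str]:
    """Produce n synthetic training examples from facts and templates."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        subject, fact = rng.choice(FACTS)
        template = rng.choice(TEMPLATES)
        examples.append(template.format(subject=subject, fact=fact))
    return examples

for example in synthesize(3):
    print(example)
```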
In this environment, ethical data practices are no longer optional—they are a prerequisite for long-term viability. The Anthropic case has made that clear.
Source:
[1] Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI [https://www.wired.com/story/anthropic-settlement-lawsuit-copyright/]
[2] Anthropic’s Confidential Settlement: Navigating the Uncertain … [https://applyingai.com/2025/08/anthropics-confidential-settlement-navigating-the-uncertain-terrain-of-ai-copyright-law/]
[3] Anthropic settlement a big step for AI law [https://thedailyrecord.com/2025/09/02/anthropic-settlement-a-big-step-for-ai-law/]