Ethics & Policy
Conference on Values and Responsibility in AI Technologies

March 18, 2025
Gonzaga University News Service
Experts ranging from data scientists, engineers and cybersecurity professionals to philosophers, lawyers and educators will meet on the Gonzaga University campus April 3-4, 2025, to discuss the myriad ways artificial intelligence technologies are reshaping society.
Gonzaga’s Institute for Informatics and Applied Technology and GU’s Department of Philosophy are partnering to host the Value and Responsibility in AI Technologies Conference. The panelists and speakers will address the ethical questions that come along with the incredible opportunities of AI. Among the issues up for discussion are questions concerning privacy, security, fairness, safety and accountability, because while AI holds immense potential for good, there are inherent risks in its misuse or careless implementation.
“The Institute for Informatics and Applied Technology is driving the integration of AI across Gonzaga’s curriculum, student learning and partner innovation,” says Jay Yang, the institute’s inaugural director.
“We have created a conference where visionary academics and industry leaders will come together to explore the critical intersection of AI, ethics, and responsibility—shaping a future where technology is guided by human values, and human creativity is elevated by technology advances.”
The conference is free and open to the public, although space is limited and registration is required. It will offer ample opportunities for critical thought and discussion through sessions dedicated to topics including “Technology and Ethical Responsibility,” “Data Labelling in Shaping AI Technologies” and “AI Laws and Policies Around the Globe.”
“It is clear that we must take a responsible approach to the design, development, and deployment of AI technologies,” says Anthony Fisher, a lecturer in Gonzaga’s Department of Philosophy. “This conference gets to the intellectual heart of the matter and aims to find the ethical substance that ought to be at the core of responsible AI. The many layers of diversity that this conference exhibits make it a distinctive event to replicate and emulate. It will raise further questions and generate practical solutions that have the potential for wide-ranging impact on shaping debates in AI ethics and on developing AI technologies in industry.”
In addition to Gonzaga faculty members spanning its schools and colleges, representatives of tech firms Google and Adobe, as well as experts from peer higher education institutions such as Hong Kong Baptist University, the University of Minnesota-Duluth, Santa Clara University, the University of Washington, Seattle University, the University of Notre Dame and Portland State University will share their expertise.
The Value and Responsibility in AI Technologies Conference takes place Thursday-Friday, April 3-4 on the Gonzaga University campus. Thursday sessions run from 1 p.m.-4:45 p.m. in Room 103 of the John & Joan Bollier Family Center for Integrated Science & Engineering. Friday sessions run 8:30 a.m.-5 p.m. and will take place in Wolff Auditorium in the Jepson Center.
Michael Lissack’s New Book “Questioning Understanding” Explores the Future of Scientific Inquiry and AI Ethics

Photo Courtesy: Michael Lissack
“Understanding is not a destination we reach, but a spiral we climb—each new question changes the view, and each new view reveals questions we couldn’t see before.”
Michael Lissack, Executive Director of the Second Order Science Foundation, cybernetics expert, and professor at Tongji University, has released his new book, “Questioning Understanding.” Now available, the book explores a fresh perspective on scientific inquiry by encouraging readers to reconsider the assumptions that shape how we understand the world.
A Thought-Provoking Approach to Scientific Inquiry
In “Questioning Understanding,” Lissack introduces the concept of second-order science, a framework that examines the uncritically examined presuppositions (UCEPs) that often underlie scientific practices. These assumptions, while sometimes essential for scientific work, may also constrain our ability to explore complex phenomena fully. Lissack suggests that by engaging with these assumptions critically, there could be potential for a deeper understanding of the scientific process and its role in advancing human knowledge.
The book features an innovative tête-bêche format, offering two entry points for readers: “Questioning → Understanding” or “Understanding → Questioning.” This structure reflects the dynamic relationship between knowledge and inquiry, aiming to highlight how questioning and understanding are interconnected and reciprocal. By offering two different entry paths, Lissack emphasizes that the journey of scientific inquiry is not linear. Instead, it’s a continuous process of revisiting previous assumptions and refining the lens through which we view the world.
The Battle Against Sloppy Science
Lissack’s work took on new urgency during the COVID-19 pandemic, when he witnessed an explosion of what he calls “slodderwetenschap”—Dutch for “sloppy science”—characterized by shortcuts, oversimplifications, and the proliferation of “truthies” (assertions that feel true regardless of their validity).
Working with colleague Brenden Meagher, Lissack identified how sloppy science undermines public trust through what he calls the “3Ts”—Truthies, TL;DR (oversimplification), and TCUSI (taking complex understanding for simple information). Their research revealed how “truthies spread rampantly during the pandemic, damaging public health communication” through “biased attention, confirmation bias, and confusion between surface information and deeper meanings.”
“COVID-19 demonstrated that good science seldom comes from taking shortcuts or relying on ‘truthies,’” Lissack notes.
“Good science, instead, demands that we continually ask what about a given factoid, label, category, or narrative affords its meaning—and then to base further inquiry on the assumptions, contexts, and constraints so revealed.”
AI as the New Frontier of Questioning
As AI technologies, including Large Language Models (LLMs), continue to influence research and scientific methods, Lissack’s work has become increasingly relevant. In his book “Questioning Understanding,” Lissack presents a thoughtful examination of AI in scientific research, urging a responsible approach to its use. He discusses how AI tools may support scientific progress but also notes that their potential limitations can undermine the rigor of research if used uncritically.
“AI tools have the capacity to both support and challenge the quality of scientific inquiry, depending on how they are employed,” says Lissack.
“It is essential that we engage with AI systems as partners in discovery—through reflective dialogue—rather than relying on them as simple solutions to complex problems.”
He stresses that while AI can significantly accelerate research, it is still important for human researchers to remain critically engaged with the data and models produced, questioning the assumptions encoded within AI systems.
With over 2,130 citations on Google Scholar, Lissack’s work continues to shape discussions on how knowledge is created and applied in modern research. His innovative ideas have influenced numerous fields, from cybernetics to the integration of AI in scientific inquiry.
Recognition and Global Impact
Lissack’s contributions to the academic world have earned him significant recognition. He was named among “Wall Street’s 25 Smartest Players” by Worth Magazine and included in the “100 Americans Who Most Influenced How We Think About Money.” His efforts extend beyond personal recognition; he advocates for a research landscape that emphasizes integrity, critical thinking, and ethical foresight in the application of emerging technologies, ensuring that these tools foster scientific progress without compromising standards.
About “Questioning Understanding”
“Questioning Understanding” provides an in-depth exploration of the assumptions that guide scientific inquiry, urging readers to challenge their perspectives. Designed as a tête-bêche edition—two books in one with dual covers and no single entry point—it forces readers to choose where to begin: “Questioning → Understanding” or “Understanding → Questioning.” This innovative format reflects the recursive relationship between inquiry and insight at the heart of his work.
As Michael explains: “Understanding is fluid… if understanding is a river, questions shape the canyon the river flows in.” The book demonstrates how our assumptions about knowledge creation itself shape what we can discover, making the case for what he calls “reflexive scientific practice”—science that consciously examines its own presuppositions.
About Michael Lissack
Michael Lissack is a globally recognized figure in second-order science, cybernetics, and AI ethics. He is the Executive Director of the Second Order Science Foundation and a Professor of Design and Innovation at Tongji University in Shanghai. Lissack has served as President of the American Society for Cybernetics and is widely acknowledged for his contributions to the field of complexity science and the promotion of rigorous, ethical research practices.
Building on foundational work in cybernetics and complexity science, Lissack developed the framework of Uncritically Examined Presuppositions (UCEPs)—nine key dimensions, including context dependence, quantitative indexicality, and Fundierung dependence, that act as “enabling constraints” in scientific inquiry. These hidden assumptions make scientific work possible while limiting what can be observed or understood.
As Lissack explains: “Second order science examines variations in values assumed for these UCEPs and looks at the resulting impacts on related scientific claims. Second order science reveals hidden issues, problems, and assumptions which all too often escape the attention of the practicing scientist.”
Michael Lissack’s books are available through major retailers. Learn more about his work at lissack.com and the Second Order Science Foundation at secondorderscience.org.
Media Contact
Company Name: Digital Networking Agency
Phone: +1 571 233 9913
Country: United States
Website: https://www.digitalnetworkingagency.com/
A Tipping Point in AI Ethics and Intellectual Property Markets

The recent $1.5 billion settlement between Anthropic and a coalition of book authors marks a watershed moment in the AI industry’s reckoning with intellectual property law and ethical data practices [1]. This landmark case, rooted in allegations that Anthropic trained its models using pirated books from sites like LibGen, has forced a reevaluation of how AI firms source training data—and what this means for investors seeking to capitalize on the next phase of AI innovation.
Legal Uncertainty and Ethical Clarity
Judge William Alsup’s June 2025 ruling clarified a critical distinction: while training AI on legally purchased books may qualify as transformative fair use, using pirated copies is “irredeemably infringing” [2]. This nuanced legal framework has created a dual challenge for AI developers. On one hand, it legitimizes the use of AI for creative purposes if data is lawfully acquired. On the other, it exposes companies to significant liability if their data pipelines lack transparency. For investors, this duality underscores the growing importance of ethical data sourcing as a competitive differentiator.
The settlement also highlights a broader industry trend: the rise of intermediaries facilitating data licensing. As noted by ApplyingAI, new platforms are emerging to streamline transactions between publishers and AI firms, reducing friction in a market that could see annual licensing costs reach $10 billion by 2030 [2]. This shift benefits companies with the infrastructure to navigate complex licensing ecosystems.
Strategic Investment Opportunities
The Anthropic case has accelerated demand for AI firms that prioritize ethical data practices. Several companies have already positioned themselves as leaders in this space:
- Apple (AAPL): The company’s on-device processing and differential privacy tools exemplify a user-centric approach to data ethics. Its recent AI ethics guidelines, emphasizing transparency and bias mitigation, align with regulatory expectations [1].
- Salesforce (CRM): Through its Einstein Trust Layer and academic collaborations, Salesforce is addressing bias in enterprise AI. Its expanded Office of Ethical and Humane Use of Technology signals a long-term commitment to responsible innovation [1].
- Amazon Web Services (AMZN): AWS’s SageMaker governance tools and external AI advisory council demonstrate a proactive stance on compliance. The platform’s role in enabling content policies for generative AI makes it a key player in the post-Anthropic landscape [1].
- Nvidia (NVDA): By leveraging synthetic datasets and energy-efficient GPU designs, Nvidia is addressing both ethical and environmental concerns. Its NeMo Guardrails tool further ensures compliance in AI applications [1].
These firms represent a “responsible AI” cohort that is likely to outperform peers as regulatory scrutiny intensifies. Smaller players, meanwhile, face a steeper path: startups with limited capital may struggle to secure licensing deals, creating opportunities for consolidation or innovation in alternative data generation techniques [2].
Market Risks and Regulatory Horizons
While the settlement provides some clarity, it also introduces uncertainty. As The Daily Record notes, the lack of a definitive court ruling on AI copyright means companies must navigate a “patchwork” of interpretations [3]. This ambiguity favors firms with deep legal and financial resources, such as OpenAI and Google DeepMind, which can afford to negotiate high-cost licensing agreements [2].
Investors should also monitor legislative developments. Current copyright laws, designed for a pre-AI era, are ill-equipped to address the complexities of machine learning. A 2025 report by the Brookings Institution estimates that 60% of AI-related regulations will emerge at the state level in the next two years, creating a fragmented compliance landscape [unavailable source].
The Path Forward
The Anthropic settlement is not an endpoint but a catalyst. It has forced the industry to confront a fundamental question: Can AI innovation coexist with robust intellectual property rights? For investors, the answer lies in supporting companies that embed ethical practices into their core operations.
As the market evolves, three trends will shape the next phase of AI investment:
1. Synthetic Data Generation: Firms like Nvidia and Anthropic are pioneering techniques to create training data without relying on copyrighted material.
2. Collaborative Licensing Consortia: Platforms that aggregate licensed content for AI training—such as those emerging post-settlement—will reduce transaction costs.
3. Regulatory Arbitrage: Companies that proactively align with emerging standards (e.g., the EU AI Act) will gain first-mover advantages in global markets.
In this environment, ethical data practices are no longer optional—they are a prerequisite for long-term viability. The Anthropic case has made that clear.
Sources:
[1] Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI [https://www.wired.com/story/anthropic-settlement-lawsuit-copyright/]
[2] Anthropic’s Confidential Settlement: Navigating the Uncertain … [https://applyingai.com/2025/08/anthropics-confidential-settlement-navigating-the-uncertain-terrain-of-ai-copyright-law/]
[3] Anthropic settlement a big step for AI law [https://thedailyrecord.com/2025/09/02/anthropic-settlement-a-big-step-for-ai-law/]