Ethics & Policy
AI Companies Should Be Liable for the Illegal Conduct of AI Chatbots

AI chatbots are not people. They have neither consciousness nor independent agency, so they cannot be held responsible for their illegal conduct. But the companies that provide them to the public should be responsible. For instance, Meta should be liable if Meta AI, its chatbot, provides advice, guidance, or recommendations that would create liability if provided by a human.
This principle might be a useful guide to the AI ethics and policy challenges brought to public attention by recent Reuters revelations that Meta adopted, and then apparently withdrew, language in an internal policy document that permitted its chatbots to “engage a child in conversations that are romantic or sensual,” among other questionable activities. The problem is not limited to the abuse of children. Another Reuters story described how an adult died on his way to an anticipated tryst in New York City with a chatbot impersonating a real romantic partner.
Experts invited by Tech Policy Press to opine on this state of affairs uniformly expressed a desire to do something about it, but as in many cases of ethical and policy challenges posed by new technology, it was not immediately clear what should be done.
Illinois has banned AI therapy services, although it is unclear whether the new law applies directly to AI companies like Meta or only to intermediary companies that use chatbots to offer therapy services to their users.
Existing law might provide some redress as well. On Monday, Ken Paxton, the Texas attorney general, launched an investigation of Meta and Character.AI for “deceptive trade practices,” arguing their chatbots were presented as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” Senator Josh Hawley (R-MO), chairman of the Senate Judiciary Committee Subcommittee on Crime and Counterterrorism, announced that his subcommittee would “commence an investigation into whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children…”
In May, a Florida judge ruled that a case against Character.AI and Google, which helped develop the Character.AI technology, could go forward despite First Amendment concerns. The case is being appealed, and in an amicus brief, the Electronic Frontier Foundation and the Center for Democracy and Technology have urged higher courts to focus on these speech issues, including the rights of users to receive information from chatbots. The plaintiffs argue that the AI software suffers from design defects such that users, especially children, are exposed to harm or injury when they use the product in a reasonably foreseeable way.
Product liability and consumer protection provide some legal avenues for redress against abusive and illegal conduct by chatbots. Private litigants and state officials need to pursue them.
But policymakers need a principle to help them understand what these different legal approaches have in common. Politico raises the right question concerning chatbot liability in these cases, asking, “Should chatbots be regulated like people?”
The answer to Politico’s question is that regulating chatbots is not about regulating the fake personas that chatbots adopt in responding to user prompts. As Ava Smithing, advocacy director at the Young People’s Alliance, told Politico, it is about “regulating the real people who are deciding what that fake person can or cannot say.”
This opens the door to a very intuitive way of thinking about chatbot liability. A provider of chatbot services such as Meta should be liable if its chatbot provides advice, guidance, or recommendations that would create liability if provided by a human. This approach would accommodate speech issues in the same way they are considered for human advice, guidance, or recommendations. If a human speaker would have a free speech defense from liability, so would a chatbot.
In a recent commentary for Brookings, I applied this way of thinking to self-driving cars, arguing that manufacturers of self-driving cars should be liable for an accident when a reasonable human driver would have avoided it.
Here are some initial thoughts on applying this approach to chatbots. Licensed professionals such as physicians, therapists, or lawyers should be permitted to use a chatbot to help them provide service, but they must remain ultimately responsible for the service they provide. They should be liable for any errors just as if they made the errors without the assistance of AI.
But if a user goes directly to an AI platform for services that, if performed by a human, would require a license, then the provider of the platform must take responsibility for the unlicensed practice of that profession. When an individual user goes directly to a chatbot to ask legal, medical, or mental health questions and the chatbot responds, the company providing the chatbot is acting as a lawyer, doctor, or therapist practicing without a license.
Beyond that licensing question, there are standards for malpractice in each of these areas. Shouldn’t the provider of a chatbot be responsible if its chatbot provides a service that would amount to malpractice if provided by a human doctor, lawyer, or therapist?
Under standard product liability theories, AI companies might defend themselves by pointing to disclosures that are supposed to shift liability from them to their users.
But imagine how such disclosures might work in other contexts. Imagine an automobile company announcement: “The brakes on our cars fail from time to time. We don’t know why, and we are working on ways to fix this problem. In the meantime, be aware of the risks this creates and do not rely on the brakes to stop our cars.”
Disclaimers might be irrelevant in practice. AI companies are apparently abandoning the practice of issuing disclaimers stating that they are not providing medical advice when answering users’ medical questions. A recent study concluded that “fewer than 1% of outputs from models in 2025 included a warning when answering a medical question.”
In any case, disclaimers would not be a complete defense, as the hypothetical automobile example illustrates. Product liability law holds manufacturers responsible for providing products that are reasonably safe for their intended and foreseeable uses. Given that chatbots are able to respond to questions using the full expressive capabilities of human language, it is reasonable to foresee that they will be used to answer legal, medical, and mental health questions and that chatbot users will act on suggestions provided by chatbots in answer to these questions. AI companies must ensure that the answers provided are not dangerous or harmful, or they must have policies and procedures in place to ensure that their chatbots do not respond to these consequential questions.
More needs to be said on this thorny topic. But an intuitive way to structure thinking about the ethical and policy challenges of AI chatbot liability is to treat chatbots as agents of the companies providing them. This is certainly the import of the famous Air Canada case, in which a Canadian tribunal ruled the company responsible for bad advice its chatbot gave a passenger concerning its refund policy. Chatbots are not people and should not be treated as such. But the companies providing services that mimic those provided by people have to be responsible for the services they provide.
Ethics & Policy
$40 Million Series B Raised To Drive Ethical AI And Empower Publishers

ProRata.ai, a company committed to building AI solutions that honor and reward the work of content creators, has announced the close of a $40 million Series B funding round. The round was led by Touring Capital, with participation from a growing network of investors who share ProRata’s vision for a more equitable and transparent AI ecosystem. This latest investment brings the company’s total funding to over $75 million since its founding just last year, and it marks a significant step forward in its mission to reshape how publishers engage with generative AI.
The company also announced the launch of Gist Answers, ProRata’s new AI-as-a-service platform designed to give publishers direct control over how AI interacts with their content. Gist Answers allows media organizations to embed custom AI search, summarization, and recommendation tools directly into their websites and digital properties. Rather than watching their content be scraped and repurposed without consent, publishers can now offer AI-powered experiences on their own terms—driving deeper engagement, longer user sessions, and more meaningful interactions with their audiences.
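ProRata has not published integration details for Gist Answers, so the TypeScript sketch below is purely illustrative of what embedding such an answer experience on a publisher’s own site might involve. The endpoint URL, payload shape, and field names are assumptions made for the example, not ProRata’s actual API.

```typescript
// Hypothetical sketch only: the endpoint, payload shape, and field names below
// are invented for illustration and are not ProRata's published API.
interface GistAnswer {
  answer: string;                            // AI-generated summary for the reader's query
  sources: { title: string; url: string }[]; // attributed articles behind the answer
}

async function askGist(query: string): Promise<GistAnswer> {
  // A publisher-scoped endpoint keeps the search experience on the publisher's own site.
  const res = await fetch("https://example-publisher.com/api/gist-answers", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  if (!res.ok) throw new Error(`Answer request failed: ${res.status}`);
  return (await res.json()) as GistAnswer;
}

// Render the answer alongside its source attributions, keeping readers on-site.
async function renderAnswer(query: string, container: HTMLElement): Promise<void> {
  const { answer, sources } = await askGist(query);
  const links = sources.map((s) => `<a href="${s.url}">${s.title}</a>`).join(" · ");
  container.innerHTML = `<p>${answer}</p><p>Sources: ${links}</p>`;
}
```

Whatever the real interface looks like, the design goal the company describes is the same: the query, the answer, and the reader all stay on the publisher’s own property.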
The platform has already attracted early-access partners representing over 100 publications, a testament to the growing demand for AI tools that respect editorial integrity and support sustainable business models. Gist Answers is designed to be flexible and intuitive, allowing publishers to tailor the AI experience to their brand’s voice and editorial standards. It’s not just about delivering answers—it’s about creating a richer, more interactive layer of discovery that keeps users engaged and informed.
Beyond direct integration, ProRata is also offering publishers the opportunity to license their content to inform Gist Answers across third-party destinations. More than 700 high-quality publications around the world have already joined this initiative, contributing to a growing network of licensed content that powers AI responses with verified, attributable information. This model is underpinned by ProRata’s proprietary content attribution technology, which ensures that every piece of content used by the AI is properly credited and compensated. In doing so, the company is building a framework where human creativity is not only preserved but actively rewarded in the AI economy.
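The release describes the attribution technology only at a high level, and the underlying algorithm is proprietary. As a rough illustration of what a pro-rata model implies, the hypothetical sketch below splits the revenue from a single answer across publishers in proportion to attribution scores; the publisher names, scores, and dollar figure are invented.

```typescript
// Illustrative only: ProRata's actual attribution algorithm is proprietary.
// This sketch just shows what a proportional ("pro rata") revenue split means.
interface Attribution {
  publisher: string;
  score: number; // fraction of the answer traced to this publisher's content
}

function proRataPayouts(attributions: Attribution[], revenue: number): Map<string, number> {
  const total = attributions.reduce((sum, a) => sum + a.score, 0);
  const payouts = new Map<string, number>();
  if (total === 0) return payouts; // nothing attributable, nothing to pay out
  for (const a of attributions) {
    payouts.set(a.publisher, (revenue * a.score) / total);
  }
  return payouts;
}

// Example: $0.10 of ad revenue on one answer, traced 60/40 to two outlets.
const split = proRataPayouts(
  [
    { publisher: "Outlet A", score: 0.6 },
    { publisher: "Outlet B", score: 0.4 },
  ],
  0.1,
);
console.log(split); // Map { "Outlet A" => ~0.06, "Outlet B" => ~0.04 }
```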
Gist Answers is designed to work seamlessly with Gist Ads, ProRata’s innovative advertising platform that transforms AI-generated responses into premium ad inventory. By placing native, conversational ads adjacent to AI answers, Gist Ads creates a format that aligns with user intent and delivers strong performance for marketers. For publishers, this means new revenue streams that are directly tied to the value of their content and the engagement it drives.
ProRata’s approach stands in stark contrast to the extractive models that have dominated the early days of generative AI. The company was founded on the belief that the work of journalists, creators, and publishers is not just data to be mined—it’s a vital source of knowledge and insight that deserves recognition, protection, and compensation. By building systems that prioritize licensing over scraping, transparency over opacity, and partnership over exploitation, ProRata is proving that AI can be both powerful and principled.
How the funding will be used: With the Series B funding, ProRata plans to scale its team, expand its product offerings, and deepen its relationships with publishers and content creators around the world. The company is focused on building tools that are not only technologically advanced but also aligned with the values of the people who produce the content that fuels AI. As generative AI continues to evolve, ProRata is positioning itself as a trusted partner for publishers seeking to navigate this new landscape with confidence and integrity.
KEY QUOTES:
“Search has always shaped how people discover knowledge, but for too long publishers have been forced to give that power away. Gist Answers changes that dynamic, bringing AI search directly to their sites, where it deepens engagement, restores control, and opens entirely new paths for discovery.”
Bill Gross, CEO and founder of ProRata
“Generative AI is reshaping search and digital advertising, creating an opportunity for a new category of infrastructure to compensate content creators whose work powers the answers we are relying on daily. ProRata is addressing this inflection point with a market-neutral model designed to become the default platform for attribution and fair monetization across the ecosystem. We believe the shift toward AI-native search experiences will unlock greater value for advertisers, publishers, and consumers alike.”
Nagraj Kashyap, General Partner, Touring Capital
“As a publisher, our priority is making sure our journalism reaches audiences in trusted ways. By contributing our content to the Gist network, we know it’s being used ethically, with full credit, while also helping adopters of Gist Answers deliver accurate, high-quality responses to their readers.”
Nicholas Thompson, CEO of The Atlantic
“The role of publishers in the AI era is to ensure that trusted journalism remains central to how people search and learn. By partnering with ProRata, we’re showing how an established brand can embrace new technology like Gist Answers to deepen engagement and demonstrate the enduring value of quality journalism.”
Andrew Perlman, CEO of Recurrent, owner of Popular Science
“Search has always been critical to how our readers find and interact with content. With Gist Answers, our audience can engage directly with us and get trusted answers sourced from our reporting, strengthened by content from a vetted network of international media outlets. Engagement is higher, and we’re able to explore new revenue opportunities that simply didn’t exist before.”
Jeremy Gulban, CEO of CherryRoad Media
“We’re really excited to be partnering with ProRata. At Arena, we’re always looking for unique and innovative ways to better serve our audience, and Gist Answers allows us to adapt to new technology in an ethical way.”
Paul Edmondson, CEO of The Arena Group, owner of Parade and Athlon Sports
Ethics & Policy
Michael Lissack’s New Book “Questioning Understanding” Explores the Future of Scientific Inquiry and AI Ethics

Photo Courtesy: Michael Lissack
“Understanding is not a destination we reach, but a spiral we climb—each new question changes the view, and each new view reveals questions we couldn’t see before.”
Michael Lissack, Executive Director of the Second Order Science Foundation, cybernetics expert, and professor at Tongji University, has released his new book, “Questioning Understanding.” Now available, the book explores a fresh perspective on scientific inquiry by encouraging readers to reconsider the assumptions that shape how we understand the world.
A Thought-Provoking Approach to Scientific Inquiry
In “Questioning Understanding,” Lissack introduces the concept of second-order science, a framework that examines the uncritically examined presuppositions (UCEPs) that often underlie scientific practices. These assumptions, while sometimes essential for scientific work, may also constrain our ability to explore complex phenomena fully. Lissack suggests that by engaging with these assumptions critically, there could be potential for a deeper understanding of the scientific process and its role in advancing human knowledge.
The book features an innovative tête-bêche format, offering two entry points for readers: “Questioning → Understanding” or “Understanding → Questioning.” This structure reflects the dynamic relationship between knowledge and inquiry, aiming to highlight how questioning and understanding are interconnected and reciprocal. By offering two different entry paths, Lissack emphasizes that the journey of scientific inquiry is not linear. Instead, it’s a continuous process of revisiting previous assumptions and refining the lens through which we view the world.
The Battle Against Sloppy Science
Lissack’s work took on new urgency during the COVID-19 pandemic, when he witnessed an explosion of what he calls “slodderwetenschap”—Dutch for “sloppy science”—characterized by shortcuts, oversimplifications, and the proliferation of “truthies” (assertions that feel true regardless of their validity).
Working with colleague Brenden Meagher, Lissack identified how sloppy science undermines public trust through what he calls the “3Ts”—Truthies, TL;DR (oversimplification), and TCUSI (taking complex understanding for simple information). Their research revealed how “truthies spread rampantly during the pandemic, damaging public health communication” through “biased attention, confirmation bias, and confusion between surface information and deeper meanings.”
“COVID-19 demonstrated that good science seldom comes from taking shortcuts or relying on ‘truthies,’” Lissack notes.
“Good science, instead, demands that we continually ask what about a given factoid, label, category, or narrative affords its meaning—and then to base further inquiry on the assumptions, contexts, and constraints so revealed.”
AI as the New Frontier of Questioning
As AI technologies, including large language models (LLMs), continue to influence research and scientific methods, Lissack’s work has become increasingly relevant. In “Questioning Understanding,” Lissack presents a thoughtful examination of AI in scientific research, urging a responsible approach to its use. He discusses how AI tools may support scientific progress but also notes that their limitations can undermine the rigor of research if used uncritically.
“AI tools have the capacity to both support and challenge the quality of scientific inquiry, depending on how they are employed,” says Lissack.
“It is essential that we engage with AI systems as partners in discovery—through reflective dialogue—rather than relying on them as simple solutions to complex problems.”
He stresses that while AI can significantly accelerate research, it is still important for human researchers to remain critically engaged with the data and models produced, questioning the assumptions encoded within AI systems.
With over 2,130 citations on Google Scholar, Lissack’s work continues to shape discussions on how knowledge is created and applied in modern research. His innovative ideas have influenced numerous fields, from cybernetics to the integration of AI in scientific inquiry.
Recognition and Global Impact
Lissack’s contributions to the academic world have earned him significant recognition. He was named among “Wall Street’s 25 Smartest Players” by Worth Magazine and included in the “100 Americans Who Most Influenced How We Think About Money.” His efforts extend beyond personal recognition; he advocates for a research landscape that emphasizes integrity, critical thinking, and ethical foresight in the application of emerging technologies, ensuring that these tools foster scientific progress without compromising standards.
About “Questioning Understanding”
“Questioning Understanding” provides an in-depth exploration of the assumptions that guide scientific inquiry, urging readers to challenge their perspectives. Designed as a tête-bêche edition—two books in one with dual covers and no single entry point—it forces readers to choose where to begin: “Questioning → Understanding” or “Understanding → Questioning.” This innovative format reflects the recursive relationship between inquiry and insight at the heart of his work.
As Michael explains: “Understanding is fluid… if understanding is a river, questions shape the canyon the river flows in.” The book demonstrates how our assumptions about knowledge creation itself shape what we can discover, making the case for what he calls “reflexive scientific practice”—science that consciously examines its own presuppositions.
Photo Courtesy: Michael Lissack
About Michael Lissack
Michael Lissack is a globally recognized figure in second-order science, cybernetics, and AI ethics. He is the Executive Director of the Second Order Science Foundation and a Professor of Design and Innovation at Tongji University in Shanghai. Lissack has served as President of the American Society for Cybernetics and is widely acknowledged for his contributions to the field of complexity science and the promotion of rigorous, ethical research practices.
Building on foundational work in cybernetics and complexity science, Lissack developed the framework of UnCritically Examined Presuppositions (UCEPs)—nine key dimensions, including context dependence, quantitative indexicality, and fundierung dependence, that act as “enabling constraints” in scientific inquiry. These hidden assumptions simultaneously make scientific work possible while limiting what can be observed or understood.
As Lissack explains: “Second order science examines variations in values assumed for these UCEPs and looks at the resulting impacts on related scientific claims. Second order science reveals hidden issues, problems, and assumptions which all too often escape the attention of the practicing scientist.”
Michael Lissack’s books are available through major retailers. Learn more about his work at lissack.com and the Second Order Science Foundation at secondorderscience.org.
Media Contact
Company Name: Digital Networking Agency
Phone: +1 571 233 9913
Country: United States
Website: https://www.digitalnetworkingagency.com/