
Ethics & Policy

Sunil Narine had no interest in bowling in the nets: Manvinder Bisla



IPL 2025, CSK vs KKR: Sunil Narine was adjudged the Player of the Match against Chennai. (Credit: BCCI)

Sunil Narine joined Kolkata Knight Riders (KKR) in 2012 and has been the franchise’s most impactful bowler by a country mile. He has taken 203 wickets in 191 matches so far, and although IPL 2025 has not been his best season to date, the Trinidad-born mystery spinner can still be a handful on a spin-friendly track. Former KKR wicket-keeper batter Manvinder Bisla, who was instrumental in KKR’s title triumph in 2012, opened up about the spinner and revealed the reason behind Narine’s sustained success in the Indian Premier League.

“He (Narine) would not bowl to batters in the nets. Firstly, he had no interest in bowling in the nets. Secondly, he did not want to be read by batters who might end up playing for another franchise a couple of years later,” Bisla revealed on the ‘Knuckleball by NDTV’ podcast.

“When I was wicket-keeping for KKR, I had to request him to bowl 12 or 13 deliveries at me so that I could read his variations, because I was the one who had to catch them and do a stumping. If I couldn’t collect it, and the ball beat both the batter and myself, it would be of no use,” Bisla added with a tinge of humour.

“After he had bowled 10-12 balls, I could understand a thing (about Narine’s variations)!” said the former KKR wicket-keeper.

KKR won the IPL in 2012 and 2014, and both Narine and Bisla played their part in Kolkata’s success during that period under Gautam Gambhir’s captaincy. While Narine continues to be a very important member of the KKR side, Bisla moved to Royal Challengers Bangalore in 2015 and played his last IPL game that year against Chennai Super Kings (CSK).

Narine, on the other hand, is no mug with the bat either, but he is yet to fire for his side in this edition of the IPL. In five matches for KKR this season, Narine has picked up five wickets. His best game so far came against CSK, where he returned figures of 3 for 13 and scored 44 off 18 balls, earning the Player of the Match award for his show with bat and ball. Kolkata Knight Riders will take on Punjab Kings on Tuesday, April 15, 2025, at the Maharaja Yadavindra Singh International Cricket Stadium in Mullanpur, Chandigarh.






Ethics & Policy

Michael Lissack’s New Book “Questioning Understanding” Explores the Future of Scientific Inquiry and AI Ethics




Photo Courtesy: Michael Lissack

“Understanding is not a destination we reach, but a spiral we climb—each new question changes the view, and each new view reveals questions we couldn’t see before.”

Michael Lissack, Executive Director of the Second Order Science Foundation, cybernetics expert, and professor at Tongji University, has released his new book, “Questioning Understanding.” The book offers a fresh perspective on scientific inquiry, encouraging readers to reconsider the assumptions that shape how we understand the world.

A Thought-Provoking Approach to Scientific Inquiry

In “Questioning Understanding,” Lissack introduces the concept of second-order science, a framework that examines the uncritically examined presuppositions (UCEPs) that often underlie scientific practices. These assumptions, while sometimes essential for scientific work, may also constrain our ability to explore complex phenomena fully. Lissack suggests that engaging with these assumptions critically opens the way to a deeper understanding of the scientific process and its role in advancing human knowledge.

The book features an innovative tête-bêche format, offering two entry points for readers: “Questioning → Understanding” or “Understanding → Questioning.” This structure reflects the dynamic relationship between knowledge and inquiry, aiming to highlight how questioning and understanding are interconnected and reciprocal. By offering two different entry paths, Lissack emphasizes that the journey of scientific inquiry is not linear. Instead, it’s a continuous process of revisiting previous assumptions and refining the lens through which we view the world.

The Battle Against Sloppy Science

Lissack’s work took on new urgency during the COVID-19 pandemic, when he witnessed an explosion of what he calls “slodderwetenschap”—Dutch for “sloppy science”—characterized by shortcuts, oversimplifications, and the proliferation of “truthies” (assertions that feel true regardless of their validity).

Working with colleague Brenden Meagher, Lissack identified how sloppy science undermines public trust through what he calls the “3Ts”—Truthies, TL;DR (oversimplification), and TCUSI (taking complex understanding for simple information). Their research revealed how “truthies spread rampantly during the pandemic, damaging public health communication” through “biased attention, confirmation bias, and confusion between surface information and deeper meanings.”

“COVID-19 demonstrated that good science seldom comes from taking shortcuts or relying on ‘truthies,'” Lissack notes.

“Good science, instead, demands that we continually ask what about a given factoid, label, category, or narrative affords its meaning—and then to base further inquiry on the assumptions, contexts, and constraints so revealed.”

AI as the New Frontier of Questioning

As AI technologies, including Large Language Models (LLMs), continue to influence research and scientific methods, Lissack’s work has become increasingly relevant. In “Questioning Understanding,” Lissack presents a thoughtful examination of AI in scientific research, urging a responsible approach to its use. He discusses how AI tools may support scientific progress but also notes that, used uncritically, they can undermine the rigor of research.

“AI tools have the capacity to both support and challenge the quality of scientific inquiry, depending on how they are employed,” says Lissack.

“It is essential that we engage with AI systems as partners in discovery—through reflective dialogue—rather than relying on them as simple solutions to complex problems.”

He stresses that while AI can significantly accelerate research, it is still important for human researchers to remain critically engaged with the data and models produced, questioning the assumptions encoded within AI systems.

With over 2,130 citations on Google Scholar, Lissack’s work continues to shape discussions on how knowledge is created and applied in modern research. His innovative ideas have influenced numerous fields, from cybernetics to the integration of AI in scientific inquiry.

Recognition and Global Impact

Lissack’s contributions to the academic world have earned him significant recognition. He was named among “Wall Street’s 25 Smartest Players” by Worth Magazine and included in the “100 Americans Who Most Influenced How We Think About Money.” His efforts extend beyond personal recognition; he advocates for a research landscape that emphasizes integrity, critical thinking, and ethical foresight in the application of emerging technologies, ensuring that these tools foster scientific progress without compromising standards.

About “Questioning Understanding”

“Questioning Understanding” provides an in-depth exploration of the assumptions that guide scientific inquiry, urging readers to challenge their perspectives. Designed as a tête-bêche edition—two books in one with dual covers and no single entry point—it forces readers to choose where to begin: “Questioning → Understanding” or “Understanding → Questioning.” This innovative format reflects the recursive relationship between inquiry and insight at the heart of his work.

As Michael explains: “Understanding is fluid… if understanding is a river, questions shape the canyon the river flows in.” The book demonstrates how our assumptions about knowledge creation itself shape what we can discover, making the case for what he calls “reflexive scientific practice”—science that consciously examines its own presuppositions.


Photo Courtesy: Michael Lissack

About Michael Lissack

Michael Lissack is a globally recognized figure in second-order science, cybernetics, and AI ethics. He is the Executive Director of the Second Order Science Foundation and a Professor of Design and Innovation at Tongji University in Shanghai. Lissack has served as President of the American Society for Cybernetics and is widely acknowledged for his contributions to the field of complexity science and the promotion of rigorous, ethical research practices.

Building on foundational work in cybernetics and complexity science, Lissack developed the framework of UnCritically Examined Presuppositions (UCEPs)—nine key dimensions, including context dependence, quantitative indexicality, and fundierung dependence, that act as “enabling constraints” in scientific inquiry. These hidden assumptions simultaneously make scientific work possible while limiting what can be observed or understood.

As Lissack explains: “Second order science examines variations in values assumed for these UCEPs and looks at the resulting impacts on related scientific claims. Second order science reveals hidden issues, problems, and assumptions which all too often escape the attention of the practicing scientist.”

Michael Lissack’s books are available through major retailers. Learn more about his work at lissack.com and the Second Order Science Foundation at secondorderscience.org.

Media Contact
Company Name: Digital Networking Agency
Phone: +1 571 233 9913
Country: United States
Website: https://www.digitalnetworkingagency.com/





Ethics & Policy

A Tipping Point in AI Ethics and Intellectual Property Markets



The recent $1.5 billion settlement between Anthropic and a coalition of book authors marks a watershed moment in the AI industry’s reckoning with intellectual property law and ethical data practices [1]. This landmark case, rooted in allegations that Anthropic trained its models using pirated books from sites like LibGen, has forced a reevaluation of how AI firms source training data—and what this means for investors seeking to capitalize on the next phase of AI innovation.

Legal Uncertainty and Ethical Clarity

Judge William Alsup’s June 2025 ruling clarified a critical distinction: while training AI on legally purchased books may qualify as transformative fair use, using pirated copies is “irredeemably infringing” [2]. This nuanced legal framework has created a dual challenge for AI developers. On one hand, it legitimizes the use of AI for creative purposes if data is lawfully acquired. On the other, it exposes companies to significant liability if their data pipelines lack transparency. For investors, this duality underscores the growing importance of ethical data sourcing as a competitive differentiator.

The settlement also highlights a broader industry trend: the rise of intermediaries facilitating data licensing. As noted by ApplyingAI, new platforms are emerging to streamline transactions between publishers and AI firms, reducing friction in a market that could see annual licensing costs reach $10 billion by 2030 [2]. This shift benefits companies with the infrastructure to navigate complex licensing ecosystems.

Strategic Investment Opportunities

The Anthropic case has accelerated demand for AI firms that prioritize ethical data practices. Several companies have already positioned themselves as leaders in this space:

  1. Apple (AAPL): The company’s on-device processing and differential privacy tools exemplify a user-centric approach to data ethics (the underlying differential-privacy idea is sketched briefly after this list). Its recent AI ethics guidelines, emphasizing transparency and bias mitigation, align with regulatory expectations [1].
  2. Salesforce (CRM): Through its Einstein Trust Layer and academic collaborations, Salesforce is addressing bias in enterprise AI. Its expanded Office of Ethical and Humane Use of Technology signals a long-term commitment to responsible innovation [1].
  3. Amazon Web Services (AMZN): AWS’s SageMaker governance tools and external AI advisory council demonstrate a proactive stance on compliance. The platform’s role in enabling content policies for generative AI makes it a key player in the post-Anthropic landscape [1].
  4. Nvidia (NVDA): By leveraging synthetic datasets and energy-efficient GPU designs, Nvidia is addressing both ethical and environmental concerns. Its NeMo Guardrails tool further ensures compliance in AI applications [1].
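
Apple does not publish its production implementation, but the differential-privacy idea referenced in the first item above can be illustrated with the textbook Laplace mechanism: a numeric query result is perturbed with noise calibrated to the query’s sensitivity and a privacy budget ε, so the released statistic reveals little about any single user. The counting query, sensitivity, and ε values in the sketch below are illustrative assumptions only, not a description of any company’s system.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially private answer to a numeric query.

    Noise is drawn from Laplace(0, sensitivity / epsilon), the standard
    mechanism for a single numeric query under epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical counting query: how many users enabled a given feature.
# Adding or removing one user changes a count by at most 1, so sensitivity = 1.
true_count = 12_345   # assumed raw value, for illustration only
epsilon = 0.5         # smaller epsilon means stronger privacy and more noise
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=epsilon)
print(f"raw count: {true_count}, privatized count: {private_count:.1f}")
```

The smaller ε is, the noisier the released value; that utility-versus-privacy trade-off is the core design choice behind privacy-preserving analytics tools of this kind.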

These firms represent a “responsible AI” cohort that is likely to outperform peers as regulatory scrutiny intensifies. Smaller players, meanwhile, face a steeper path: startups with limited capital may struggle to secure licensing deals, creating opportunities for consolidation or innovation in alternative data generation techniques [2].

Market Risks and Regulatory Horizons

While the settlement provides some clarity, it also introduces uncertainty. As The Daily Record notes, the lack of a definitive court ruling on AI copyright means companies must navigate a “patchwork” of interpretations [3]. This ambiguity favors firms with deep legal and financial resources, such as OpenAI and Google DeepMind, which can afford to negotiate high-cost licensing agreements [2].

Investors should also monitor legislative developments. Current copyright laws, designed for a pre-AI era, are ill-equipped to address the complexities of machine learning. A 2025 report by the Brookings Institution estimates that 60% of AI-related regulations will emerge at the state level in the next two years, creating a fragmented compliance landscape [unavailable source].

The Path Forward

The Anthropic settlement is not an endpoint but a catalyst. It has forced the industry to confront a fundamental question: Can AI innovation coexist with robust intellectual property rights? For investors, the answer lies in supporting companies that embed ethical practices into their core operations.

As the market evolves, three trends will shape the next phase of AI investment:
1. Synthetic Data Generation: Firms like Nvidia and Anthropic are pioneering techniques to create training data without relying on copyrighted material.
2. Collaborative Licensing Consortia: Platforms that aggregate licensed content for AI training—such as those emerging post-settlement—will reduce transaction costs.
3. Regulatory Arbitrage: Companies that proactively align with emerging standards (e.g., the EU AI Act) will gain first-mover advantages in global markets.

In this environment, ethical data practices are no longer optional—they are a prerequisite for long-term viability. The Anthropic case has made that clear.

Sources:
[1] Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI [https://www.wired.com/story/anthropic-settlement-lawsuit-copyright/]
[2] Anthropic’s Confidential Settlement: Navigating the Uncertain … [https://applyingai.com/2025/08/anthropics-confidential-settlement-navigating-the-uncertain-terrain-of-ai-copyright-law/]
[3] Anthropic settlement a big step for AI law [https://thedailyrecord.com/2025/09/02/anthropic-settlement-a-big-step-for-ai-law/]



