Ethics & Policy

Mamta Kulkarni REACTS To Mahamandaleshwar Of Kinnar Akhada Controversy After Her Appointment: It Was All…
Former Bollywood actress Mamta Kulkarni recently addressed the controversy surrounding her appointment as Mahamandaleshwar of the Kinnar Akhada at the Maha Kumbh Mela in Prayagraj. Back in January, Laxmi Narayan Tripathi, the Acharya Mahamandaleshwar of the Kinnar Akhada, had declared that Mamta Kulkarni had taken up a spiritual path and was named Mahamandaleshwar. However, the move quickly stirred debate.
Shortly after the announcement, Rishi Ajay Das, the founder of the Kinnar Akhada, publicly expelled both Mamta and Laxmi Narayan from the group. The reason? Laxmi Narayan had reportedly appointed Mamta without informing or getting approval from the Akhada’s founding members. The decision sparked widespread discussion, with many debating the process behind such spiritual appointments, especially when public figures are involved. Now, Mamta has reacted to the row.
Mamta Kulkarni on Kinnar Akhada row after her appointment
Speaking to ANI, she said, “…It was all in God’s hands for me to become Mahamandaleshwar in that Kumbh, which was such a holy occasion in 140 years. God provided me with the fruits of 25 years of my ‘tapasya’. So, that happened.”
According to a press release issued on January 30, 2025, Rishi Ajay Das noted, “As the founder of Kinnar Akhada, I am hereby relieving Acharya Mahamandaleshwar Lakshmi Narayan Tripathi from his position as Acharya Mahamandaleshwar of the Kinnar Akhada, effective immediately. His appointment was made with the goal of promoting religious activities and uplifting the transgender community, but he has deviated from these responsibilities.”
The controversy revolves around an agreement Lakshmi entered into with the Juna Akhada in 2019, which Ajay Das alleges was made without his approval. He further claimed that the contract between the two Akhadas was legally invalid because it lacked his consent and signature. Moreover, Ajay Das accused Lakshmi of undermining the tenets of the Kinnar Akhada by allowing Mamta to join and take on the prestigious role of Mahamandaleshwar despite her past involvement in criminal activities.
Ajay Das explained that Mamta Kulkarni’s appointment was particularly concerning because she had a criminal history. “By giving such a person the title of Mahamandaleshwar, what kind of guru are you offering to Sanatan Dharma? This is a question of ethics,” he wrote.
The founder emphasised that this appointment was not only unethical but also a betrayal of the Akhada’s religious values. The expulsion of both individuals ignited debates within the spiritual community, with the President of the Akhil Bharatiya Akhada Parishad, Mahant Ravindra Puri, speaking out in support of Lakshmi and Mamta.
Ravindra Puri challenged the legitimacy of Ajay Das’s decision, saying, “I want to ask, who is he (Rishi Ajay Das) to expel Laxmi Narayan Tripathi?” He also reiterated that both Lakshmi and Mamta would continue their roles within the Akhada and participate in the upcoming Amrit Snan.
The controversy over Mamta’s appointment as Mahamandaleshwar began when Lakshmi publicly announced the decision during the Maha Kumbh. Mamta, who was known for her roles in popular 1990s Bollywood films, had stepped away from the limelight in the early 2000s. However, she made a return to India and was granted the position of Mahamandaleshwar by Lakshmi, an act which has now come under heavy scrutiny.
Transgender Kathavachak Jagatguru Himangi Sakhi Maa had earlier raised concerns over Mamta’s appointment, questioning her credibility and linking her past to criminal activities. “Mamta Kulkarni has been made Mahamandaleshwar for publicity. Society knows her past very well. She was even jailed in the past in connection with drug cases. This needs investigation,” Himangi Sakhi said in a conversation with ANI.
Mamta later stepped down from her role as Mahamandaleshwar of the Kinnar Akhada after facing heavy backlash and internal disputes, announcing her resignation in a video shared on her Instagram. Her exit followed questions about her spiritual standing and her past in the film industry, and came after the Akhada had earlier expelled both Lakshmi and Mamta, citing tensions within the religious group.
Inputs Credit: ANI
Ethics & Policy
Personalization and Ethics by 2025

The Rise of AI in Tailored Customer Experiences
In the fast-evolving world of marketing, artificial intelligence is reshaping how brands connect with consumers, particularly through hyper-personalized strategies that anticipate needs before they’re even voiced. As we approach 2025, companies are leveraging AI to analyze vast datasets, from browsing histories to purchase patterns, creating experiences that feel uniquely individual. This shift isn’t just about efficiency; it’s about building loyalty in an era where generic ads no longer cut through the noise.
Take Netflix, for instance, which uses AI algorithms to recommend content based on viewing habits, a model that’s inspired countless marketers. Similarly, e-commerce giants like Amazon employ predictive analytics to suggest products, boosting conversion rates significantly. According to insights from HubSpot’s blog on AI personalization strategies, these tactics can increase customer engagement by up to 20%, emphasizing the need for real-time data processing to deliver relevant offers instantaneously.
Predictive Analytics and Real-Time Adaptation
Predictive analytics stands at the forefront of this transformation, enabling marketers to forecast consumer behavior with remarkable accuracy. By 2025, AI tools are expected to integrate seamlessly with customer relationship management systems, allowing for dynamic content adjustments on the fly. For example, if a user abandons a cart, AI can trigger personalized emails with tailored discounts, drawing from past interactions to optimize timing and messaging.
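To make the mechanics concrete, here is a minimal Python sketch of how such a cart-abandonment trigger might be wired up. Everything in it, including the CustomerProfile fields, the discount rule, and the build_abandoned_cart_offer helper, is an illustrative assumption rather than any vendor’s actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical customer profile assembled from past interactions.
@dataclass
class CustomerProfile:
    customer_id: str
    avg_order_value: float
    preferred_send_hour: int   # hour of day with the best historical open rate
    price_sensitivity: float   # 0.0 (insensitive) to 1.0 (very sensitive)

def build_abandoned_cart_offer(profile: CustomerProfile, cart_value: float,
                               abandoned_at: datetime) -> dict:
    """Return a personalized follow-up email payload for an abandoned cart."""
    # Scale the discount with the shopper's estimated price sensitivity,
    # capped so margins are protected.
    discount_pct = min(round(5 + 15 * profile.price_sensitivity), 20)

    # Schedule delivery for the customer's historically best engagement hour,
    # but never sooner than a few hours after abandonment.
    earliest = abandoned_at + timedelta(hours=3)
    send_at = earliest.replace(hour=profile.preferred_send_hour,
                               minute=0, second=0, microsecond=0)
    if send_at < earliest:
        send_at += timedelta(days=1)

    return {
        "customer_id": profile.customer_id,
        "subject": f"Still thinking it over? Here's {discount_pct}% off",
        "discount_pct": discount_pct,
        "cart_value": cart_value,
        "send_at": send_at.isoformat(),
    }

# Example: a price-sensitive shopper who abandoned a $120 cart mid-afternoon.
profile = CustomerProfile("cust-42", avg_order_value=85.0,
                          preferred_send_hour=19, price_sensitivity=0.8)
print(build_abandoned_cart_offer(profile, 120.0, datetime(2025, 3, 4, 14, 30)))
```

In practice, the sensitivity score and preferred send hour would come from a trained model rather than hand-set fields, but the shape of the decision, personalized offer plus personalized timing, stays the same.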
Recent reports highlight this trend’s momentum. A piece from ContentGrip notes that the global AI in marketing market is projected to hit $47.32 billion in 2025, driven by a 36.6% compound annual growth rate. This growth underscores how AI isn’t merely automating tasks but enhancing strategic decision-making, with 88% of marketers already incorporating it into daily workflows.
Ethical Considerations in Data-Driven Personalization
Yet, as AI delves deeper into personalization, ethical concerns loom large. Privacy regulations like GDPR and emerging U.S. laws demand transparent data usage, pushing brands to balance customization with consent. Marketers must ensure algorithms avoid biases that could alienate segments of their audience, a point stressed in discussions on X where users debate AI’s role in fair marketing practices.
McKinsey’s analysis in their report on the next frontier of personalized marketing warns that without ethical frameworks, trust could erode. They advocate for generative AI to craft tailored narratives while respecting user boundaries, a strategy that’s gaining traction among industry leaders.
Integration with Emerging Technologies
Looking ahead, AI personalization is merging with technologies like augmented reality and voice search to create immersive experiences. Imagine virtual try-ons personalized via AI, or voice assistants recommending products based on conversational cues. Trends from ON24’s predictions for 2025 suggest that machine learning will refine these interactions, anticipating needs through pattern recognition and delivering hyper-dynamic content.
News from WebProNews echoes this, detailing how AI-driven strategies in sales and marketing are focusing on sustainability and data privacy for 2025. Their article on digital marketing trends highlights omnichannel optimization, where AI forecasts engagement across platforms, blending automation with human creativity for agile campaigns.
Challenges and Future Outlook
Despite the promise, challenges persist, including the high costs of AI implementation and the need for skilled talent. Small businesses, in particular, may struggle to compete, but accessible tools from providers like Google and IBM are democratizing access. Posts on X from influencers like Greg Isenberg discuss “vibe marketing,” where AI agents generate content calendars, revolutionizing planning and testing.
Ultimately, as Dotdigital’s blog on personalization in 2025 posits, the key lies in combining AI insights with authentic connections. Brands that master this will not only personalize but humanize their marketing, fostering long-term relationships in a data-saturated world. With innovations accelerating, 2025 could mark the pinnacle of customer-centric strategies, where AI turns every interaction into a meaningful dialogue.
Ethics & Policy
Shaping Global AI Governance at the 80th UNGA

Global AI governance is currently at a critical juncture. Rapid advancements in technology are presenting exciting opportunities but also significant challenges. The rise of AI agents — AI systems that can reason, plan, and take direct action — makes strong international cooperation more crucial than ever. To create safer and more responsible AI that benefits people and society, we must work collectively on a global scale.
Partnership on AI (PAI) has been deeply engaged in these conversations, bridging the gap between AI development and responsible policy.
Our team has crossed the globe this year, connecting with partners and collaborators at key events, from the AI Action Summit in Paris to the World AI Conference in Shanghai and the Global AI Summit on Africa in Kigali. This builds on the discussion at PAI’s 2024 Policy Forum and the Policy Alignment on AI Transparency report published last October, both of which explored how AI governance efforts align with one another and highlighted the need for international cooperation and coordination on AI policy.
Our journey next takes us to the 80th session of the United Nations General Assembly (UNGA), taking place in New York this week.
In addition to marking the 80th anniversary of the UN, this year’s UNGA is a call for renewed commitment to multilateralism. It also serves as the official launch of the new UN Global Dialogue on AI Governance. The UN is a crucial piece of the global AI governance puzzle, as a universal and inclusive forum where every nation, regardless of size or influence, has a voice in shaping the future of this technology.
To celebrate this milestone anniversary, PAI is bringing together its community of Partners, policymakers, and other stakeholders for a series of events alongside the UNGA. This is a pivotal moment that demands increased global cooperation amid a challenging geopolitical environment. Our community has identified two particularly important and challenging areas for global AI governance this year:
- The opportunities and challenges of AI agents (with 2025 dubbed the “year of agents”) across different fields, including AI safety, human connection, and public policy
- The need to build a more robust global AI assurance ecosystem, with AI assurance defined as the process of assessing whether an AI system or model is trustworthy
To inform these important discussions and build on our support for the UN Global Digital Compact, PAI is bringing both topics to the attention of the community of UN stakeholders through a series of UNGA side events and publications on both issues. The issues align with the mandates of two new UN AI mechanisms: the UN Independent International Scientific Panel on AI and the Global Dialogue.
The Scientific Panel is tasked with issuing “evidence-based scientific assessments” that synthesize and analyze existing research on the opportunities, risks, and impacts of AI.
Meanwhile, the role of the Global Dialogue is to discuss international cooperation, share best practices and lessons learned, and to facilitate discussions on AI governance to advance the sustainable development goals (SDGs), including on the development of trustworthy AI systems; the protection of human rights in the field of AI; and transparency, accountability, and human oversight consistent with international law.
AI agents are a new research topic that the international community needs to better understand, considering opportunities and potential risks in areas such as human oversight, transparency, and human rights. We expect this topic to be taken up by the Scientific Panel and brought to the attention of the Global Dialogue.
PAI’s work on AI agents includes three key publications:
- A Real-time Failure Detection Framework that provides guidance on how to monitor and thereby prevent critical failures in the deployment of autonomous AI agents, which could lead to hazards or real-world incidents that can harm people, disrupt infrastructure, or violate human rights (a simplified illustration of such monitoring follows this list)
- An International Policy Brief that offers anticipatory guidance on how to manage the potential cross-border harms and human rights impacts of AI agents, leveraging foundational global governance tools, i.e., international law, non-binding global norms, and global accountability mechanisms.
- A Policy Research Agenda that outlines priority questions that policymakers and the scientific community should explore to ensure that we govern AI agents in an informed manner domestically, regionally, and globally.
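For illustration only, the Python sketch below shows one way the kind of pre-execution monitoring described above could be structured: each proposed agent action is checked against simple rules before it runs, and violations are logged as incidents for human review. The ActionMonitor class, the rule names, and the thresholds are hypothetical and are not taken from PAI’s publication.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentAction:
    """A single action an autonomous agent is about to take."""
    tool: str            # e.g. "payments_api", "email", "file_delete"
    target: str          # resource the action touches
    estimated_cost: float

@dataclass
class ActionMonitor:
    """Checks each proposed action against failure rules before execution."""
    rules: list[Callable[[AgentAction], str | None]] = field(default_factory=list)
    incidents: list[str] = field(default_factory=list)

    def review(self, action: AgentAction) -> bool:
        """Return True if the action may proceed; log and block it otherwise."""
        for rule in self.rules:
            violation = rule(action)
            if violation:
                self.incidents.append(
                    f"BLOCKED {action.tool} -> {action.target}: {violation}")
                return False
        return True

# Hypothetical rules: cap per-action spending and gate destructive tools.
def spending_cap(action: AgentAction) -> str | None:
    return "exceeds per-action spend limit" if action.estimated_cost > 100 else None

def no_destructive_tools(action: AgentAction) -> str | None:
    return "destructive tool requires human approval" if action.tool == "file_delete" else None

monitor = ActionMonitor(rules=[spending_cap, no_destructive_tools])
print(monitor.review(AgentAction("payments_api", "vendor-invoice-17", 42.0)))  # allowed
print(monitor.review(AgentAction("file_delete", "/var/data/reports", 0.0)))    # blocked
print(monitor.incidents)
```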
At the same time, we believe a robust AI assurance ecosystem is crucial to enabling trust and unlocking opportunities for adoption in line with the SDGs and international law. Both the Scientific Panel and the Global Dialogue can help fill significant research and implementation gaps in this area.
Looking ahead, we will expand our focus on AI assurance, with plans to publish a white paper, progress report, and international policy brief at the end of 2025 and early 2026. These publications will touch on issues ranging from the challenges to effective AI assurance, such as insufficient incentives and access to documentation, to AI assurance needs in the Global South.
We hope these contributions will not only inform discussions at the UN but also in other important international AI governance forums, including the OECD’s Global Partnership on AI Expert Group Meeting in November, the G20 Summit in November, and the AI Impact Summit in India next year.
The global conversation on AI governance is still in the early stages, and PAI is committed to ensuring that it is an inclusive, informed, and effective one. To stay up to date on our work in this area, sign up for our newsletter.
Ethics & Policy
DeepMind CEO Warns AI May Repeat Social Media’s Harms Without Ethics

In a stark warning that echoes the regrets of tech’s past, Demis Hassabis, the CEO of Google DeepMind, has cautioned that artificial intelligence risks mirroring the societal pitfalls of social media if developers don’t prioritize responsibility over rapid deployment. Speaking at the Athens Innovation Summit, Hassabis highlighted how social platforms, driven by a “move fast and break things” ethos, inadvertently fostered addiction, mental health crises, and polarized echo chambers. He urged the AI industry to learn from these errors, emphasizing the need for rigorous scientific testing and international collaboration to ensure AI enhances rather than undermines human well-being.
Hassabis, a Nobel laureate in chemistry for his work on protein structure prediction, drew parallels between AI’s trajectory and social media’s history. He noted that early social networks optimized for user engagement at all costs, leading to unintended consequences like misinformation spread and societal division. In AI, similar dynamics could emerge if systems are designed to “hijack attention” without safeguards, potentially amplifying biases or creating addictive interactions that prioritize metrics over ethics.
A Call for Measured Progress in AI Development
Recent studies cited by Hassabis, including those from Google DeepMind’s own research, show AI models already exhibiting patterns akin to social media’s flaws, such as generating echo chambers through personalized content. As reported in a detailed account by Business Insider, he stressed that AI’s integration into daily life, from virtual assistants to decision-making tools, demands a balanced approach. “We must not repeat the mistakes of social media,” Hassabis said, advocating for deployment strategies that incorporate ethical frameworks from the outset.
This perspective comes amid accelerating AI advancements, where companies race to release generative models without fully addressing risks. Hassabis pointed to the importance of global cooperation, suggesting frameworks similar to those in nuclear safety or aviation, where international standards prevent catastrophic failures. He argued that while innovation is crucial, unchecked speed could lead to AI systems that exacerbate inequality or mental health issues on a scale far beyond social media’s reach.
The Risks of Engagement-Driven AI Models
Industry insiders have long debated AI’s societal impact, and Hassabis’s comments align with growing concerns voiced in outlets like The Economic Times, which detailed his warnings about addiction and echo chambers. He referenced evidence from AI experiments showing how algorithms can reinforce users’ existing beliefs, much like social media feeds that trap individuals in ideological silos. This “jagged intelligence” of current AI—brilliant in narrow tasks but inconsistent overall—could worsen if not tempered by responsible practices.
Moreover, Hassabis emphasized the need for scientific rigor in AI testing, proposing that models undergo peer-reviewed evaluations before widespread release. This contrasts with the social media era, where platforms scaled globally before mitigating harms, resulting in regulatory backlashes and public distrust. As AI edges toward artificial general intelligence, potentially within 5 to 10 years according to Hassabis’s earlier statements, the stakes are higher: systems that plan and act autonomously could amplify divisions if built on flawed incentives.
Balancing Innovation with Societal Safeguards
The call for caution isn’t new, but Hassabis’s position as a leader at one of the world’s foremost AI labs lends it weight. In a piece from CNN Business, he previously downplayed fears of job displacement while highlighting broader risks like societal fragmentation. Now, he advocates for AI to be “built to benefit society,” urging developers to embed safety protocols that prevent the kind of unchecked growth that plagued social media giants.
Critics, however, question whether such self-regulation is feasible in a competitive field dominated by profit-driven entities. Hassabis countered this by pointing to DeepMind’s own initiatives, such as ethical AI guidelines and collaborations with governments. Yet, as AI becomes ubiquitous, the industry must confront whether it can avoid social media’s fate—or if history is doomed to repeat itself in more sophisticated, pervasive forms.
Toward a Responsible AI Future
Ultimately, Hassabis’s message is a blueprint for sustainable progress: prioritize user well-being, foster international standards, and reject the rush that defined social media’s rise. As echoed in reports from AP News, he sees “learning how to learn” as a key human skill in an AI-driven world, but only if technology is harnessed responsibly. For industry leaders, this serves as a timely reminder that true innovation lies not in speed, but in foresight that safeguards society from the very tools meant to advance it.