Ajay Devgn Feels Debutants From Outside Film Families Are ‘Misguided’ About Bollywood: You Cannot Be Star…

Bollywood star Ajay Devgn, while speaking about learning how the industry works from his father Veeru Devgan, said that outsiders are often ‘misguided’ about the workings of the film world. According to him, many arrive wanting to be a star without first understanding what it takes to be an actor.

Ajay Devgn Feels New Outsiders Are ‘Misguided’

The Son of Sardaar 2 actor recently appeared on Kapil Sharma’s show with co-stars Mrunal Thakur and Ravi Kishan. When asked if he had learnt about Bollywood from his father Veeru, Ajay replied, “Whatever I’ve learnt technically and about the industry is all because of him. The kind of dedication he had and what he taught me… The honesty towards work, all of it comes from him.”

To this, Archana Puran Singh responded, “You know, this is one of the reasons that the film industry prefers people to come in from film families because they already learn professional ethics from their parents.” Ajay agreed with her and said that debutants from film families learn a lot in the initial stages.

“I am not talking about everybody because there are sensible people, but a lot of times, they come in not knowing if they want to be an actor or a star. You cannot be a star on day one; first, you need to be an actor. So, I think there’s a misunderstanding somewhere for people; they are misguided about the industry… those from outside film families. I think ultimately, it’s your hard work,” he said.

Archana’s husband, actor Parmeet Sethi, was also a part of the conversation about outsiders.

Personalization and Ethics by 2025

The Rise of AI in Tailored Customer Experiences

In the fast-evolving world of marketing, artificial intelligence is reshaping how brands connect with consumers, particularly through hyper-personalized strategies that anticipate needs before they’re even voiced. As we approach 2025, companies are leveraging AI to analyze vast datasets, from browsing histories to purchase patterns, creating experiences that feel uniquely individual. This shift isn’t just about efficiency; it’s about building loyalty in an era where generic ads no longer cut through the noise.

Take Netflix, for instance, which uses AI algorithms to recommend content based on viewing habits, a model that’s inspired countless marketers. Similarly, e-commerce giants like Amazon employ predictive analytics to suggest products, boosting conversion rates significantly. According to insights from HubSpot’s blog on AI personalization strategies, these tactics can increase customer engagement by up to 20%, emphasizing the need for real-time data processing to deliver relevant offers instantaneously.
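
As a rough illustration of the recommendation mechanics described above, here is a toy item-based collaborative filter in Python. It is a sketch over invented data, not how Netflix or Amazon actually rank content; both run far more elaborate pipelines.

```python
import numpy as np

# Toy item-based recommender: cosine similarity over a user-item
# interaction matrix. Purely illustrative; all data here is made up.

# Rows = users, columns = items; 1.0 means the user engaged with the item.
interactions = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
], dtype=float)

def item_similarity(m: np.ndarray) -> np.ndarray:
    """Cosine similarity between every pair of item columns."""
    norms = np.linalg.norm(m, axis=0, keepdims=True)
    norms[norms == 0] = 1.0              # guard against empty items
    unit = m / norms
    return unit.T @ unit

def recommend(user: int, m: np.ndarray, k: int = 2) -> list[int]:
    """Rank unseen items by total similarity to the user's history."""
    scores = item_similarity(m) @ m[user]
    scores[m[user] > 0] = -np.inf        # never re-recommend seen items
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(recommend(user=0, m=interactions))   # items ranked for user 0
```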

Predictive Analytics and Real-Time Adaptation

Predictive analytics stands at the forefront of this transformation, enabling marketers to forecast consumer behavior with remarkable accuracy. By 2025, AI tools are expected to integrate seamlessly with customer relationship management systems, allowing for dynamic content adjustments on the fly. For example, if a user abandons a cart, AI can trigger personalized emails with tailored discounts, drawing from past interactions to optimize timing and messaging.
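
The abandoned-cart flow described here can be sketched as a simple trigger plus a payload builder. The one-hour delay, discount rule, and field names below are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch of an abandoned-cart trigger. All thresholds and
# fields are assumptions for demonstration.

@dataclass
class Cart:
    user_id: str
    items: list[str]
    abandoned_at: datetime
    past_open_hour: int   # hour of day the user historically opens email

def should_trigger(cart: Cart, now: datetime) -> bool:
    """Fire once the cart has sat idle for at least an hour."""
    return now - cart.abandoned_at >= timedelta(hours=1)

def build_offer(cart: Cart) -> dict:
    """Compose a personalized email payload, timed to past behavior."""
    discount = 10 if len(cart.items) > 2 else 5   # toy discount rule
    return {
        "to": cart.user_id,
        "send_at_hour": cart.past_open_hour,       # optimize send time
        "subject": f"Still thinking it over? {discount}% off your cart",
        "items": cart.items,
    }

cart = Cart("u123", ["shoes", "socks", "hat"], datetime(2025, 1, 1, 9), 18)
if should_trigger(cart, datetime(2025, 1, 1, 11)):
    print(build_offer(cart))
```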

Recent reports highlight this trend’s momentum. A piece from ContentGrip notes that the global AI in marketing market is projected to hit $47.32 billion in 2025, driven by a 36.6% compound annual growth rate. This growth underscores how AI isn’t merely automating tasks but enhancing strategic decision-making, with 88% of marketers already incorporating it into daily workflows.

Ethical Considerations in Data-Driven Personalization

Yet, as AI delves deeper into personalization, ethical concerns loom large. Privacy regulations like GDPR and emerging U.S. laws demand transparent data usage, pushing brands to balance customization with consent. Marketers must ensure algorithms avoid biases that could alienate segments of their audience, a point stressed in discussions on X where users debate AI’s role in fair marketing practices.
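
One concrete way to act on that concern is a segment-level audit of who actually receives offers. The sketch below computes per-segment selection rates and applies the common four-fifths rule of thumb; the data and threshold are assumptions for illustration.

```python
from collections import defaultdict

# Minimal demographic-parity audit for offer targeting. Segments,
# records, and the 0.8 threshold (the "four-fifths rule") are
# illustrative assumptions.

decisions = [  # (audience segment, was the offer shown?)
    ("segment_a", True), ("segment_a", True), ("segment_a", False),
    ("segment_b", True), ("segment_b", False), ("segment_b", False),
]

shown = defaultdict(int)
total = defaultdict(int)
for segment, offered in decisions:
    total[segment] += 1
    shown[segment] += offered

rates = {s: shown[s] / total[s] for s in total}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"parity ratio = {ratio:.2f}")
if ratio < 0.8:   # four-fifths rule of thumb
    print("Warning: offer targeting may disadvantage a segment")
```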

McKinsey’s analysis in their report on the next frontier of personalized marketing warns that without ethical frameworks, trust could erode. They advocate for generative AI to craft tailored narratives while respecting user boundaries, a strategy that’s gaining traction among industry leaders.

Integration with Emerging Technologies

Looking ahead, AI personalization is merging with technologies like augmented reality and voice search to create immersive experiences. Imagine virtual try-ons personalized via AI, or voice assistants recommending products based on conversational cues. Trends from ON24’s predictions for 2025 suggest that machine learning will refine these interactions, anticipating needs through pattern recognition and delivering hyper-dynamic content.

News from WebProNews echoes this, detailing how AI-driven strategies in sales and marketing are focusing on sustainability and data privacy for 2025. Their article on digital marketing trends highlights omnichannel optimization, where AI forecasts engagement across platforms, blending automation with human creativity for agile campaigns.

Challenges and Future Outlook

Despite the promise, challenges persist, including the high costs of AI implementation and the need for skilled talent. Small businesses, in particular, may struggle to compete, but accessible tools from providers like Google and IBM are democratizing access. Posts on X from influencers like Greg Isenberg discuss “vibe marketing,” where AI agents generate content calendars, revolutionizing planning and testing.

Ultimately, as Dotdigital’s blog on personalization in 2025 posits, the key lies in combining AI insights with authentic connections. Brands that master this will not only personalize but humanize their marketing, fostering long-term relationships in a data-saturated world. With innovations accelerating, 2025 could mark the pinnacle of customer-centric strategies, where AI turns every interaction into a meaningful dialogue.




Shaping Global AI Governance at the 80th UNGA


Global AI governance is currently at a critical juncture. Rapid advancements in technology are presenting exciting opportunities but also significant challenges. The rise of AI agents — AI systems that can reason, plan, and take direct action — makes strong international cooperation more crucial than ever. To create safer and more responsible AI that benefits people and society, we must work collectively on a global scale.

Partnership on AI (PAI) has been deeply engaged in these conversations, bridging the gap between AI development and responsible policy.

Our team has crossed the globe, connecting with partners and collaborators at key events this year, from the AI Action Summit in Paris, to the World AI Conference in Shanghai, and the Global AI Summit on Africa in Kigali. This builds on the discussion at PAI’s 2024 Policy Forum and the Policy Alignment on AI Transparency report published last October, both of which explored how AI governance efforts align with one another and highlighted the need for international cooperation and coordination on AI policy.

Our journey next takes us to the 80th session of the United Nations General Assembly (UNGA), taking place in New York this week.

In addition to marking the 80th anniversary of the UN, this year’s UNGA is a call for renewed commitment to multilateralism. It also serves as the official launch of the new UN Global Dialogue on AI Governance. The UN is a crucial piece of the global AI governance puzzle, as a universal and inclusive forum where every nation, regardless of size or influence, has a voice in shaping the future of this technology.

To celebrate this milestone anniversary, PAI is bringing together its community of Partners, policymakers, and other stakeholders for a series of events alongside the UNGA. This is a pivotal moment that demands increased global cooperation amid a challenging geopolitical environment. Our community has identified two particularly important and challenging areas for global AI governance this year:

  1. The opportunities and challenges of AI agents (with 2025 dubbed the “year of agents”) across different fields, including AI safety, human connection, and public policy
  2. The need to build a more robust global AI assurance ecosystem, where AI assurance is defined as the process of assessing whether an AI system or model is trustworthy

To inform these important discussions and build on our support for the UN Global Digital Compact, PAI is bringing both topics to the attention of the community of UN stakeholders through a series of UNGA side events and publications on both issues. The issues align with the mandates of two new UN AI mechanisms: the UN Independent International Scientific Panel on AI and the Global Dialogue.

The Scientific Panel is tasked with issuing “evidence-based scientific assessments” that synthesize and analyze existing research on the opportunities, risks, and impacts of AI.

Meanwhile, the role of the Global Dialogue is to discuss international cooperation, share best practices and lessons learned, and to facilitate discussions on AI governance to advance the sustainable development goals (SDGs), including on the development of trustworthy AI systems; the protection of human rights in the field of AI; and transparency, accountability, and human oversight consistent with international law.

AI agents are a new research topic that the international community needs to better understand, considering opportunities and potential risks in areas such as human oversight, transparency, and human rights. We expect this topic to be taken up by the Scientific Panel and brought to the attention of the Global Dialogue.

PAI’s work on AI agents includes three key publications:

  1. A Real-time Failure Detection Framework that provides guidance on how to monitor, and thereby prevent, critical failures in the deployment of autonomous AI agents, failures that could lead to hazards or real-world incidents that harm people, disrupt infrastructure, or violate human rights (see the illustrative sketch after this list).
  2. An International Policy Brief that offers anticipatory guidance on how to manage the potential cross-border harms and human rights impacts of AI agents, leveraging foundational global governance tools, i.e., international law, non-binding global norms, and global accountability mechanisms.
  3. A Policy Research Agenda that outlines priority questions that policymakers and the scientific community should explore to ensure that we govern AI agents in an informed manner domestically, regionally, and globally.
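
To make the monitoring idea in the first publication concrete, here is a generic runtime-monitor sketch: every agent action passes a set of pre-execution checks and is logged before it runs. This is not PAI's actual framework; the checks, action schema, and limits are hypothetical stand-ins.

```python
from typing import Callable

# Generic runtime monitoring for an AI agent, in the spirit of the
# failure-detection idea above. Hypothetical checks and schema only.

class ActionBlocked(Exception):
    """Raised when a pre-execution check rejects an agent action."""

def no_payments(action: dict, history: list[dict]) -> bool:
    return action.get("type") != "payment"   # forbid irreversible spends

def rate_limit(action: dict, history: list[dict]) -> bool:
    return len(history) < 100                # toy cap per session

class Monitor:
    """Wraps an agent's action execution with pre-execution checks."""

    def __init__(self, checks: list[Callable[[dict, list[dict]], bool]]):
        self.checks = checks
        self.history: list[dict] = []

    def execute(self, action: dict, run: Callable[[dict], object]):
        for check in self.checks:
            if not check(action, self.history):
                raise ActionBlocked(f"{check.__name__} rejected {action}")
        self.history.append(action)          # audit trail for incidents
        return run(action)

monitor = Monitor([no_payments, rate_limit])
monitor.execute({"type": "search", "query": "UNGA schedule"}, run=print)
# monitor.execute({"type": "payment", "amount": 10}, run=print)  # would raise
```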

At the same time, we believe a robust AI assurance ecosystem is crucial to enabling trust and unlocking opportunities for adoption in line with the SDGs and international law. Both the Scientific Panel and the Global Dialogue can help fill significant research and implementation gaps in this area.

Looking ahead, we will expand our focus on AI assurance, with plans to publish a white paper, progress report, and international policy brief at the end of 2025 and early 2026. These publications will touch on issues ranging from the challenges to effective AI assurance, such as insufficient incentives and access to documentation, to AI assurance needs in the Global South.

We hope these contributions will not only inform discussions at the UN but also in other important international AI governance forums, including the OECD’s Global Partnership on AI Expert Group Meeting in November, the G20 Summit in November, and the AI Impact Summit in India next year.

The global conversation on AI governance is still in the early stages, and PAI is committed to ensuring that it is an inclusive, informed, and effective one. To stay up to date on our work in this area, sign up for our newsletter.




DeepMind CEO Warns AI May Repeat Social Media’s Harms Without Ethics


In a stark warning that echoes the regrets of tech’s past, Demis Hassabis, the CEO of Google DeepMind, has cautioned that artificial intelligence risks mirroring the societal pitfalls of social media if developers don’t prioritize responsibility over rapid deployment. Speaking at the Athens Innovation Summit, Hassabis highlighted how social platforms, driven by a “move fast and break things” ethos, inadvertently fostered addiction, mental health crises, and polarized echo chambers. He urged the AI industry to learn from these errors, emphasizing the need for rigorous scientific testing and international collaboration to ensure AI enhances rather than undermines human well-being.

Hassabis, a Nobel laureate in chemistry for his work on protein structure prediction, drew parallels between AI’s potential and social media’s history. He noted that early social networks optimized for user engagement at all costs, leading to unintended consequences like misinformation spread and societal division. In AI, similar dynamics could emerge if systems are designed to “hijack attention” without safeguards, potentially amplifying biases or creating addictive interactions that prioritize metrics over ethics.

A Call for Measured Progress in AI Development

Recent studies cited by Hassabis, including those from Google DeepMind’s own research, show AI models already exhibiting patterns akin to social media’s flaws, such as generating echo chambers through personalized content. As reported in a detailed account by Business Insider, he stressed that AI’s integration into daily life, from virtual assistants to decision-making tools, demands a balanced approach. “We must not repeat the mistakes of social media,” Hassabis said, advocating for deployment strategies that incorporate ethical frameworks from the outset.

This perspective comes amid accelerating AI advancements, where companies race to release generative models without fully addressing risks. Hassabis pointed to the importance of global cooperation, suggesting frameworks similar to those in nuclear safety or aviation, where international standards prevent catastrophic failures. He argued that while innovation is crucial, unchecked speed could lead to AI systems that exacerbate inequality or mental health issues on a scale far beyond social media’s reach.

The Risks of Engagement-Driven AI Models

Industry insiders have long debated AI’s societal impact, and Hassabis’s comments align with growing concerns voiced in outlets like The Economic Times, which detailed his warnings about addiction and echo chambers. He referenced evidence from AI experiments showing how algorithms can reinforce users’ existing beliefs, much like social media feeds that trap individuals in ideological silos. This “jagged intelligence” of current AI—brilliant in narrow tasks but inconsistent overall—could worsen if not tempered by responsible practices.
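
The reinforcement dynamic he describes can be shown with a toy simulation: a feed that always serves the highest-affinity topic concentrates a user's exposure, while uniform exposure does not. The numbers below are illustrative assumptions, not findings from any cited experiment.

```python
import random

# Toy simulation of an engagement-reinforcement loop. A feed either
# exploits the highest-affinity topic or serves topics uniformly, and
# the user's affinity rises for whatever they are shown.

def simulate(engagement_optimized: bool, steps: int = 50) -> float:
    random.seed(0)
    topics = ["politics_a", "politics_b", "sports", "science"]
    affinity = {t: 1.0 for t in topics}
    for _ in range(steps):
        if engagement_optimized:
            shown = max(affinity, key=affinity.get)   # exploit only
        else:
            shown = random.choice(topics)             # uniform exposure
        affinity[shown] += 0.1                        # engagement reinforces
    return max(affinity.values()) / sum(affinity.values())

print(f"optimized feed: {simulate(True):.0%} of affinity on one topic")
print(f"uniform feed:   {simulate(False):.0%} of affinity on one topic")
```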

Moreover, Hassabis emphasized the need for scientific rigor in AI testing, proposing that models undergo peer-reviewed evaluations before widespread release. This contrasts with the social media era, where platforms scaled globally before mitigating harms, resulting in regulatory backlashes and public distrust. As AI edges toward artificial general intelligence, potentially within 5 to 10 years according to Hassabis’s earlier statements, the stakes are higher: systems that plan and act autonomously could amplify divisions if built on flawed incentives.

Balancing Innovation with Societal Safeguards

The call for caution isn’t new, but Hassabis’s position as a leader at one of the world’s foremost AI labs lends it weight. In a piece from CNN Business, he previously downplayed fears of job displacement while highlighting broader risks like societal fragmentation. Now, he advocates for AI to be “built to benefit society,” urging developers to embed safety protocols that prevent the kind of unchecked growth that plagued social media giants.

Critics, however, question whether such self-regulation is feasible in a competitive field dominated by profit-driven entities. Hassabis countered this by pointing to DeepMind’s own initiatives, such as ethical AI guidelines and collaborations with governments. Yet, as AI becomes ubiquitous, the industry must confront whether it can avoid social media’s fate—or if history is doomed to repeat itself in more sophisticated, pervasive forms.

Toward a Responsible AI Future

Ultimately, Hassabis’s message is a blueprint for sustainable progress: prioritize user well-being, foster international standards, and reject the rush that defined social media’s rise. As echoed in reports from AP News, he sees “learning how to learn” as a key human skill in an AI-driven world, but only if technology is harnessed responsibly. For industry leaders, this serves as a timely reminder that true innovation lies not in speed, but in foresight that safeguards society from the very tools meant to advance it.


