Ethics & Policy

Amaal Mallik Supports India’s OTT Ban On Pakistani Content: Nation Before Art Or Artists

On Thursday evening (May 8), the Government of India issued a fresh advisory to OTT platforms, streaming services, and intermediaries concerning content sourced from Pakistan. Citing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the advisory emphasises that platforms must refrain from hosting or promoting any material that could compromise India’s sovereignty, integrity, or national interests. Now, singer Amaal Mallik has come out in support of the ban on Pakistani content, saying that the nation comes before any art or artist.

Amaal Mallik shows support for India’s OTT ban on Pakistani content

Taking to X (formerly known as Twitter), Amaal shared the official guidelines issued by the Government of India and wrote, “We are a nation with immense resolve. Despite being disappointed by their military and political policies, we have liked and adored Pakistani artists and welcomed them with open arms on numerous occasions.”

He added, “Yes, I understand that artistry transcends boundaries, but when it comes to our country, we must remain unified. In this time there can be no question on our patriotism. Our nation comes before any art or artists. We all must follow the law and directive to not collaborate or consume any Pakistani Content.”

As he concluded his note, Amaal requested artists and labels to follow the guidelines. He penned, “I hope the artists, international large corporations and labels set up in India listen and follow suit! #JaiHind #IndiaPakistanWar.”

Indian government’s advisory to OTT platforms

In the guidelines issued by the Government of India, the OTT platforms have also been instructed to steer clear of content originating from Pakistan, especially if it poses a threat to national security or could strain diplomatic relations with other countries. Additionally, the advisory warns against streaming material that could incite violence or disrupt public order. As a result, all streaming services operating in India have been directed to immediately remove Pakistan-origin content—be it films, web series, songs, or podcasts, regardless of whether they are part of paid subscriptions or available for free.

This move follows a string of recent terror attacks linked to Pakistan-based groups, including the deadly assault in Pahalgam on April 22, 2025, which claimed the lives of innocent tourists and locals.

Amaal Mallik’s work front

Amaal has composed music for hit films such as Jai Ho, Badrinath Ki Dulhania, Noor, Chef, Golmaal Again, and Mubarakan. He has also lent his voice to several tracks, including Aashiq Surrender Hua from Badrinath Ki Dulhania and Tu Mera Nahin from his debut album.







A dynamic dialogue with Southeast Asia to put the OECD AI Principles into action

A policy roundtable in Tokyo and a workshop in Bangkok deepened the dialogue between Southeast Asia and the OECD, fostering collaboration on AI governance across countries, sectors, and policy communities.

A dialogue strengthened by two dynamics

Southeast Asia is rapidly emerging as a vibrant hub for Artificial Intelligence. From Indonesia and Thailand to Singapore and Viet Nam, governments have launched national AI strategies, and ASEAN published a Guide on AI Governance and Ethics to promote consistency of AI frameworks across jurisdictions.

At the same time, the OECD’s engagement with Southeast Asia is strengthening. The 2024 Ministerial Council Meeting highlighted the region as a priority for OECD global relations, coinciding with the tenth anniversary of the Southeast Asia Regional Programme (SEARP) and the initiation of the accession processes for Indonesia and Thailand.

Together, these dynamics open new avenues for practical cooperation on trustworthy, safe and secure AI.

In 2025, this momentum translated into two concrete engagement initiatives: a policy roundtable in Tokyo in May and a co-creation workshop for the OECD AI Policy Toolkit in Bangkok in August. Both events shaped regional dialogues on AI governance and helped to bridge the gap between technical expertise and policy design.

Japan actively supported both initiatives, demonstrating a strong commitment to regional AI governance. At the OECD SEARP Regional Forum in Bangkok, Japan expressed hope that AI would become a new pillar of OECD–Southeast Asia cooperation, highlighting the Tokyo Policy Roundtable on AI as the first of many such initiatives. Subsequently, Japan supported the co-creation workshop in Bangkok in August, helping to ensure a regional focus and high-level engagement across Southeast Asia.

The Tokyo roundtable enabled discussions on AI in agriculture, natural disaster management, national strategies and more

On 26 May 2025, the OECD and its Tokyo Office held a regional policy roundtable, bringing together over 80 experts and policymakers from Japan, Korea, Southeast Asia, and the ASEAN Secretariat, with many more joining online. The event highlighted the importance of linking technical expertise with policy to ensure AI delivers benefits responsibly, drawing on the OECD AI Principles and Policy Observatory. Speakers from ASEAN, Thailand, and Singapore shared progress on implementing their national AI strategies.

AI’s potential came into focus through two powerful examples. In agriculture, it can speed up crop breeding and enable precision farming, while in disaster management, digital twins built from satellite and telecoms data can strengthen early warnings and damage assessments. As climate change intensifies agricultural stress and natural hazards, these cases demonstrate how AI can deliver real societal benefits—while underscoring the need for robust governance and regional cooperation, supported by OECD initiatives such as the upcoming AI Policy Toolkit.

The OECD presented activities in international AI governance, including the AI Policy Observatory, the AI Incidents and Hazards Monitor and the Reporting Framework for the G7 Hiroshima Code of Conduct for Developers of Advanced AI systems.

Bangkok co-creation workshop: testing the OECD AI Policy Toolkit

Following the Tokyo roundtable, the OECD, supported by the Foreign Ministries of Japan and Thailand, hosted the first co-creation workshop for the OECD AI Policy Toolkit on 6 August 2025 in Bangkok. Twenty senior policymakers and AI experts from across the region contributed regional perspectives to shape the tool, which will feature a self-assessment module to identify priorities and gaps, alongside a repository of proven policies and practices. The initiative, led by Costa Rica as Chair of the 2025 OECD Ministerial Council Meeting, has already gained strong backing from governments and organisations, including Japan’s Ministry of Internal Affairs and Communications.

Hands-on discussion on key challenges and practical solutions

The co-creation workshop provided a space for participants to work in breakout groups and discuss concrete challenges, explore practical solutions and design effective AI policies in key domains.

Participants identified several pressing challenges for AI governance in Southeast Asia. Designing public funding programmes for AI research and development remains difficult in an environment where technology evolves faster than policy cycles, while the need for large-scale investment continues to grow.

The scarcity of high-quality, local-language data, weak governance frameworks, limited data-sharing mechanisms, and reliance on foreign compute providers further constrain progress, alongside the shortage of locally developed AI models tailored to sectoral needs.

Participants also focused on labour market transformation, digital divides, and the need to advance AI literacy across all levels of society – from citizens to policymakers – to foster awareness of both opportunities and risks.

Participants showcased promising national initiatives, from responsible data-sharing frameworks and investment incentives for data centres and venture capital, to sectoral data platforms and local-language large language models. Countries are also rolling out capacity-building programmes to strengthen AI adoption and oversight, while seeking the right balance between regulation and innovation to foster trustworthy AI.

Across groups, participants agreed on the need to strengthen engagement, foster collaboration, and create enabling conditions for the practical deployment of AI, capacity building, and knowledge sharing.

The instruments discussed during the workshop will feed into the Toolkit’s policy repository, enabling other countries to draw on these experiences and adapt them to their national contexts.

Taking AI governance from global guidance to local practice

The Tokyo roundtable and Bangkok workshop were key milestones in building a regional dialogue on AI governance with Southeast Asian countries. By combining policy frameworks with technical demonstrations, the discussions focused on turning international guidance into practical, locally tailored measures. Southeast Asia is already shaping how global principles translate into action, and with continued collaboration, the OECD AI Policy Toolkit will provide governments in the region—and beyond—with concrete tools to design and implement trustworthy AI.

The authors would like to thank the team members who contributed to the success of these projects: Hugo Lavigne, Luis Aranda, Lucia Russo, Celine Caira, Kasumi Sugimoto, Julia Carro, Nikolas Schmidt and Takako Kitahara.

The post A dynamic dialogue with Southeast Asia to put the OECD AI Principles into action appeared first on OECD.AI.






Elon Musk predicts big on AI: ‘AI could be smarter than the sum of all humans soon…’

Elon Musk has issued a striking prediction about the future of artificial intelligence, suggesting that AI could surpass any single human in intelligence as soon as next year (2026) and potentially become smarter than all humans combined by 2030. He made the comment during a recent appearance on the All-In podcast, emphasising both his optimism about AI’s capabilities and his longstanding concern over its rapid development. While experts continue to debate the exact timelines for achieving human-level AI, Musk’s projections underscore the accelerating pace of AI advancement. His remarks also highlight the urgent need for global discussions on the societal, technological, and ethical implications of increasingly powerful AI systems.

Possible reasons behind Elon Musk’s prediction

Although Musk did not specify all motivations, several factors could explain why he made this statement:

  • Technological trajectory: Rapid advancements in AI, especially in agentic AI, physical AI, and sovereign AI highlighted in Deloitte’s 2025 trends report, indicate a fast-moving technological frontier.
  • xAI positioning: As CEO of xAI, Musk may be aiming to draw attention to AI’s capabilities and the company’s role in accelerating scientific discovery.
  • Historical pattern: Musk has a history of bold predictions, such as suggesting in 2020 that AI might overtake humans by 2025, reflecting his tendency to stimulate public discourse and action.
  • Speculative assessment: Musk’s comments may reflect his personal belief in the exponential growth of AI, fueled by increasing computational power, data availability, and breakthroughs in machine learning.

Expert context: human-level AI

While Musk’s timeline is aggressive, it aligns loosely with broader expert speculation on human-level machine intelligence (HLMI). A 2017 MIT study estimated a 50% chance of HLMI within 45 years and a 10% chance within 9 years, suggesting that Musk’s forecast is on the optimistic, yet not implausible, side of expert predictions.

Additional surveys support this spectrum of expert opinion. The 2016 Survey of AI Experts by Müller and Bostrom, which included 550 AI researchers, similarly found a 50% probability of HLMI within 45 years and a 10% probability within 9 years. More recent AI Impacts surveys (2016, 2022, 2023) indicate growing optimism, with the 2023 survey of 2,778 AI researchers estimating a 50% chance of HLMI by 2047, a shift attributed to breakthroughs such as large language models. These findings show that while Musk’s timeline is aggressive, it remains within the broader range of expert projections.

Potential dangers and opposition views

Musk’s prediction has sparked debate among experts and the public:

  • Job losses: AI capable of outperforming humans in most tasks could disrupt labor markets, potentially replacing millions of jobs in sectors from manufacturing to knowledge work.
  • Concentration of power: Superintelligent AI could concentrate power among a few corporations or governments, raising fears about inequality and control.
  • Safety concerns: Critics warn of unintended consequences if AI surpasses human intelligence without robust safety measures, including errors in decision-making or misuse in military or financial systems.
  • Ethical dilemmas: Questions about accountability, transparency, and moral responsibility for AI decisions remain unresolved, fueling ongoing debates about regulation and ethical frameworks.

Societal and generational implications

The rise of superintelligent AI may profoundly affect society:

  • Education and skill shifts: New generations may need entirely different skill sets, emphasizing creativity, critical thinking, and AI oversight rather than routine work.
  • Economic transformation: Industries could see unprecedented efficiency gains, but also significant displacement, requiring proactive retraining programs and social policies.
  • Human identity and purpose: As AI surpasses human capabilities in more domains, society may face philosophical questions about work, creativity, and the role of humans in a highly automated world.

The AI ethics debate

Musk’s comments feed into the broader discussion on AI ethics, highlighting the need for responsible AI deployment. Experts stress balancing innovation with safeguards, including:

  • Ethical AI design and transparency
  • Policy and regulatory frameworks for high-risk AI systems
  • Global cooperation to prevent misuse or unintended consequences
  • Public awareness and discourse on AI’s societal impacts








Fairfield University awarded three-year grant to lead AI ethics in education collaborative research project — EdTech Innovation Hub

The project aims to “serve the national interest by enhancing AI education through the integration of ethical considerations in AI curricula, fostering design and development of responsible and secure AI systems”, according to the project summary approved by the National Science Foundation.

Aiming to improve the effectiveness of AI ethics education for computer science students, the research will develop an “innovative pedagogical strategy”.

Funding has also been awarded to partnering institutions, bringing the total support to nearly $400,000.

Sidike Paheding, PhD, Chair and Associate Professor of Computer Science at Fairfield University’s School of Engineering and Computing, is the principal investigator of the project. 

“Throughout this project, we aim to advance AI education and promote responsible AI development and use. By enhancing AI education, we seek to foster safer, more secure, and trustworthy AI technologies,” Dr. Paheding comments.

“Among its many challenges, AI poses significant ethical issues that need to be addressed in computer science courses. This project will give faculty practical, hands-on teaching tools to explore these issues with their students.”



