China-proposed global AI cooperation organization expected to narrow global tech gap - Xinhua

A visitor interacts with a humanoid robot at the exhibition area of the 2025 World AI Conference and High-Level Meeting on Global AI Governance in east China’s Shanghai, July 26, 2025. (Xinhua/Chen Haoming)

China’s latest proposal is more than a call for cooperation — it is a strategic move to shape how AI is regulated, applied and understood globally.

by Maya Majueran

Building on President Xi Jinping’s 2023 proposal for the Global Artificial Intelligence (AI) Governance Initiative, the Chinese government has now called for the creation of a global AI cooperation organization.

In July, Chinese Premier Li Qiang announced the proposal during his address at the opening ceremony of the 2025 World AI Conference and High-Level Meeting on Global AI Governance, a three-day event held in Shanghai.

China's call comes amid intensifying competition among the world's major powers, at a time when efforts to regulate AI remain fragmented. Despite mounting geopolitical tensions, there is a shared international interest in addressing the risks posed by AI, including machine hallucinations, deepfakes and unchecked proliferation.

An urgent need exists to build a consensus on how to strike a sustainable balance between technological advancement and security. As AI becomes embedded in every aspect of daily life, including public services, healthcare, finance and national defence, societies face the dual challenge of fostering innovation while managing complex risks.

Addressing these challenges requires inclusive dialogue among governments, industry leaders, researchers and civil society. The goal must be to ensure that AI develops responsibly, ethically and in alignment with the public interest and global stability. Much like the global financial architecture that has long been dominated by Western hegemony, there is increasing recognition that AI governance should follow a more multipolar trajectory. A balanced approach is crucial to prevent any single bloc from unilaterally shaping the future of this transformative technology.

China envisions the proposed organization as a comprehensive, inclusive platform for international AI cooperation. It aims to foster broad participation that reflects the diverse priorities of countries across the globe. A key objective is to address the growing "AI divide", the technological gap between advanced economies and developing countries. Without coordinated action, developing countries risk being further marginalized in the accelerating global AI race, deepening existing inequalities.

The initiative emphasizes pragmatic, action-oriented collaboration to translate shared objectives into tangible outcomes. China seeks to unite countries to promote innovation, share technological expertise, and coordinate AI-related policies in a spirit of mutual benefit.

The proposed organization would also work to unlock the transformative potential of AI across sectors such as healthcare, education, agriculture, and industry. China hopes this will catalyse more equitable global development, fostering inclusive growth, shared prosperity and stability in an increasingly interconnected digital world.

Robot GEAIR carries on hybrid pollination work at a greenhouse in Beijing, capital of China, Nov. 29, 2024. (IGDB of the CAS/Handout via Xinhua)

The fragmentation in global AI governance stems in large part from the dominance of a few powerful countries pursuing narrow national interests. For decades, the West has disproportionately benefited from technological progress. However, the emergence of China, India, Singapore and other innovation-driven Global South countries is beginning to challenge this status quo.

Now is the time for the international community to align efforts toward establishing a robust, consensus-based framework for global AI governance, one that equitably serves the interests of all countries.

China is taking the lead by inviting interested countries to participate in shaping the proposed organization’s structure and agenda. It is reaffirming its commitment to advancing both multilateral and bilateral cooperation — a strategic yet inclusive approach.

China’s proposal signals a shift from passive participation to active leadership in global AI rulemaking. Its growing confidence in its AI capabilities, including large language models, facial recognition and industrial applications, positions it as a credible leader in this arena.

China is also offering to share its technologies, resources, and insights with the international community. This includes providing training, infrastructure and technology transfer to support other countries. By doing so, China positions itself as a partner in equitable development and a counterbalance to Western dominance in AI.

Its support for open-source development reflects a commitment to shared growth over control or profit, signaling a willingness to empower other countries, particularly in the Global South, through collaborative innovation.

China has consistently promoted international cooperation in both software and hardware technologies, recognizing that addressing global AI challenges requires collective action. Through joint research, technical partnerships and knowledge exchange, China aims to democratize access to advanced tools, frameworks and platforms.

This strategy aligns with China’s broader vision of inclusive technological growth. It emphasizes key principles such as “AI for good,” fairness, respect for national sovereignty, and the development of non-discriminatory global standards.

Facilitating cross-border research collaboration is another major goal. By undertaking such efforts, China aims to reshape its image from a strategic rival to a constructive global partner.

Yet a key question remains: can a truly inclusive AI governance framework be built in a deeply divided geopolitical landscape? Like it or not, China’s approach, especially its willingness to share knowledge and promote open-source collaboration, is gaining traction, particularly among Global South countries. These countries increasingly view China as a transparent and reliable partner, in contrast to traditional Western frameworks that often come with geopolitical conditions.

China believes that by providing access to advanced AI tools, it can forge stronger political and economic ties through technology-driven diplomacy. Western powers, by contrast, tend to restrict AI access to preserve their strategic advantage and profit through technological concentration. 

A visitor interacts with a robot equipped with intelligent dexterous hands at the 2025 World AI Conference (WAIC) in east China’s Shanghai, July 29, 2025. (Xinhua/Fang Zhe)

As the AI arms race accelerates, the architecture of global technology governance is undergoing a profound transformation. China’s latest proposal is more than a call for cooperation — it is a strategic move to shape how AI is regulated, applied and understood globally.

Ultimately, the success of this initiative will hinge on its reception, particularly among Global South countries. These countries will play a decisive role in determining whether a truly multipolar AI governance structure emerges or whether current Western-led frameworks continue to prevail.

Editor’s note: Maya Majueran currently serves as the director of Belt & Road Initiative Sri Lanka, an independent and pioneering organization with strong expertise in Belt and Road Initiative advice and support.

The views expressed in this article are those of the author and do not necessarily reflect the positions of Xinhua News Agency.





AI algorithms can detect vision problems years before they actually appear, says ZEISS India


Artificial intelligence (AI) algorithms and other deep technologies can help detect vision problems years before any traces of their symptoms appear, and the future of eye care and good eyesight will therefore rely significantly on predictive and preventive innovations driven by robotics, generative AI and deep tech, according to ZEISS India, a subsidiary of Carl Zeiss AG, the German optics, opto-electronics and medical technology company.

Traditionally, eye scans relied heavily on human analysis, and significant effort was required to analyse huge volumes of data. “However, AI proposes to aid the clinical community with its ability to analyse huge volumes of data with high accuracy. It helps detect anomalies at early stages of disease onset, thereby solving one of the biggest challenges in eye care, late detection, seen in emerging economies, including India,” Dipu Bose, Head, Medical Technology, ZEISS India and Neighbouring Markets, told The Hindu.

For example, he said, conditions like diabetic retinopathy, glaucoma or macular degeneration often begin with subtle changes in the retina. AI would be able to catch early indicators, even faint traces of these conditions, years before patients become aware of any symptoms, so that timely action can be taken to prevent irreversible blindness.

According to Mr. Bose, AI, as a well-trained partner, would be able to analyse thousands of eye images in seconds with a high degree of accuracy. By learning patterns from massive datasets of eye scans and medical records, it becomes capable of spotting the tiniest changes that the human eye might miss.
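For illustration, the workflow Mr. Bose describes, training a model on labelled scans so it can flag subtle early changes, can be sketched with a generic image classifier. The sketch below is not ZEISS's pipeline; the folder layout, class names and model choice are assumptions made for this example.

```python
# Illustrative sketch only: a generic image classifier for retinal scans,
# not ZEISS's actual pipeline. The folder layout, class names and model
# choice are assumptions made for this example.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: scans/train/healthy/*.png and scans/train/early_signs/*.png
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("scans/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Reuse a pretrained backbone; retrain only the final layer for two classes
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # short loop, purely for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

A clinical system would, of course, involve far larger and expertly labelled datasets, rigorous validation and regulatory clearance before any screening use.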

The future of eye care would rely significantly on predictive and preventive innovation, with technology playing an essential role in solutions that allow for earlier detection, more accurate diagnoses and tailored treatments, he forecast, adding that Indian eyecare professionals were increasingly adopting new-age technologies to ensure better patient outcomes. As a result, AI, generative AI, robotics and deep tech were driving a significant shift in clinical outcomes, he observed.

“This is precisely why we call it preventive blindness. In India, this is becoming increasingly relevant as the majority of the population do not go for regular eye check-ups and they visit an eye doctor only when their vision is already affected,” Mr. Bose said.

Early intervention would lead to better outcomes, fewer inefficiencies and reduced healthcare costs, he said. “ZEISS contributes to this by advancing medical technologies for diagnosis, surgical interventions, and visualization, ultimately improving patient outcomes and quality of life,” he claimed.

For instance, the ZEISS Surgery Optimiser app is an AI-powered tool that allows young surgeons to learn from uploaded and segmented surgery videos of experienced cataract surgeons. Similarly, in diagnostics, ZEISS is leveraging AI through its Pathfinder solution, an integrated deep-learning and AI-based support tool. These technologies can support eye care professionals in making data-driven decisions by visualising and analysing clinical workflows, drawing on real-time surgical data to help young clinicians identify variations, optimise surgical steps and improve procedural consistency.

“These insight-driven technologies are expected to help bridge experience gaps, improve surgical confidence, and ultimately enhance patient outcomes across the country,” Mr. Bose anticipated.

However, he added, tackling unmet needs and ensuring early diagnosis of diseases would require a fundamental shift: from reactive care to proactive and precision-driven eye-care. “This means leveraging technology not just to treat but to predict, prevent, and personalise patient care before even the symptoms of the disease show up,” he further said.

The eye-tech market is growing in India. The country's ophthalmic devices market was valued at $943.8 million in 2024 and is expected to reach $1.54 billion by 2033, growing at a CAGR of 5.23%. The global eye-tech market was valued at approximately $74.67 billion in 2024 and is projected to reach $110.33 billion by 2030, a CAGR of 6.9%.
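As a quick sanity check, these projections roughly line up with the standard compound annual growth rate formula; the short calculation below assumes nine compounding periods for 2024-2033 and six for 2024-2030.

```python
# Quick check of the reported market projections using the standard
# compound-growth formula: future = present * (1 + CAGR) ** years.
start_india, cagr_india, years_india = 943.8e6, 0.0523, 9     # USD, 2024 -> 2033
projected_india = start_india * (1 + cagr_india) ** years_india
print(f"India ophthalmic devices, 2033: ${projected_india / 1e9:.2f}B")
# ~ $1.49B; the reported $1.54B suggests ten compounding periods or a rounded CAGR

start_global, cagr_global, years_global = 74.67e9, 0.069, 6   # USD, 2024 -> 2030
projected_global = start_global * (1 + cagr_global) ** years_global
print(f"Global eye-tech, 2030: ${projected_global / 1e9:.2f}B")   # ~ $111B
```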

Published – September 06, 2025 11:21 am IST




AI and cybersecurity: India’s chance to set a responsible global digital standard


India’s digital economy is experiencing extraordinary growth, driven by government initiatives, private enterprise, and widespread technological adoption across users from diverse socio-economic backgrounds. Artificial intelligence (AI) is now woven into the fabric of organisational operations, shaping customer interactions, streamlining product development, and enhancing overall agility. Yet, as digitisation accelerates, the nation’s cyber risk landscape is also expanding—fuelled by the very AI innovations that are transforming business.

In a rapidly evolving threat landscape, human error remains a persistent vulnerability. A recent cybersecurity survey found that 65% of enterprises worldwide now consider AI-powered email phishing the most urgent risk they face, and India's rapidly growing digital user base and surging data volumes only heighten that exposure.

Yet, there’s a strong opportunity for India to leverage its unique technical strengths to lead global conversations on secure, ethical, and inclusive digital innovation. By championing responsible AI and cybersecurity, the country can establish itself not only as a global leader but also as a trusted hub for safe digital solutions.

The case for a risk-aware, innovation-led approach

While AI is strengthening security measures with rapid anomaly detection, automated responses, and cost-efficient scalability, these same advancements are also enabling attackers to move faster and deploy increasingly sophisticated techniques to evade defences. The survey shows that 31% of organisations that experienced a breach faced another within three years, underscoring the need for ongoing, data-driven vigilance.

Globally, regulators are deliberating on how to ensure greater AI accountability through frameworks with tiered risk assessments, data traceability and transparent decision-making, as seen in the EU AI Act, the National Institute of Standards and Technology's AI Risk Management Framework in the US, and the Ministry of Electronics and Information Technology's AI governance guidelines in India.

India's digital policy regime is evolving with the enactment of the Digital Personal Data Protection Act and other reforms. Its globally renowned IT services sector, increasing cloud adoption and population-scale digital solutions offer a template other nations can follow to leapfrog in their digital transformation journeys. However, continued collaboration is needed on consistent standards, regulatory frameworks and legislation. Such an approach can empower Indian developers to build innovative, compliant solutions with the agility to serve both Indian and global markets.

Smart AI security: growing fast, staying steady

The survey highlights that more than 90% of surveyed enterprises are actively adopting secure AI solutions, underscoring the high value organisations place on AI-driven threat detection. As Indian companies expand their digital capabilities with significant investments, security operations are expected to scale efficiently. Here, AI emerges as an essential ally: streamlining security operations centres, accelerating response times and continuously monitoring hybrid cloud environments for unusual patterns in real time.
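As a concrete, if simplified, picture of what AI-driven monitoring in a security operations centre involves, the snippet below trains an off-the-shelf anomaly detector on baseline telemetry and scores new events; the feature set and numbers are invented for illustration and do not represent any particular vendor's product.

```python
# Toy illustration of AI-assisted anomaly detection for a security operations
# centre. The feature set, numbers and threshold are invented for this example
# and do not represent any particular vendor's product.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, failed_logins, mb_sent_out] for one host (synthetic baseline)
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[60, 1, 5], scale=[10, 1, 2], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score fresh telemetry as it arrives; -1 flags an outlier for analyst review
new_events = np.array([
    [62, 0, 4.8],      # looks like normal traffic
    [310, 42, 96.0],   # burst of failed logins plus heavy data egress
])
print(detector.predict(new_events))   # e.g. [ 1 -1 ]
```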

Boardroom alignment and cross-sector collaboration

One encouraging trend is the increasing involvement of executive leadership in cybersecurity. More boards are forming dedicated cyber-risk subcommittees and embedding risk discussions into broader strategic conversations. In India too, this shift is gaining momentum as regulatory expectations rise and digital maturity improves.

With the lines between IT, business, and compliance blurring, collaborative governance is becoming essential. The report states that 58% of organisations view AI implementation as a shared responsibility between executive leadership, privacy, compliance, and technology teams. This model, if institutionalised across Indian industry, could ensure AI and cybersecurity decisions are inclusive, ethical, and transparent.

Moreover, public-private partnerships — especially in areas like cyber awareness, standards development, and response coordination — can play a pivotal role. The Indian Computer Emergency Response Team (CERT-In), a national nodal agency with the mission to enhance India’s cybersecurity resilience by providing proactive threat intelligence, incident response, and public awareness, has already established itself as a reliable incident response authority.

A global opportunity for India

In many ways, the current moment represents a call to create the conditions and the infrastructure to lead securely in the digital era. By leveraging its vast pool of engineering talent, proven capabilities in scalable digital infrastructure and a culture of cost-efficient innovation, India can not only safeguard its own digital future but also help shape global norms for ethical AI deployment. This is India's moment to lead, not just in technology but in trust.

This article is authored by Saugat Sindhu, Global Head – Advisory Services, Cybersecurity & Risk Services, Wipro Limited.

Disclaimer: The views expressed in this article are those of the author/authors and do not necessarily reflect the views of ET Edge Insights, its management, or its members




Nvidia says GAIN AI Act would restrict competition, likens it to AI Diffusion Rule


If passed into law, the bill would enact new trade restrictions mandating that exporters obtain licenses and approval for shipments of silicon exceeding certain performance caps. [File] | Photo Credit: REUTERS

Nvidia said on Friday that the GAIN AI Act would restrict global competition for advanced chips, with effects on U.S. leadership and the economy similar to those of the AI Diffusion Rule, which put limits on the computing power countries could have.

Short for Guaranteeing Access and Innovation for National Artificial Intelligence Act, the GAIN AI Act was introduced as part of the National Defense Authorization Act and stipulates that AI chipmakers prioritize domestic orders for advanced processors before supplying them to foreign customers.

“We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips,” an Nvidia spokesperson said.

If passed into law, the bill would enact new trade restrictions mandating exporters obtain licenses and approval for the shipments of silicon exceeding certain performance caps.

“It should be the policy of the United States and the Department of Commerce to deny licenses for the export of the most powerful AI chips, including such chips with total processing power of 4,800 or above and to restrict the export of advanced artificial intelligence chips to foreign entities so long as United States entities are waiting and unable to acquire those same chips,” the legislation reads.
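To make the quoted threshold concrete, the sketch below checks a chip's score against the 4,800 cap. It assumes the bill would rely on the "total processing performance" metric used in existing U.S. export-control rules (roughly 2 x MAC throughput in TOPS x bit length of the operation); that assumption and the sample chip figure are illustrative, not drawn from the bill's text.

```python
# Illustrative only. The 4,800 figure comes from the bill text quoted above; the
# "total processing performance" formula (2 x MAC throughput in TOPS x bit length
# of the operation) mirrors existing US export-control rules and is assumed here.
# The sample chip figure is a rough, non-authoritative number.
TPP_THRESHOLD = 4800

def total_processing_performance(mac_tops: float, bit_length: int) -> float:
    """Assumed metric: 2 x MacTOPS x bit length of the operation."""
    return 2 * mac_tops * bit_length

def would_need_license(mac_tops: float, bit_length: int) -> bool:
    """True if the chip meets or exceeds the cap named in the quoted bill text."""
    return total_processing_performance(mac_tops, bit_length) >= TPP_THRESHOLD

# Example: a chip delivering ~156 trillion MACs/s at 16-bit precision scores
# 2 * 156 * 16 = 4,992, which sits above the 4,800 cap.
print(would_need_license(mac_tops=156, bit_length=16))   # True
```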

The rules mirror some conditions under former U.S. President Joe Biden's AI Diffusion Rule, which allocated certain levels of computing power to allies and other countries.

The AI Diffusion Rule and the GAIN AI Act are attempts by Washington to prioritise American needs, ensuring domestic firms gain access to advanced chips while limiting China's ability to obtain high-end technology, amid fears that the country would use AI capabilities to supercharge its military.

Last month, U.S. President Donald Trump made an unprecedented deal with Nvidia to give the government a cut of its sales in exchange for resuming exports of banned AI chips to China.


