Tools & Platforms
Will China’s AI boom rewrite the global order?

Chinese AI models are closing the gap with Western models, with a growing number of companies and organizations now integrating them into their operations. This poses not only a competitive threat to American and other Western firms, but also a geopolitical risk: if this trend gains global traction, it could give the Chinese Communist Party a conduit for spreading propaganda, disseminating fake news, and siphoning sensitive information worldwide.
China has already proven it can compete with the United States in AI, despite operating under the restrictions imposed by U.S. semiconductor export controls. In January, DeepSeek unveiled its R1 model, which delivered performance comparable to models from OpenAI or Google but, according to the company, was trained at a fraction of their cost and compute, and is far cheaper to run. Since then, other Chinese AI companies, including Alibaba, Huawei, and Baidu, have launched models with similar characteristics.
These models are now gaining traction in a field that, until recently, was dominated almost exclusively by Western, particularly American, companies. According to the Wall Street Journal, international banks, public universities, energy companies, and other businesses, mainly in Europe, the Middle East, Africa, and Asia, are adopting models from DeepSeek or Alibaba as alternatives to American ones. This is partly due to lower operating costs, which offset the modest gap in capabilities. HSBC, the world’s fourth-largest bank, has begun internal tests of DeepSeek’s models, as has Britain’s Standard Chartered. Saudi Aramco, the world’s largest oil company, recently installed DeepSeek in its main data center. In Japan, AI startup Abeja chose to develop custom models for the Ministry of Economy, Trade and Industry using China’s Qwen models instead of those from Meta or Google.
OpenAI remains the dominant player in the field, with ChatGPT’s app reaching 910 million downloads, according to Sensor Tower. But DeepSeek is emerging as a serious rival, with its app already boasting 125 million downloads. On Latenode, a platform that helps organizations build custom AI tools, at least 20% of users now choose DeepSeek models.
One key reason for the success of Chinese models lies in their strategic priorities. American AI companies invest heavily in breakthroughs and the pursuit of artificial general intelligence. Chinese companies, by contrast, focus on developing practical, immediately useful applications designed to attract new users. This may be less ambitious than building superintelligence, but it is more relevant for businesses today. Chinese firms also benefit from releasing many models as open source, giving users greater control over implementation and customization.
The immediate casualties of this trend are American AI companies, which spend vast sums developing advanced models and rely on large business contracts to generate essential revenue. OpenAI, for example, has been expanding its operations abroad, opening offices in Europe and Asia this year. Unsurprisingly, American executives are among the first to raise alarms about the global spread of Chinese models. “The No. 1 factor that will define whether the U.S. or China wins this race is whose technology is most broadly adopted in the rest of the world,” Microsoft President Brad Smith told the U.S. Senate recently. “Whoever gets there first will be difficult to supplant.”
Chinese dominance in AI could have far-reaching consequences. DeepSeek’s R1 model, for instance, censors content inconvenient to the Communist Party, refusing to answer questions about events such as the Tiananmen Square massacre and asserting that Taiwan is part of China. Modern AI chatbots are already emerging as replacements for traditional search engines like Google. When such bots are tightly regulated by Beijing, they can easily become channels for spreading propaganda and misinformation in the confident, authoritative tone typical of generative AI.
At the same time, these models can serve as powerful surveillance tools. Chatbots can build detailed personal profiles of users. If Chinese companies grant government access to this data, Beijing could conduct sweeping surveillance of individuals worldwide, including those in sensitive positions, mapping their identities, roles, and potential vulnerabilities, and helping to identify people who might be susceptible to pressure or blackmail.
On a geopolitical level, much of America’s global power today rests on the fact that the technological products and services the world uses are largely developed, manufactured, and operated by U.S. firms. If China wins the current AI race, the United States could lose a significant share of that leverage, while Beijing gains it.
Tools & Platforms
AI home appliances: new normal at IFA 2025 – Chosun Ilbo
Tools & Platforms
How can AI enhance healthcare access and efficiency in Thailand?

Support accessible and equitable healthcare
Julia continued by noting that Thailand has been praised for its efforts in medical technology, ranking first or second in ASEAN.
However, she acknowledged the limitations of medical technology development, not just in Thailand, but across the region, particularly regarding the resources and budgets required, as well as regulations in each country.
“Medical technology” will be one of the driving forces of Thailand’s economy, contributing to the enhancement of healthcare services to international standards, increasing competitiveness in the global market, and promoting equitable access to healthcare.
It will also encourage the development of the medical equipment industry to become more self-reliant, reducing dependency on imports, and generating new opportunities through health tech startups.
Julia further explained that Philips has long supported Thailand’s medical technology sector, working to improve access to healthcare and ensure equity for all.
Examples include donations of 100 patient monitoring devices worth around 3 million baht to the Ministry of Public Health to assist hospitals affected by the 2011 floods, as well as providing ultrasound echo machines to various hospitals in collaboration with the Heart Association of Thailand to support mobile healthcare units in rural areas.
“Access to healthcare services is a major challenge faced by many countries, especially within local communities. Thailand must work to integrate medical services effectively,” she said.
“Philips has provided medical technology in various hospitals, both public and private, as well as in medical schools. Our focus is on medical tools for treating diseases such as stroke, heart disease, and lung diseases, which are prevalent among many patients.”
AI enhances predictive healthcare solutions
Thailand has been placed on the “shortlist” of countries set to launch Philips’ new products soon after their global release. However, the product launch in Thailand will depend on meeting regulatory requirements, safety standards, and relevant policies for registration.
Julia noted that economic crises, conflicts, or changes in US tariff rates may not significantly impact the importation of medical equipment.
Philips’ direction will continue to focus on connected healthcare solutions, leveraging AI technology for processing and predictive analytics. This allows for early predictions of patient conditions and provides advance warnings to healthcare professionals or caregivers.
Additionally, Philips places significant emphasis on AI research, particularly in heart disease. The company combines this research with innovations in image-guided therapy, connecting all devices and patient data for heart disease patients.
This enables doctors and nurses to monitor patient conditions remotely, whether they are in another room within the hospital or outside of it, ensuring accurate diagnosis, treatment, and more efficient patient monitoring.
“Connected care”: seamless healthcare integration
“Connected care” is a solution that supports continuous care by connecting patient information from the moment they arrive at the hospital or emergency department, through surgery, the ICU, general wards, and post-discharge recovery at home.
In Thailand, Philips’ HPM and connected care systems are widely used, particularly in large hospitals and medical schools.
The solution is based on three key principles:
- Seamless: Patient data is continuously linked, from the operating room to the ICU and general wards, without interruption. This differs from traditional systems, where information is often lost in between stages.
- Connected: Medical devices at the bedside, such as drug dispensers, saline drips, and laboratory data, are connected to monitors, providing doctors with an immediate overview of the patient’s condition.
- Interoperable: Patient data can be transferred to all departments, enabling doctors to track test results and view patient information anywhere, at any time. This reduces redundant tasks and increases the time available for direct patient care.
Tools & Platforms
Bridging the AI Regulatory Gap Through Product Liability

Scholar proposes applying product liability principles to strengthen AI regulation.
In a world where artificial intelligence (AI) is evolving at an exponential pace, its presence is steadily reshaping relationships and risks. Some actors deliberately abuse AI technology to harm others, but AI systems can also cause harm without any malicious human intent. Individuals have reported forming deep emotional attachments to AI chatbots, sometimes perceiving them as real-life partners. Other chatbots have deviated from their intended purpose in harmful ways: a mental health chatbot, for example, rather than providing emotional support, gave out diet advice instead.
Despite growing public concern over the safety of AI systems, there is still no global consensus on the best approach to regulate AI.
In a recent article, Catherine M. Sharkey of the New York University School of Law argues that AI regulation should be informed by the government’s own experiences with AI technologies. She explores how lessons from the approach of the Food and Drug Administration (FDA) to approving high-risk medical products, such as AI-driven medical devices that interpret medical scans or diagnose conditions, can help shape AI regulation as a whole.
Traditionally, FDA requires that manufacturers demonstrate the safety and effectiveness of their products before they can enter the market. But as Sharkey explains, this model has proven difficult to apply to adaptive AI technologies that can evolve after deployment: under traditional frameworks, each modification would require a separate marketing submission, an approach ill-suited to systems that continuously learn and change. To ease regulatory hurdles for developers, particularly those whose products update frequently, FDA is moving toward a more flexible framework that relies on post-market surveillance. Sharkey highlights the role of product liability law, a framework traditionally applied to defective physical goods, in addressing accountability where static regulations fail to manage the risks that emerge once AI systems are in use.
FDA has been at the vanguard of efforts to revise its regulatory framework to fit adaptive AI technologies. Sharkey highlights that FDA shifted from a model emphasizing pre-market approval, where products must meet safety and effectiveness standards before entering the market, to one centered on post-market surveillance, which monitors products’ performance and risks after AI medical products are deployed. As this approach evolves, she explains that product liability serves as a crucial deterrent against negligence and harm, particularly during the transition period before a new regulatory framework is established.
Critics argue that regulating AI requires a distinct approach, as no prior technological shift has been as disruptive. Sharkey contends that these critics overlook the strength of existing liability frameworks and their ability to adapt to AI’s evolving nature.
Sharkey argues that crafting pre-market regulations for new technologies can be particularly difficult due to uncertainties about risks.
Further, she notes that regulating emerging technology too early could stifle innovation. Sharkey argues that product liability offers a dynamic alternative because, instead of requiring regulators to predict and prevent every possible AI risk in advance, it allows agencies to identify failures as they occur and adjust regulatory strategies accordingly.
Sharkey emphasizes that FDA’s experience with AI-enabled medical devices serves as a meaningful starting point for developing a product liability framework for AI. In developing such a framework, she draws parallels to the pharmaceutical drug approval process. When a new drug is introduced to the market, its full risks and benefits remain uncertain. She explains that both manufacturers and FDA gather extensive real-world data after a product is deployed. In light of that process, she proposes that the regulatory framework be adjusted so that manufacturers either return to FDA with updated information or face tort lawsuits serving as a corrective mechanism. In this way, product liability has an “information-forcing” function, ensuring that manufacturers remain accountable for risks that surface post-approval.
As Sharkey explains, the U.S. Supreme Court’s decision in Riegel v. Medtronic set an important precedent for the intersection of regulation and product liability. The Court ruled that most product liability claims related to high-risk medical devices approved through FDA’s pre-market approval process—a rigorous review that assesses the device’s safety and effectiveness—are preempted. This means that manufacturers are shielded from state-law liability if their devices meet FDA’s safety and effectiveness standards. In contrast, Sharkey explains that under Riegel, devices cleared under FDA’s pre-market notification process do not receive the same immunity, because that pathway does not involve a full safety and effectiveness review but instead allows devices to enter the market if they are deemed “substantially equivalent” to existing ones.
Building on Riegel, Sharkey proposes a model in which courts assess whether a product liability claim raises new risk information that was not considered by FDA in its original risk-benefit analysis at the time of approval. Under this framework, if the claim introduces evidence of risks beyond those previously weighed by the agency, the product liability lawsuit should be allowed to proceed.
Sharkey concludes that the rapid evolution of AI technologies and the difficulty of predicting their risks make crafting comprehensive regulations at the pre-market stage particularly challenging. In this context, she asserts that product liability law becomes essential, serving both as a deterrent and an information-forcing tool. Sharkey’s model holds promise for addressing AI harms in a way that accommodates the adaptive nature of machine learning systems, as illustrated by FDA’s experience with AI-enabled technologies. Instead of creating rules in a vacuum, she argues that regulators could benefit from the feedback loop between tort liability and regulation, which allows for some experimentation with standards before the regulator commits to a formal pre-market rule.