As concerns over the risks of AI grow, strengthening governance emerges as a way for Hong Kong to harness the technology safely. Experts call for a unified regulatory standard with clear risk classification, robust privacy protection, and transparency, backed by a dedicated AI oversight body, to position the city as a leader in responsible AI development. Oswald Chan reports from Hong Kong.
As more people adopt artificial intelligence at work and for personal use, the technology is fast becoming as essential and ubiquitous as electricity and running water.
Many believe that regular use of AI means its benefits outweigh its drawbacks.
The University of Melbourne and global professional services firm KPMG surveyed 48,000 people in 47 countries between November 2024 and January 2025, and found widespread complacency in how respondents used the technology.
According to the poll, 66 percent of the respondents relied on AI output without evaluating its accuracy, while 56 percent made mistakes at work using AI. Almost half of them uploaded sensitive company information onto public AI tools like ChatGPT, creating complex risks for organizations.
The study also showed that although 66 percent of those surveyed were already using AI, some regularly, just 46 percent were willing to trust AI systems, reflecting the tension between the technology's obvious benefits and its perceived risks. Such wariness should be addressed with a sound AI corporate governance framework; without one, organizations risk a loss of trust and reputation, stifled innovation, impeded investment, and a talent drain.
“Good AI governance at the corporate level makes Hong Kong companies more competitive. At government level, good AI governance offers certainty, attracts market investment, boosts the city’s global reputation as a reliable AI hub, and ensures that society is ready for collective AI competitiveness,” says Roman Fan Wei, managing partner of the Deloitte China AI Institute.
The Hong Kong Special Administrative Region’s common law system, expertise in financial regulation and abundant cross-border knowledge exchange channels are the strengths it can leverage to buttress AI governance. Hong Kong can make up ground by introducing a comprehensive development blueprint, bringing adequate technical expertise in the field into regulatory bodies, strengthening public engagement in discussions about its social and ethical aspects, and creating a cross-border AI governance framework within the 11-city Guangdong-Hong Kong-Macao Greater Bay Area.
Multifaceted strategy
Legal experts and business advisory leaders say Hong Kong must improve in four aspects — regulation, risk classification, privacy protection, and transparency.
At present, the SAR has no dedicated legislation on AI, with existing laws and regulations — particularly those relating to data protection, intellectual property, anti-discrimination and cybersecurity — applying to AI applications by default.
There are also concerns about whether Hong Kong’s existing intellectual property laws adequately regulate AI usage, particularly regarding copyright protection for AI-generated works.
Nick Chan Hiu-fung, a partner at international law firm Squire Patton Boggs and a Hong Kong deputy to the National People’s Congress, suggests that Hong Kong should mull a stand-alone AI ordinance based on the principles of accountability, traceability, fairness, ethical practice, privacy, safety and human oversight.
“On one hand, the proposed AI legislation should be people-centric, encouraging AI development while reducing the hallucinations and biases created by AI. On the other hand, such legislation should not restrain the development of the AI industry and its technology,” he tells China Daily.
In addition, Peter Kwon Chan-doo, a Hong Kong-based partner at global law firm RPC, says it is important for regulators to ensure that guidelines and legislation are consistent to help enterprises conduct business planning and budgeting in AI deployment.
There is no dedicated government department in Hong Kong overseeing AI governance. Instead, the regulatory framework is embedded in sector-specific regulations enforced by multiple bodies, including the Digital Policy Office, the Office of the Privacy Commissioner for Personal Data, the Hong Kong Monetary Authority, and the Securities and Futures Commission. Effective governance requires coordination among these agencies to prevent regulatory gaps.
“As AI technology is complex and evolving rapidly, having a dedicated AI regulatory body, comprising government regulators, industry leaders, academic researchers, cybersecurity professionals, public representatives and legal experts, is absolutely important for strengthening international alignment and the city’s status as a global AI hub,” says Fan.
Deloitte China Trustworthy AI Partner Silas Zhu Hao believes that forming systems and organizational structures should be the priority of the proposed AI governance body. “It should provide a reliable framework to help organizations set up AI regulatory principles by customizing those precepts into detailed procedures, processes or interpretations to formulate a risk-based approach for AI governance,” Zhu says.
Chan also advocates for creating an AI governance body to oversee the technology’s development and use, as AI companies would benefit from a unified regulatory framework for governance. “The administration can consider establishing an AI policy office within its organizational structure as a new AI policy mechanism. Hong Kong can consider setting up an AI court to deal with legal cases involving AI technology,” Chan says.
The Commerce and Economic Development Bureau and the Intellectual Property Department have proposed introducing a copyright infringement exception in the Copyright Ordinance, allowing reasonable use of copyrighted works for text and data mining, and computational data analysis and processing, for the training of AI models.
Chan says he backs refining the Copyright Ordinance to encourage international AI companies to settle in Hong Kong, and local AI companies to expand globally.
As for classifying risks, Hong Kong adopts a risk-based approach to AI governance, requiring that the types and extent of risk mitigation measures be proportionate to the risk levels involved.
Given AI's vast potential for both benefit and harm, Zhu emphasizes the need for adequate risk controls. “With limited resources, careful allocation is required to achieve the best outcomes in AI control. We could use AI to validate other AI models, but we also need to balance the costs and benefits of AI control,” Zhu says.
“For high-risk AI systems whose outputs may significantly impact individuals, companies should adopt a human-in-the-loop approach to ensure human operators control the decision-making process to mitigate potential errors or improper outputs from AI models,” says Kwon.
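To make the risk-proportionality idea concrete, here is a minimal, purely illustrative sketch of how an organization might route AI output according to an assessed risk tier, with a human-in-the-loop gate for high-risk systems. The tiers, names and controls below are invented for illustration and are not drawn from any Hong Kong guideline or regulator.

```python
from enum import Enum

# Toy sketch of a risk-proportionate control pipeline (hypothetical tiers).
class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def route_output(risk: Risk, model_output: str) -> str:
    """Apply a mitigation measure proportionate to the assessed risk level."""
    if risk is Risk.HIGH:
        # Human-in-the-loop: a person must sign off before the output is used.
        return f"QUEUED_FOR_HUMAN_REVIEW: {model_output}"
    if risk is Risk.MEDIUM:
        # Lighter-touch control, e.g. automated policy checks plus logging.
        return f"AUTO_CHECKED: {model_output}"
    return model_output  # low-risk output passes through unmodified

print(route_output(Risk.HIGH, "loan application declined"))
```

The design point is simply that the control gets heavier as the assessed risk rises; a real framework would define the tiers, review queues and audit trails in far more detail.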
Strong data privacy protection is the third element in improving AI governance. AI models may use publicly available texts, including personal data, and this is in conflict with the data minimization principle. Furthermore, there is a data subject rights issue. When users unintentionally feed personal data into some generative AI models, the information becomes embedded in large language models and cannot be easily erased.
“Hong Kong’s data remains largely open to global access, with few restrictions on cross-border data flows. Such a level of exposure necessitates a robust risk management framework,” suggests Zhu.
The Deloitte partner says that because Hong Kong companies can access both international and local AI models, apps and agents, malicious actors could exploit these tools and data, increasing the risk of privacy violations and deepfake-related crimes.
Kwon believes that from a data protection perspective, companies should carefully assess the risks associated with AI systems at the outset.
The overarching law governing data protection in Hong Kong is the Personal Data (Privacy) Ordinance. Enterprises and organizations are expected to erase personal data in an AI system when it is no longer needed for AI development or use, in compliance with the data protection principles under the ordinance.
In April, the Digital Policy Office issued the Hong Kong Generative Artificial Intelligence Technical and Application Guideline to address risks such as data leakage, model bias and misinformation. The office had earlier introduced the Ethical Artificial Intelligence Framework in 2023, which requires government bureaus and departments to incorporate ethical elements when adopting AI and big data analysis to plan, design and implement information technology projects or services.
Last year, the Office of the Privacy Commissioner for Personal Data published Artificial Intelligence: Model Personal Data Protection Framework. Based on general business processes, the framework provides recommendations to help organizations procure, implement, and use AI systems that involve personal data.
Bridging AI standards
Internationally, there is still no unified standard for governing AI.
Experts say Hong Kong should proactively formulate global AI governance standards by stepping up its engagement with global organizations and promoting AI conferences in Hong Kong.
In Fan’s view, Hong Kong should actively participate in activities launched by multilateral organizations like the Organization for Economic Cooperation and Development and the United Nations through bilateral partnerships to help the city stay ahead on best practices, data flows, and research initiatives.
Hong Kong could leverage international mechanisms like the Asian-African Legal Consultative Organization, which has established a regional arbitration center in the city, to engage Global South countries and others to achieve a consensus on AI governance, says Chan.
Hong Kong has legal experts who are familiar with common law, continental law and the Islamic legal system who could help draft an AI regulatory standard that would be recognized internationally, he says.
Zhu affirms that Hong Kong’s unique position makes it an ideal contributor to international AI governance efforts. “By combining the advantages of Western technological frameworks and Chinese regulatory standards, the city could play a contributing role in shaping global AI standards,” Zhu says.
In the rapidly evolving world of mobile computing, Arm Holdings has unveiled its latest innovation, the Lumex Compute Subsystem (CSS), a platform designed to supercharge on-device artificial intelligence capabilities in smartphones, wearables, and other consumer devices. Announced on September 10, 2025, this new architecture promises to deliver unprecedented performance gains, enabling AI tasks to run locally without relying on cloud servers. By integrating advanced CPUs, GPUs, and system interconnects optimized for AI workloads, Lumex addresses the growing demand for privacy-focused, real-time intelligence in everyday gadgets.
At the heart of Lumex are SME2-enabled Armv9.3 cores, which support scalable matrix extensions crucial for handling complex AI models. These cores, paired with the new Mali G1-Ultra GPU, offer up to 5x faster AI processing compared to previous generations, according to details shared in Arm’s official newsroom announcement. The platform also incorporates a redesigned System Interconnect and System Memory Management Unit, reducing latency by as much as 75% to ensure smoother operation of AI-driven features like real-time language translation or augmented reality overlays.
Architectural Innovations Driving Efficiency
Beyond raw power, Lumex emphasizes energy efficiency, a critical factor for battery-constrained mobile devices. The subsystem’s channelized architecture prioritizes quality-of-service for AI traffic, allowing developers to run larger models on-device without excessive power draw. As reported by The Register, this design represents Arm’s strategic pivot toward CPU-based AI acceleration, distinguishing it from competitors who lean heavily on dedicated neural processing units.
Industry analysts note that Lumex’s four tailored variants, built on advanced 3nm processes, cater to a range of devices from flagship smartphones to smartwatches. This flexibility could accelerate adoption by chipmakers like Qualcomm and MediaTek, who license Arm’s designs. Posts on X from tech enthusiasts, including those highlighting Arm’s collaboration with frameworks like KleidiAI, underscore the platform’s developer-friendly tools that integrate seamlessly with major operating systems, enabling apps to leverage on-device AI from launch.
Implications for AI in Consumer Tech
The push for on-device AI aligns with broader industry trends toward data privacy and reduced latency. Unlike cloud-dependent systems, Lumex allows for “smarter, faster, more personal AI,” as described by Reuters, potentially transforming user experiences in gaming and real-time analytics. For instance, the platform’s double-digit IPC gains (an estimated 20 percent performance uplift with 9 percent better efficiency) could enable immersive graphics in mobile games while processing AI tasks like object recognition in the background.
However, challenges remain. Integrating such advanced hardware requires ecosystem support, and Arm has been proactive, working with developers to optimize software frameworks for the new hardware. Recent news from HotHardware emphasizes how Lumex’s GPU enhancements, including ray tracing support, position it as a boon for flagship devices, potentially appearing in next year’s smartphones.
Market Impact and Future Outlook
Arm’s dominance in mobile chip design—powering over 95% of smartphones—gives Lumex a strong foothold. According to Silicon Republic, this launch comes amid intensifying competition from rivals like Apple and Google, who are also advancing on-device AI. X discussions, such as those from Arm’s own account, highlight up to 5x AI speedups, fueling speculation about its role in emerging tech like AI agents in wearables.
Looking ahead, Lumex could reshape how AI integrates into daily life, from personalized assistants to secure edge computing. Yet, as Liliputing points out, success hinges on software ecosystems catching up. With Arm betting big on this platform, it may well define the next era of mobile innovation, balancing power, efficiency, and accessibility for billions of users worldwide.
Google Cloud CEO Thomas Kurian paints a rosy picture for the cloud service provider. During a Goldman Sachs technology conference in San Francisco, he said that the company has approximately $106 billion in contracts outstanding. According to him, more than half of that can be converted into revenue in the next two years.
In the second quarter of 2025, parent company Alphabet reported $13.6 billion in revenue for Google Cloud, an increase of 32 percent over the previous year. If the forecast is correct, according to The Register, this means that the cloud service provider could add around $53 billion in additional revenue by 2027.
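The back-of-the-envelope arithmetic behind that figure is straightforward, as this small illustrative snippet shows (the 50 percent conversion share takes the article's “more than half” at its lower bound):

```python
# Back-of-the-envelope check of the backlog-to-revenue figure cited above.
backlog = 106e9           # contracts outstanding, in USD
convertible_share = 0.5   # "more than half" convertible within two years (lower bound)
additional_revenue = backlog * convertible_share
print(f"~${additional_revenue / 1e9:.0f}B in additional revenue by 2027")
```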
Google Cloud’s market position is often compared to that of its biggest rivals. Microsoft reported $75 billion in annual Azure revenue this year, while AWS recorded $30.9 billion in the same quarter, up 17.5 percent year-on-year.
Faster transition to the cloud
Kurian emphasized that many companies still run IT systems on-premises. He expects the transition to the cloud to accelerate, with artificial intelligence playing a decisive role. Increasingly, customers are looking for suppliers who can help transform their business operations with AI applications, rather than just hosting services.
Google claims to have an advantage in this regard thanks to its own investments in AI infrastructure. Its systems are said to be more energy-efficient and deliver more computing power than those of its competitors. According to Kurian, the storage and network are also designed in such a way that they can easily switch from training to inference.
For investors, the most important thing is how AI is converted into revenue. Kurian mentioned usage-based rates, subscriptions, and value-based models, such as paying per saved service request or higher ad conversions. In addition, AI use leads to increased purchases of security and data services.
According to Kurian, 65 percent of customers now use Google Cloud AI tools. On average, this group purchases more products than organizations that do not yet use AI. Examples of applications include digital product development, customer service, back-office processes, and IT support. For example, Google helped Warner Bros. re-edit The Wizard of Oz for the Las Vegas Sphere, and Home Depot uses AI to answer HR questions more quickly.
Kurian’s message: cloud infrastructure only becomes truly profitable when companies purchase AI services on top of it. With this, Google Cloud wants to position itself firmly in the next phase of the cloud market.
In a move that could reshape drug discovery, researchers at Harvard Medical School have designed an artificial intelligence model capable of identifying treatments that reverse disease states in cells.
Unlike traditional approaches that typically test one protein target or drug at a time in hopes of identifying an effective treatment, the new model, called PDGrapher and available for free, focuses on multiple drivers of disease and identifies the genes most likely to revert diseased cells back to healthy function.
The tool also identifies the best single or combined targets for treatments that correct the disease process. The work, described Sept. 9 in Nature Biomedical Engineering, was supported in part by federal funding.
By zeroing in on the targets most likely to reverse disease, the new approach could speed up drug discovery and design and unlock therapies for conditions that have long eluded traditional methods, the researchers noted.
“Traditional drug discovery resembles tasting hundreds of prepared dishes to find one that happens to taste perfect,” said study senior author Marinka Zitnik, associate professor of biomedical informatics in the Blavatnik Institute at HMS. “PDGrapher works like a master chef who understands what they want the dish to be and exactly how to combine ingredients to achieve the desired flavor.”
The traditional drug-discovery approach, which focuses on activating or inhibiting a single protein, has succeeded with treatments such as kinase inhibitors, drugs that block certain proteins used by cancer cells to grow and divide. However, Zitnik noted, this discovery paradigm can fall short when diseases are fueled by the interplay of multiple signaling pathways and genes. For example, many breakthrough drugs discovered in recent decades, such as immune checkpoint inhibitors and CAR T-cell therapies, work by targeting broader disease processes in cells rather than a single protein.
The approach enabled by PDGrapher, Zitnik said, looks at the bigger picture to find compounds that can actually reverse signs of disease in cells, even if scientists don’t yet know exactly which molecules those compounds may be acting on.
How PDGrapher works: Mapping complex linkages and effects
PDGrapher is a type of artificial intelligence tool called a graph neural network. This tool doesn’t just look at individual data points but at the connections that exist between these data points and the effects they have on one another.
In the context of biology and drug discovery, this approach is used to map the relationship between various genes, proteins, and signaling pathways inside cells and predict the best combination of therapies that would correct the underlying dysfunction of a cell to restore healthy cell behavior. Instead of exhaustively testing compounds from large drug databases, the new model focuses on drug combinations that are most likely to reverse disease.
PDGrapher points to parts of the cell that might be driving disease. Next, it simulates what would happen if these cellular parts were turned off or dialed down. The AI model then estimates whether a diseased cell would return to a healthy state if certain targets were “hit.”
“Instead of testing every possible recipe, PDGrapher asks: ‘Which mix of ingredients will turn this bland or overly salty dish into a perfectly balanced meal?’” Zitnik said.
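For readers curious about the underlying idea, the following toy sketch illustrates graph-based perturbation ranking in the same spirit: silence one candidate gene, propagate the effect through a small gene-interaction graph, and rank candidates by how close the resulting profile lands to a healthy reference. The graph, expression values and propagation rule here are entirely invented for illustration; PDGrapher itself is a trained graph neural network, not this hand-written heuristic.

```python
import numpy as np

# Hypothetical gene-interaction graph (edge row -> column): A->B, A->C, B->C, C->D.
genes = ["A", "B", "C", "D"]
adj = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=float)

healthy = np.array([1.0, 1.0, 1.0, 1.0])    # reference expression profile
diseased = np.array([3.0, 2.5, 2.0, 1.5])   # over-expressed disease state (made up)

def propagate(expression, adj, steps=2, damping=0.5):
    """Simple message passing: each gene drifts toward the mean signal
    of its upstream neighbours over a few steps."""
    x = expression.copy()
    indegree = np.maximum(adj.sum(axis=0), 1.0)   # avoid division by zero
    for _ in range(steps):
        incoming = adj.T @ x / indegree
        x = (1 - damping) * x + damping * incoming
    return x

def score_knockdown(target):
    """Silence one gene, propagate, and measure distance to the healthy profile."""
    perturbed = diseased.copy()
    perturbed[target] = 0.0
    return np.linalg.norm(propagate(perturbed, adj) - healthy)

scores = {genes[i]: score_knockdown(i) for i in range(len(genes))}
best = min(scores, key=scores.get)
print("distance to healthy after each knockdown:", scores)
print("top-ranked target:", best)
```

The real model learns which perturbations close the gap from data on cells before and after treatment, rather than relying on a fixed propagation rule; this sketch only conveys the "perturb, propagate, compare to healthy" loop.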
Advantages of the new model
The researchers trained the tool on a dataset of diseased cells before and after treatment so that it could figure out which genes to target to shift cells from a diseased state to a healthy one.
Next, they tested it on 19 datasets spanning 11 types of cancer, using both genetic and drug-based experiments, asking the tool to predict various treatment options for cell samples it had not seen before and for cancer types it had not encountered.
The tool accurately predicted drug targets already known to work but that were deliberately excluded during training to ensure the model did not simply recall the right answers. It also identified additional candidates supported by emerging evidence. The model also highlighted KDR (VEGFR2) as a target for non-small cell lung cancer, aligning with clinical evidence. It also identified TOP2A — an enzyme already targeted by approved chemotherapies — as a treatment target in certain tumors, adding to evidence from recent preclinical studies that TOP2A inhibition may be used to curb the spread of metastases in non-small cell lung cancer.
The model showed superior accuracy and efficiency compared with similar tools. In previously unseen datasets, it ranked the correct therapeutic targets up to 35 percent higher than other models did and delivered results up to 25 times faster than comparable AI approaches.
What this AI advance spells for the future of medicine
The new approach could optimize the way new drugs are designed, the researchers said. This is because instead of trying to predict how every possible change would affect a cell and then looking for a useful drug, PDGrapher right away seeks which specific targets can reverse a disease trait. This makes it faster to test ideas and lets researchers focus on fewer promising targets.
This tool could be especially useful for complex diseases fueled by multiple pathways, such as cancer, in which tumors can outsmart drugs that hit just one target. Because PDGrapher identifies multiple targets involved in a disease, it could help circumvent this problem.
Additionally, the researchers said that after careful testing to validate the model, it could one day be used to analyze a patient’s cellular profile and help design individualized treatment combinations.
Finally, because PDGrapher identifies cause-effect biological drivers of disease, it could help researchers understand why certain drug combinations work — offering new biological insights that could propel biomedical discovery even further.
The team is currently using this model to tackle brain diseases such as Parkinson’s and Alzheimer’s, looking at how cells behave in disease and spotting genes that could help restore them to health. The researchers are also collaborating with colleagues at the Center for XDP at Massachusetts General Hospital to identify new drug targets and map which genes or pairs of genes could be affected by treatments for X-linked Dystonia-Parkinsonism, a rare inherited neurodegenerative disorder.
“Our ultimate goal is to create a clear road map of possible ways to reverse disease at the cellular level,” Zitnik said.
Reference: Gonzalez G, Lin X, Herath I, Veselkov K, Bronstein M, Zitnik M. Combinatorial prediction of therapeutic perturbations using causally inspired neural networks. Nat Biomed Eng. 2025:1-18. doi: 10.1038/s41551-025-01481-x