Business
Navigating Hong Kong’s New Generative AI Guidelines: Key Considerations For Businesses – New Technology
On 15 April 2025, Hong Kong’s Digital Policy Office (“DPO”) took a significant step in shaping the future of artificial intelligence (“AI”) governance with the release of the Generative Artificial Intelligence Technical and Application Guideline (the “Guideline”).
Developed in collaboration with the Hong Kong Generative AI Research and Development Center, this framework aims to balance innovation with accountability, offering a roadmap for businesses to adopt generative AI responsibly.
Five Dimensions of Governance
The Guideline emphasizes five pillars of ethical AI
governance:
- Personal Data Privacy: Key aspects of privacy in AI include data collection, accuracy, retention, usage, security, transparency, and access. Ensuring privacy and security throughout the AI lifecycle is crucial for protecting individual rights, maintaining public trust, and supporting the sustainable development of the AI industry.
- Intellectual Property Protection: The rapid development of generative AI presents both opportunities and challenges to intellectual property systems, particularly concerning the use of copyrighted materials for AI training.
- Crime Prevention: Generative AI enhances crime prevention and control but also introduces governance challenges, such as the misuse of deepfakes for fraud and misinformation. Effective implementation requires ethical considerations, transparency, and public trust to align with societal values.
- Reliability and Trustworthiness: The credibility of generative AI hinges on its ability to consistently produce accurate and reliable results, with a robust framework ensuring accountability for developers, operators, and users. However, the complexity and opacity of its technical architecture pose significant challenges to maintaining trustworthiness and effectively addressing issues such as algorithmic bias and erroneous outputs.
- System Security: System security in generative AI is crucial to prevent unauthorized access and data compromise, but threats such as reverse attacks and data poisoning pose significant risks. Implementing strict data verification, anomaly detection, and secure transmission channels can help mitigate these threats and ensure safe and stable AI operations.
Practical Guide for Stakeholders
The Guideline categorizes obligations for three key groups:
Technology Developers
- Establish a well-structured generative AI development team, including a data team, an algorithm engineering team, a quality control team, and a compliance team.
- Develop policies on when to accept AI-generated content, such as requiring users to double-check AI-generated materials, verify references, and ensure correctness before use.
- Follow higher standards and apply independent evaluation mechanisms from the development stage onwards.
Service Providers
- Establish a responsible service framework to ensure service compliance, data security, system security and system credibility.
- Develop responsible processes, including clear financial and service security agreements, comprehensive risk assessments at various stages of service development, small-scale pilot projects before large-scale rollout, continuous service improvement, and transparent communication with stakeholders.
Service Users
- Use generative AI services in a lawful and compliant manner while maintaining independent judgment.
- Understand responsibilities and obligations, such as privacy, security, and legal compliance, before engaging with generative AI services.
- Explicitly indicate whether generative AI has been used in content generation or decision-making to ensure transparency and accountability.
- Familiarize themselves with the privacy policies of generative AI services regarding data protection, use and sharing before using the services.
- Assess and take responsibility for the content produced by generative AI, and disclose its source when making it public.
- Respect intellectual property rights and apply necessary technical measures to avoid generating content that constitutes the whole or substantial copying of copyrighted works, to prevent copyright disputes.
The Guideline aims to strike a balance between fostering AI
innovation and ensuring responsible deployment. It establishes a
governance framework tailored to Hong Kong’s unique
environment, addressing potential risks while encouraging the
widespread adoption of generative AI. Businesses operating in Hong
Kong should take note of the Guideline to ensure compliance and
capitalize on the opportunities presented by AI technologies.
While the Guideline does not have the force of law, it serves as a timely reminder that the deployment and use of AI tools carries a number of legal and ethical risks. In most jurisdictions, AI is currently lightly regulated or entirely unregulated. As a fast-developing technology with a broad range of potential applications, its risks are difficult to predict. Nonetheless, the Guideline is a useful first step in laying out some fundamental principles to adopt when deploying or using AI tools.
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
OBR says pension triple lock to cost three times initial estimate
Cost of living correspondent
The cost of the state pension triple lock is forecast to be three times higher by the end of the decade than originally estimated, according to the government’s official forecaster.
The triple lock, which came into force in 2011, means that the state pension rises each year in line with either inflation, wage increases or 2.5% – whichever is highest.
The Office for Budget Responsibility (OBR) said the annual cost is estimated to reach £15.5bn by 2030.
Overall, the OBR said the UK’s public finances were in a “relatively vulnerable position” owing to pressure from recent government U-turns on planned spending cuts.
The recent reversal of the proposed welfare bill, on top of restoring winter fuel payments for most claimants, has contributed to a continued rise in government debt, according to the report.
It said: “Efforts to put the UK’s public finances on a more sustainable footing have met with only limited and temporary success in recent years. In the aftermath of the shocks, debt has also continued to rise and borrowing remained elevated because governments have reversed plans to consolidate the public finances.
“Planned tax rises have been reversed, and, more significantly, planned spending reductions have been abandoned.”
Spending on the state pension has steadily risen, the OBR said, because the triple lock and a growing number of people above the state pension age were contributing to costs.
It added: “Due to inflation and earnings volatility over its first two decades in operation, the triple lock has cost around three times more than initial expectations.”
Pensioner protection
The UK’s state pension is the second-largest item in the government budget after health.
In 2011, the Conservative-Liberal Democrat coalition brought in the triple lock to ensure the value of the state pension was not overtaken by the increase in the cost of living or the incomes of working people.
Since then, the non-earnings-linked element of the lock has been triggered “in eight of the 13 years to date”, the OBR pointed out.
That was because inflation “has turned out to be significantly more volatile” than expected.
In April 2025, the earnings link meant the state pension increased by 4.1%, making it worth:
- £230.25 a week for the full, new flat-rate state pension (for those who reached state pension age after April 2016) – a rise of £472 a year
- £176.45 a week for the full, old basic state pension (for those who reached state pension age before April 2016) – a rise of £363 a year
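The uprating rule behind these figures reduces to a one-line formula: the pension rises by the highest of inflation, earnings growth, or 2.5%. The sketch below is illustrative only, not official DWP methodology; the 1.7% inflation figure and the £221.20 pre-rise weekly amount are assumptions chosen to roughly reproduce the April 2025 figures quoted above.

```python
def triple_lock_rate(inflation: float, earnings_growth: float) -> float:
    """Annual uprating rate under the triple lock: the highest of
    inflation, average earnings growth, or the 2.5% floor."""
    return max(inflation, earnings_growth, 0.025)

# April 2025: earnings growth (4.1%) was the highest of the three.
# The 1.7% inflation value here is an illustrative assumption.
rate = triple_lock_rate(inflation=0.017, earnings_growth=0.041)

old_weekly = 221.20  # assumed full new state pension before the rise
new_weekly = round(old_weekly * (1 + rate), 2)
annual_rise = round((new_weekly - old_weekly) * 52)

print(rate)         # 0.041
print(new_weekly)   # 230.27 — close to the £230.25 quoted above
print(annual_rise)  # 472
```

Note that when both inflation and earnings growth fall below 2.5%, the floor binds, which is one reason the lock has proved costlier than a simple earnings link.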
Chancellor Rachel Reeves has said the Labour government will keep the triple lock until the end of the current Parliament, in line with its manifesto promise.
However, before and since that promise, there has been intense debate over the cost of the triple lock and whether it is justified.
Last week, the influential Institute for Fiscal Studies, an independent economic think-tank, suggested the triple lock be scrapped as part of a wider overhaul of pensions.
It argued that the state pension should rise in line with prices, but that its cost should be linked to a target level of economy-wide average earnings.
Pensioner groups say many older people face high living costs and need the protection of the triple lock to avoid falling further into financial difficulty, especially as the amount actually paid is far from the most generous state pension in Europe.
AI in healthcare: What business leaders need to know
Chronic condition management and early detection
While clinical judgment by an actual human is still critical to ensuring patients receive the best possible care, AI can support clinicians and their decision-making by providing a more complete view of patient health.
For instance, radiologists are now using AI to more
Earlier intervention in the case of Berger’s disease and other kidney conditions significantly impacts the economic burden of the disease, potentially saving plan sponsors between
Automating administrative tasks
One of AI’s greatest assets is its ability to quickly assess large volumes of data to optimize clinical and administrative time. Medical practices are utilizing AI-enabled technology to improve administrative efficiency and patient care. Automated documentation tools can reduce the time physicians spend on
Administrative expenses account for 15% to 25% of
AI’s ability to process vast quantities of data also benefits health plan administrators. Plan sponsors can implement AI tools that provide members with personalized treatment and support, identify health plans during enrollment that best fit specific member needs and determine additional benefits for members and their families.
Overcoming barriers to adoption
Despite its potential to reduce healthcare costs, improve patient outcomes and enhance the member experience, AI adoption remains slow. The initial investment required to implement AI can be high, including the cost of the technology, staff training, system integration and maintenance of AI models, not to mention potential liability concerns.
When considering utilizing AI for the purposes of improving efficiency and outcomes, organizations in the healthcare industry are:
- Analyzing how AI solutions can support their population, and which modalities are likely to be (or have proven to be) successful
- Consulting with internal stakeholders from the beginning to identify potential challenges to adoption
- Evaluating potential cost savings and member outcomes
- Considering the quality and source of data used to train AI models
- Ensuring AI tools meet HIPAA requirements
AI in healthcare is no longer an idea of the future. It is here and already making significant improvements in patient outcomes. However, AI is dependent on data quality and clearly defined learning parameters to eliminate potential bias and make accurate predictions. Organizations must also weigh other risks associated with AI, such as informed consent issues that may arise if patients do not fully understand how their information is being used.