
Navigating Hong Kong’s New Generative AI Guidelines: Key Considerations For Businesses


Hauzen




Hauzen LLP is a Hong Kong law firm with a reputation for excellence, in-depth knowledge of the financial services and fintech markets, and lateral-thinking lawyers with real business experience.

We have been recognized by IFLR1000, Asian Legal Business and Asialaw in our areas of expertise and for our reputation for excellence.

We operate in association with AnJie Broad, a leading Mainland Chinese law firm consistently ranked by Chambers Global in the top bands for insurance law, competition law, intellectual property, litigation, and corporate law.

We focus on the laws and regulations governing the Hong Kong financial markets, including the fast-growing area of digital finance.

Our partners and senior lawyers are renowned legal experts in their respective fields and have first-hand knowledge of the legal and business issues that market participants face.




On 15 April 2025, Hong Kong’s Digital Policy Office (“DPO”) took a significant step in shaping the future of artificial intelligence (“AI”) governance with the release of the Generative Artificial Intelligence Technical and Application Guideline (the “Guideline”). Developed in collaboration with the Hong Kong Generative AI Research and Development Center, this framework aims to balance innovation with accountability, offering a roadmap for businesses to adopt generative AI responsibly.

Five Dimensions of Governance

The Guideline emphasizes five pillars of ethical AI
governance:

  1. Personal Data Privacy: Key aspects of privacy
    in AI include data collection, accuracy, retention, usage,
    security, transparency, and access. Ensuring privacy and security
    throughout the AI lifecycle is crucial for protecting individual
    rights, maintaining public trust, and supporting the sustainable
    development of the AI industry.

  2. Intellectual Property Protection: The rapid
    development of generative AI presents both opportunities and
    challenges to intellectual property systems, particularly
    concerning the use of copyrighted materials for AI training.

  3. Crime Prevention: Generative AI enhances crime
    prevention and control but also introduces governance challenges,
    such as the misuse of deepfakes for fraud and misinformation.
    Effective implementation requires ethical considerations,
    transparency, and public trust to align with societal values.

  4. Reliability and Trustworthiness: The
    credibility of generative AI hinges on its ability to consistently
    produce accurate and reliable results, with a robust framework
    ensuring accountability for developers, operators, and users.
    However, the complexity and opacity of its technical architecture
    pose significant challenges to maintaining trustworthiness and
    effectively addressing issues like algorithmic biases and erroneous
    outputs.

  5. System Security: System security in generative
    AI is crucial to prevent unauthorized access and data compromise,
    but threats such as reverse attacks and data poisoning pose
    significant risks. Implementing strict data verification, anomaly
    detection, and secure transmission channels can help mitigate these
    threats and ensure safe and stable AI operations.
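The data-verification and anomaly-detection mitigations named under the fifth pillar can be illustrated with a minimal sketch of our own (nothing here is prescribed by the Guideline): a simple z-score check that flags records deviating sharply from the rest of a dataset, one crude defence against data poisoning.

```python
import statistics

def flag_anomalies(values: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return the indices of values whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

# A single poisoned record stands out against otherwise well-behaved data.
data = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.03, 0.97, 50.0]
print(flag_anomalies(data))  # → [9]
```

A threshold of 2.5 is used here because a single extreme outlier inflates the standard deviation, so the conventional 3.0 cut-off could miss it; production systems would typically use more robust, median-based statistics.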

Practical Guide for Stakeholders

The Guideline categorizes obligations for three key groups:

Technology Developers

  • Establish a well-structured generative AI development team,
    including a data team, an algorithm engineering team, a quality
    control team, and a compliance team.

  • Develop policies on when to accept AI-generated content, such
    as requiring users to double-check AI-generated materials, verify
    references, and ensure correctness before usage.

  • Follow higher standards and apply independent evaluation
    mechanisms from the development stage.

Service Providers

  • Establish a responsible service framework to ensure service
    compliance, data security, system security and system
    credibility.

  • Develop responsible processes, including clear financial and
    service security agreements, comprehensive risk assessments at
    various stages of service development, small-scale pilot projects
    before rolling out services on a large scale, continuous service
    improvement and transparent communication with stakeholders.

Service Users

  • Use generative AI services in a lawful and compliant manner
    while maintaining independent discretion.

  • Understand responsibilities and obligations such as privacy,
    security, and legal compliance before engaging with generative AI
    services. Explicitly indicate whether generative AI has been used
    in content generation or decision-making to ensure transparency and
    accountability.

  • Familiarize themselves with the privacy policies of generative
    AI services regarding data protection, use and sharing before using
    the services. Assess and be responsible for the content produced by
    generative AI and disclose its source when making it public.

  • Respect intellectual property rights and apply necessary
    technical measures to avoid generating content that constitutes the
    whole or substantial copying of copyrighted works to prevent
    copyright disputes.

The Guideline aims to strike a balance between fostering AI
innovation and ensuring responsible deployment. It establishes a
governance framework tailored to Hong Kong’s unique
environment, addressing potential risks while encouraging the
widespread adoption of generative AI. Businesses operating in Hong
Kong should take note of the Guideline to ensure compliance and
capitalize on the opportunities presented by AI technologies.

While the Guideline does not have the force of law, it serves as a timely reminder that the deployment and use of AI tools carry a number of legal and ethical risks. In most jurisdictions currently, AI is lightly regulated or completely unregulated. As a fast-developing phenomenon, AI's risks are difficult to predict, given its broad range of potential applications. Nonetheless, the Guideline is a useful first step in laying out some fundamental principles to adopt in deploying or using AI tools.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.




OBR says pension triple lock to cost three times initial estimate


Kevin Peachey

Cost of living correspondent

Getty Images: an older man and woman sit at a kitchen table with paperwork and a laptop in front of them.

The cost of the state pension triple lock is forecast to be three times higher by the end of the decade than its original estimate, according to the government’s official forecaster.

The triple lock, which came into force in 2011, means that the state pension rises each year in line with either inflation, wage increases or 2.5% – whichever is highest.

The Office for Budget Responsibility (OBR) said the annual cost is estimated to reach £15.5bn by 2030.

Overall, the OBR said the UK’s public finances were in a “relatively vulnerable position” owing to pressure from recent government U-turns on planned spending cuts.

The recent reversal of the proposed welfare bill, on top of restoring winter fuel payments for most claimants, has contributed to a continued rise in government debt, according to the report.

It said: “Efforts to put the UK’s public finances on a more sustainable footing have met with only limited and temporary success in recent years. In the aftermath of the shocks, debt has also continued to rise and borrowing remained elevated because governments have reversed plans to consolidate the public finances.

“Planned tax rises have been reversed, and, more significantly, planned spending reductions have been abandoned.”

Spending on the state pension has steadily risen, the OBR said, because the triple lock and a growing number of people above the state pension age were both contributing to costs.

It added: “Due to inflation and earnings volatility over its first two decades in operation, the triple lock has cost around three times more than initial expectations.”

Pensioner protection

The UK’s state pension is the second-largest item in the government budget after health.

In 2011, the Conservative-Liberal Democrat coalition brought in the triple lock to ensure the value of the state pension was not overtaken by the increase in the cost of living or the incomes of working people.

Since then, the non-earnings-linked element of the lock has been triggered “in eight of the 13 years to date”, the OBR pointed out.

That was because inflation “has turned out to be significantly more volatile” than expected.

In April 2025, the earnings link meant the state pension increased by 4.1%, making it worth:

  • £230.25 a week for the full, new flat-rate state pension (for those who reached state pension age after April 2016) – a rise of £472 a year
  • £176.45 a week for the full, old basic state pension (for those who reached state pension age before April 2016) – a rise of £363 a year
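The uprating mechanics behind these figures are simple to sketch. The following is an illustration only: the £221.18 starting figure is back-calculated from the £230.25 quoted above and is not an official rate, and the 1.7% inflation figure is a hypothetical input.

```python
def triple_lock_rate(cpi_inflation: float, earnings_growth: float) -> float:
    """Highest of CPI inflation, average earnings growth, or the 2.5% floor."""
    return max(cpi_inflation, earnings_growth, 0.025)

def uprate(weekly_amount: float, cpi: float, earnings: float) -> float:
    """Apply the triple lock to a weekly pension amount."""
    return round(weekly_amount * (1 + triple_lock_rate(cpi, earnings)), 2)

# With inflation below 2.5% and earnings growth at 4.1%, earnings win
# (as they did for the April 2025 uprating).
print(triple_lock_rate(0.017, 0.041))  # → 0.041

# A £221.18 weekly pension uprated by 4.1% rises to £230.25 a week,
# an annual increase of about £472 — matching the figures quoted above.
new_weekly = uprate(221.18, 0.017, 0.041)
print(new_weekly)                           # → 230.25
print(round((new_weekly - 221.18) * 52))    # → 472
```

The 2.5% floor is what distinguishes the triple lock from a simple earnings or prices link: in years when both inflation and wage growth are low, pensions still rise by at least 2.5%.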

Chancellor Rachel Reeves has said the Labour government will keep the triple lock until the end of the current Parliament.

However, before and since that manifesto promise, there has been intense debate over the cost of the triple lock and whether it is justified.

Last week, the influential Institute for Fiscal Studies, an independent economic think-tank, suggested the triple lock be scrapped as part of a wider overhaul of pensions.

It argued that the state pension should continue to rise in line with prices, but that its overall cost should be linked to a target level of economy-wide average earnings.

Pensioner groups say many older people face high living costs and need the protection of the triple lock to avoid falling further into financial difficulty, especially because the amount actually paid is far from the most generous state pension in Europe.




Pluriva Invests €250K in AI Virtual Assistant for Romanian Firms


AI in healthcare: What business leaders need to know



Artificial intelligence may seem like a new, untested technology, but the reality is that AI is already integrated into our everyday lives. For instance, Siri, Amazon Alexa and Google Assistant use natural language processing (NLP) and natural language understanding to analyze and respond to voice commands. Emails and text messages use NLP for predictive text and autocorrect.

The rapid development of AI brings with it enormous concerns, especially regarding its applications in healthcare. However, AI is already transforming patient care in positive ways, for example, by making it easier for clinicians to diagnose and treat illness sooner, potentially reducing the need for costly specialized treatment or hospitalization.


Chronic condition management and early detection

While clinical judgment by an actual human is still critical to ensuring patients receive the best possible care, AI can support clinicians and their decision-making by providing a more complete view of patient health.

For instance, radiologists are now using AI to more accurately analyze X-rays, MRIs, CT scans and mammograms. AI’s sensitivity to distinguish slight changes from image to image can help detect chronic diseases earlier and more accurately. In one study, researchers found an AI system could predict diagnoses of Berger’s kidney disease more accurately than trained nephrologists. In an attempt to slow the progression of kidney disease among veterans, such as Berger’s disease, the Veterans Administration partnered with DeepMind, an AI research lab, to identify risk predictors for patient deterioration and alert clinicians early. DeepMind developed an AI model based on electronic health records from the Veterans Administration that identified 90% of all acute kidney injuries that required subsequent dialysis, with a lead time of up to 48 hours. 

Earlier intervention in the case of Berger’s disease and other kidney conditions significantly impacts the economic burden of the disease, potentially saving plan sponsors between $276.80-$480.79 per member per month. 


Automating administrative tasks

One of AI’s greatest assets is its ability to quickly assess large volumes of data to optimize clinical and administrative time. Medical practices are utilizing AI-enabled technology to improve administrative efficiency and patient care. Automated documentation tools can reduce the time physicians spend on patient charting by 72%, which means physicians can spend more time treating and diagnosing plan members. AI can also integrate with electronic health records to pull relevant data, identify missing information and complete and submit prior authorization forms on behalf of providers.
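The prior-authorization workflow described above boils down to pulling the relevant fields from a record, flagging gaps, and assembling a form. The sketch below is purely illustrative: the field names and the `REQUIRED_FIELDS` list are hypothetical, not drawn from any real EHR system or payer specification.

```python
REQUIRED_FIELDS = ["patient_id", "diagnosis_code", "procedure_code", "provider_npi"]

def prepare_prior_auth(ehr_record: dict) -> dict:
    """Pull required fields from an EHR record and flag any that are missing."""
    form = {field: ehr_record.get(field) for field in REQUIRED_FIELDS}
    missing = [field for field, value in form.items() if value is None]
    return {"form": form, "missing": missing, "ready": not missing}

# A record missing its provider identifier is flagged rather than submitted.
record = {"patient_id": "P-1001", "diagnosis_code": "N02.8", "procedure_code": "90935"}
result = prepare_prior_auth(record)
print(result["missing"])  # → ['provider_npi']
print(result["ready"])    # → False
```

In practice, the AI component sits in the extraction step (mapping free-text clinical notes onto structured fields); once the fields are structured, the completeness check and submission are conventional automation.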

Administrative expenses account for 15% to 25% of total healthcare expenditures. Reducing administrative overhead and claims errors, along with early diagnosis and treatment of chronic disease, can improve member outcomes and produce impressive cost savings for plan sponsors. AI has the potential to save $265 billion in overall healthcare costs by eliminating administrative overhead and documentation errors.

AI’s ability to process vast quantities of data also benefits health plan administrators. Plan sponsors can implement AI tools that provide members with personalized treatment and support, identify health plans during enrollment that best fit specific member needs and determine additional benefits for members and their families. 


Overcoming barriers to adoption

Despite its potential to reduce healthcare costs and improve patient outcomes and member experience, AI adoption is still slow. The initial investment required to implement AI can be high, and it includes the cost of the technology, staff training, system integration and maintenance of AI models, not to mention potential liability concerns.

When considering utilizing AI for the purposes of improving efficiency and outcomes, organizations in the healthcare industry are: 

  • Analyzing how AI solutions can support their population, and which modalities are likely to be (or have proven to be) successful
  • Consulting with internal stakeholders from the beginning to identify potential challenges to adoption
  • Evaluating potential cost savings and member outcomes
  • Considering the quality and source of data used to train AI models
  • Ensuring AI tools meet HIPAA requirements

AI in healthcare is no longer an idea of the future. It is here and already making significant improvements in patient outcomes. However, AI is dependent on data quality and clearly defined learning parameters to eliminate potential bias and make accurate predictions. Organizations must also weigh other risks associated with AI, such as informed consent issues that may arise if patients do not fully understand how their information is being used.


