Contracts to manage AI risk – The Royal Gazette


This is the first of a two-part article on how artificial intelligence contracts can be used to manage the development and use risks associated with such transformative technology.

All transformational technology is fraught with risk. However, when it comes to AI, Elon Musk said “we are communing with the devil”.

As a commercial lawyer, I like to think that fables about contracts with the devil exist because contracts are the ultimate risk management tool.

Regulators around the world, including the Bermuda Monetary Authority, fully appreciate the importance of using contracts to mitigate, if not avoid, the risks associated with the development and use of transformative technologies.

The BMA’s 2025 Business Plan signalled future policies concerning the risk-managed use of AI by its registrants.

That plan stated that the BMA was “undertaking a review of the Insurance Code of Conduct and the Operational Cybersecurity Code of Conduct to consider the merits of integrating specific guidelines on the use of AI and machine-learning systems”.

As with the BMA’s regulatory requirements for outsourcing and cybersecurity governance, I fully expect that the evolution of those regulations will include additional risk management guidance on the AI contracts relied upon by Bermuda’s financial services sector.

In that regard, the recent emergence of model contracts for all sectors to manage the many risks of AI has been striking.

Among the variations of AI model contracts that I have consulted, there are two that stand out.

In 2023, Britain’s Society for Computers and Law published a 59-page White Paper titled Artificial Intelligence Contractual Clauses, and recently the Digital Transformation Agency of the Australian Government published [AI] Model Clauses, Version 2.0. Both are excellent.

Both organisations take a pragmatic approach to crafting contractual provisions that specifically address the commercial and legal risks of AI development, commercialisation and use. There is nothing abstractly academic about that guidance.

The commercial risks associated with all transformative technology, including AI, include the risks that:

• The technology doesn’t perform the way that the vendor promised it would

• Due diligence is difficult to undertake on products and vendors that are new to the market

• The solution’s operation may not be compatible, interoperable or easily integrated with legacy systems

• The solution’s performance reliability is yet to be proven

To address those “new-to-market” risks, both the SCL and DTA recommend that AI contracts include terms that:

• Define the operational and functional specifications of the solution in precise and empirically verifiable terms

• Require either a vendor-led AI demonstration or an operational demonstration within the customer’s infrastructure

• Require acceptance testing as a precondition to contract effectiveness and any licence fee payments

• Stipulate a warranty (of reasonable duration) concerning the solution’s “on spec” operation and that requires expedited defect remediation

Where AI is offered as a service rather than as licensed software, the contract should also address the usual risks that are associated with:

• The different variations of cloud or distributed computing

• Any jurisdictional export control restrictions

• Compliance with all privacy laws, including export restrictions

• The service provider’s compliance with all applicable law, including outsourcing and cybersecurity regulations

• Subcontracting restrictions

• A prohibition on the re-export of data to other jurisdictions

Since many AI solutions are powerful search agents that function as scrapers and “crawler bots”, two of the most serious AI risks to address contractually are: the misappropriation of personal (and often confidential) information that the AI solution accesses, views, copies or uses; and the unlicensed reproduction and misappropriation of third-party intellectual property.

As intelligent as AI may appear, it may be unable to identify data and content that is the property of others.
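By way of illustration only (this sketch is not drawn from the SCL or DTA materials), a contract might require a vendor to demonstrate that its crawler honours publishers’ machine-readable permissions before fetching content. A minimal check using Python’s standard robots.txt parser could look like this; the bot name and URLs are hypothetical.

```python
# Minimal sketch: before fetching a page, a compliant crawler checks
# the site's robots.txt to see whether access is permitted.
# The user agent and URLs below are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # downloads and parses the robots.txt file

url = "https://example.com/reports/annual.html"
if rp.can_fetch("ExampleAIBot/1.0", url):
    print(f"Permitted to fetch {url}")
else:
    print(f"robots.txt disallows fetching {url}; skipping")
```

A demonstration of this kind addresses only stated permissions; it does not, of course, resolve whether the fetched content is itself third-party property.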

Based on the AI copyright infringement cases now before the courts in the US and Britain, AI contracts should include broadly drafted third-party non-infringement covenants, as well as indemnities to protect users from such third-party liability. That approach to managing the risk of intellectual property infringement is required for all content or data that AI finds, fetches and brings back to the doorstep.

More specifically, the SCL and DTA suggest that AI contracts include covenants to:

• Ensure that the AI provides only original work

• Ensure that AI does not merely customise, enhance or create derivative works of someone else’s property

• Address whether the service vendor owns the AI or the AI otherwise relies on “open source” software

• Provide that neither the use nor operation of the AI will breach any third-party rights, including any contractual, privacy, intellectual property or statutory rights

Next week, in part two, I will identify additional development and use risks that AI brings, and the contractual terms that are necessary to address those risks.

Duncan Card is a partner at Appleby who specialises in information technology and outsourcing contracts, privacy law and cybersecurity compliance in Bermuda. A copy of this column can be obtained on the Appleby website at www.applebyglobal.com. This column should not be used as a substitute for professional legal advice. Before proceeding with any matters discussed here, consult a lawyer.




South Korea to probe potential human rights abuses in US raid


The South Korean government says it is investigating potential human rights violations during the raid and detention of Korean workers by US authorities.

South Korea has expressed “strong regret” to the US and has officially asked that its citizens’ rights and interests not be infringed during law enforcement proceedings, a presidential spokesperson said on Monday.

More than 300 South Korean workers returned home on Friday after being held for a week following a raid at an electric vehicle battery plant in the US state of Georgia.

The incident has tested ties between the countries, even as South Korean firms are set to invest billions in America under a trade deal to avoid steep US tariffs.

South Korean authorities will work with the relevant companies to “thoroughly investigate any potential human rights violations or other issues”, said the presidential spokesperson during a press briefing.

The raid has raised tensions between the US and South Korea, home to most of those detained, with President Lee Jae-myung warning that it will discourage foreign investment in the US.

He called the situation “bewildering”, adding that it is a common practice for Korean companies to send workers to help set up overseas factories.

Last week, Hyundai said the plant’s opening will be delayed by at least two months.

South Korea’s trade unions have called on Trump to issue an official apology.

On 4 September, around 475 people – mostly South Korean nationals – were arrested at a Hyundai-operated plant, in what marked the largest single-location immigration raid since US President Donald Trump launched a crackdown on illegal migrants earlier this year.

Immigration and Customs Enforcement (ICE) officials said the South Koreans had overstayed their visas or were not permitted to work in the US.

A South Korean worker who witnessed the raid told the BBC of panic and confusion as federal agents descended on the site, with some people being led away in chains.

Trump has said foreign workers sent to the country are “welcome” and he doesn’t want to “frighten off” investors.

The US needs to learn from foreign experts in fields like shipbuilding, chipmaking and computing, Trump said on his Truth Social platform on Sunday.

“We welcome them, we welcome their employees, and we are willing to proudly say we will learn from them, and do even better than them at their own ‘game,’ sometime in the not too distant future,” he said.




OpenAI Execs on the 3 Things Companies Need to Get Right When Using AI


The executives leading the product and engineering efforts for OpenAI’s developer platform say companies must adopt a result-oriented approach to successfully roll out AI to employees.

Olivier Godement and Sherwin Wu head product and engineering for OpenAI’s developer platform, respectively. During an interview on the BG2 podcast that aired Thursday, Godement and Wu shared three tips on how companies can integrate AI.

“Number one is the interesting combination of top-down buy-in and enabling a very clear group, like a tiger team,” Godement said. He added that the team could be a mix of staff from AI providers like OpenAI, as well as the company’s own employees.

Godement said members of the “tiger team” should possess either technical skills or a deep understanding of the company’s processes.

“In the enterprise, like customer support, what we found is that the vast majority of the knowledge is in people’s heads,” Godement said.

“Unless you have that tiger team, a mix of technical and subject matter experts, it’s really hard to get something out of the ground,” he added.

Next, Godement and Wu said companies need to develop clear benchmarks, or what they call “evals,” to track their progress with AI.

“Evals are much harder than what it looks to get done,” Godement said.

“Evals, oftentimes, need to come bottom-up, because all of these things are kind of in people’s heads, in the actual operator’s heads. It’s actually very hard to have a top-down mandate,” Wu said.
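To make the idea concrete, here is a minimal sketch of such a bottom-up eval harness in Python. It is illustrative only, not OpenAI’s tooling; the case data, the substring-based scoring and all names are assumptions for the example.

```python
# A minimal sketch of a bottom-up "eval": subject-matter experts
# supply prompt/expected-answer pairs, and the harness scores any
# model-calling function against them. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str  # the answer a subject-matter expert considers correct

def run_evals(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Return the fraction of cases the model passes. Substring
    matching is a crude but common starting point for free-text
    outputs; real evals often use graders or rubrics."""
    passed = sum(1 for c in cases if c.expected.lower() in model(c.prompt).lower())
    return passed / len(cases)

# Hypothetical support-desk cases captured from the people who hold
# the knowledge "in their heads", per Godement and Wu.
cases = [
    EvalCase("What is our refund window?", "30 days"),
    EvalCase("Which plan includes phone support?", "enterprise"),
]

# Any callable that maps prompt -> answer can be scored, so the same
# harness tracks progress as prompts or models change over time.
score = run_evals(lambda p: "Refunds are accepted within 30 days of purchase.", cases)
print(f"pass rate: {score:.0%}")  # -> pass rate: 50%
```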

Lastly, Godement said that companies should monitor their benchmarks closely and strive to make progress against them.

“A lot of that is like art sometimes, more than science,” he said.

Progress can be achieved by having a good understanding of an AI model’s design, behavior, and constraints, Godement said.

“Sometimes, we even need to fine-tune ourselves the models, when there are some clear limitations, and you know, being patient, getting you way up there and then, ship,” he added.

Godement said it was important for a company’s top leadership to make AI a priority and give their staff the opportunity to experiment.

“Letting the team organize and be like, ‘OK, if you want to start small, start small, and then you can scale it up.’ That would be number 1,” he added.

OpenAI did not respond to a request for comment from Business Insider.

Tech CEOs have been stepping up their efforts when it comes to getting their employees to use AI.

Duolingo CEO Luis von Ahn said in an August interview with The New York Times that Duolingo has been organizing weekly activities to encourage teams to use AI.

“Every Friday morning, we have this thing: It’s a bad acronym, f-r-A-I-days,” von Ahn told the Times.

Howie Liu, the CEO of the vibe coding platform Airtable, said in an episode of Lenny’s Podcast that aired last month that he wants his staff to experiment with AI, even if it means setting aside their regular work.

“If you want to cancel all your meetings for a day or for an entire week and just go play around with every AI product that you think could be relevant to Airtable, go do it. Period,” Liu said.






EY-Parthenon Unveils Neurosymbolic AI To Enable Business Revenue Growth


Ernst & Young LLP (EY) has launched EY Growth Platforms (EYGP), an artificial intelligence-enabled solution powered by neurosymbolic AI.

Announced recently, this update from the EY-Parthenon practice aims to enable businesses to identify untapped opportunities, predict market shifts, and unlock revenue at scale.

As economic uncertainties persist, EYGP positions itself as a digital tool for professionals seeking to redefine commercial models and drive sustainable profitability.

At the heart of EYGP is neurosymbolic AI, a hybrid approach that merges the probabilistic pattern recognition of neural networks with the precise, rule-based logic of symbolic reasoning.

Unlike traditional generative AI, which often excels at content creation but falls short on explainable decisions, neurosymbolic AI delivers actionable insights that are said to be grounded in real-world logic.

This combination allows for predictions that are not only accurate but also transparent and traceable, addressing a need in enterprise decision-making.
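To illustrate the general pattern (not EY’s proprietary implementation, which is not public), here is a minimal sketch of a neurosymbolic decision step in Python: a neural score is combined with hard, auditable symbolic rules, so every outcome carries a traceable justification. All names, thresholds and rules are hypothetical.

```python
def neural_score(application: dict) -> float:
    """Stand-in for a trained neural model's predicted probability
    of an adverse outcome (e.g., a claim default)."""
    return 0.12  # a real system would run model inference here

def symbolic_rules(application: dict) -> list[str]:
    """Hard, auditable rules encoding domain and regulatory logic."""
    violations = []
    if application["age"] < 18:
        violations.append("applicant under minimum age (statutory rule)")
    if application["income"] <= 0:
        violations.append("no verifiable income (underwriting rule)")
    return violations

def decide(application: dict) -> dict:
    """Combine the two components: rules can veto the neural score
    outright, and the decision records the reasons it was reached."""
    violations = symbolic_rules(application)
    risk = neural_score(application)
    approved = not violations and risk < 0.20  # hypothetical risk appetite
    return {
        "approved": approved,
        "risk_score": risk,
        "reasons": violations or ["within risk appetite"],
    }

print(decide({"age": 34, "income": 82_000}))
```

The design choice is the point: the neural part supplies the probabilistic estimate, while the symbolic part makes the final decision transparent and traceable, which is what distinguishes this pattern from purely generative approaches.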

“Neurosymbolic AI is not another analytics tool; it’s a growth engine,” said Jeff Schumacher, the newly appointed EYGP Leader for EY-Parthenon.

Schumacher, who founded Growth Protocol—the proprietary technology EY has exclusively licensed—brings a wealth of expertise in business strategy and innovation to spearhead this initiative.

EYGP functions as a unified data and reasoning engine, ingesting vast amounts of structured and unstructured data from internal systems, external market signals, and EY’s extensive proprietary datasets.

Developed over several years, the platform simulates real-time market scenarios to generate tailored business strategies without the hassle of extensive data cleaning or digital overhauls.

It processes information through intelligent workflows that blend statistical analysis with logical inference, uncovering patterns that traditional methods might miss.

This enables companies to pivot quickly, optimizing everything from go-to-market strategies to high-stakes transactions.

The benefits for businesses are seemingly significant.

In an era where growth demands agility, EYGP helps organizations reimagine their revenue trajectories by identifying hundred-million-dollar opportunities and scaling new ventures.

It tackles complex challenges like building corporate innovation labs or executing mergers with predictive foresight, all while ensuring decisions are statistically sound and compliant.

Mitch Berlin, EY Americas Vice Chair for EY-Parthenon, emphasized this potential:

“In today’s uncertain economic climate, leading companies aren’t just adapting—they’re taking control. EY Growth Platforms gives our clients the predictive power and actionable foresight they need to confidently steer their revenue trajectory. This is potentially a game changer, poised to become the backbone of enterprise growth.”

Real-world applications span diverse industries. In financial services, EYGP enhances underwriting and claims processing with transparent AI that aligns with regulatory standards, optimizing customer outcomes while minimizing risks.

For consumer products companies, it powers hyperpersonalized experiences—think real-time product recommendations, adaptive user interfaces, and location-based services—by analyzing individual behaviors and preferences in context.

In the industrial sector, the platform optimizes supply chains from sourcing to distribution, integrating domain knowledge with operational data to inform decisions on facility placement, logistics routing, and workforce allocation tailored to specific markets.

Deployed for EY-Parthenon clients in consumer goods, industrials, and financial services, EYGP is available in North America, Europe, and Australia.

This launch underscores EY’s commitment to blending human expertise with AI, fostering trust in an increasingly automated environment where it has become difficult to reliably distinguish AI-powered activity from the finer touches of human interaction.

And as businesses grapple with volatility, tools like EYGP could mark a pivotal shift, turning data into dollars and uncertainty into opportunity.




