Tools & Platforms

G7 leaders reaffirm support for responsible AI deployment | Insights

As artificial intelligence (AI) transforms every sector of the global economy, the leaders of the G7 nations, at the 2025 G7 Summit in Kananaskis, Alberta, issued a strong, unified statement reaffirming their commitment to ensuring this transformation benefits people, promotes inclusive prosperity and supports responsible innovation. In their “G7 leaders’ statement on AI for prosperity”, the world’s leading democracies laid out a roadmap for adopting trustworthy AI at scale – balancing economic opportunity with ethical stewardship and energy sustainability. 

From awareness to adoption: Public sector AI with purpose 

One of the core pillars of the G7’s new vision is leveraging AI in the public sector. Governments are being called to not only regulate AI but to actively use it to improve public services, drive efficiency and better respond to citizens’ needs – all while maintaining privacy, human rights and democratic values. 

To lead this effort, Canada, in its role as G7 president, has announced the GovAI Grand Challenge. This initiative includes a series of “rapid solution labs” that will develop creative, practical AI solutions to accelerate public sector transformation. These efforts will be coordinated through the to-be-established G7 AI Network (GAIN), which will connect expertise across member countries and curate a catalogue of open-source, shareable AI tools. Additional details on these programs are forthcoming. 

Empowering SMEs to compete in the AI economy 

The G7 leaders also acknowledged a key truth: small- and medium-sized enterprises (SMEs) are the lifeblood of modern economies. These businesses generate jobs, drive innovation and build resilient local economies. Yet they often face significant barriers to AI adoption, from lack of access to computing infrastructure to gaps in digital skills. 

To close this gap, the G7 launched the AI Adoption Roadmap – a practical guide to help businesses, particularly SMEs, move from understanding AI to implementing it. The roadmap includes: 

  • Sustained investment in AI readiness programs for SMEs 
  • A blueprint for scalable, proven adoption strategies 
  • Cross-border talent exchanges to boost in-house AI capabilities 
  • New trust-building tools to give businesses and consumers confidence in AI systems 

This comprehensive approach is designed to help SMEs not only catch up but leap ahead – adopting AI in ways that are ethical, productive and secure. 

To support this initiative, and as part of the broader $2-billion Canadian Sovereign AI Compute Strategy, on June 25, 2025, the Government of Canada announced a fund that will support Canadian SMEs in accessing high-performance compute capacity to develop made-in-Canada AI products and solutions. Applications for the AI Compute Access Fund can now be submitted.   

A workforce ready for the AI era 

The shift to an AI-powered economy will demand a new kind of workforce. The G7 leaders reaffirmed their support for the 2024 Action Plan for safe and human-centered AI in the workplace. This includes investing in AI literacy and job transition programs, especially for those in sectors likely to be most affected. 

Crucially, the G7 also emphasized equity and inclusion – particularly encouraging girls and underrepresented communities to pursue STEM education and grow their presence in the AI talent pipeline. As AI reshapes our economies, building a diverse and resilient workforce is not only a moral imperative but an economic one. 

Tackling the energy footprint of AI 

With the exponential growth of large AI models comes a steep rise in energy consumption. The G7 acknowledged the environmental toll and vowed to address it head-on. In a first-of-its-kind commitment, member nations will work together on a comprehensive workplan on AI and energy, due by the end of 2025. 

This work will focus on developing energy-efficient AI systems, optimizing data center operations and using AI itself to drive clean energy innovation. The goal: ensure that the AI revolution doesn’t come at the cost of our planet – but instead helps to preserve it. 

Partnering for global inclusion 

Finally, the G7 turned their focus outward to the developing world, where digital divides threaten to leave billions behind. Leaders committed to expanding AI access in emerging markets through trusted technology, targeted investment and local collaboration. 

From the AI for Development Funders Collaborative to partnerships with universities and international organizations, the G7 aims to build mutually beneficial partnerships that bridge capacity gaps and support locally driven AI innovation. 

The technology, intellectual property and privacy group at MLT Aikins is tracking developments in the regulation, governance and deployment of AI in today’s economy and can give you the advice you need to navigate the ever-changing world of AI.

Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation. 


Empowering, not replacing: A positive vision for AI in executive recruiting


Tamara is a thought leader in Digital Journal’s Insight Forum.


“So, the biggest long‑term danger is that, once these artificial intelligences get smarter than we are, they will take control — they’ll make us irrelevant.” — Geoffrey Hinton, Godfather of AI

Modern AI often feels like a threat, especially when the warnings come from the very people building it. Sam Altman, the salesman behind ChatGPT (not an engineer, but the face of OpenAI and someone known for convincing investors), has said with offhand certainty, as casually as ordering toast or predicting the sun will rise, that entire categories of jobs will be taken over by AI. That includes roles in health, education, law, finance, and HR.

Some companies now won’t hire people unless AI fails at the given task, even though these models hallucinate, invent facts, and make critical errors. They’re replacing people with a tool we barely understand.

Even leaders in the field admit they don’t fully understand how AI works. In May 2025, Dario Amodei, CEO of Anthropic, said the quiet part out loud:

“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned. This lack of understanding is essentially unprecedented in the history of technology.”

In short, no one is fully in control of AI. A handful of Silicon Valley technocrats have appointed themselves arbiters of its direction, and they work more or less in secret. There is no real government oversight, and development proceeds without legal guardrails. Those guardrails may not arrive for years, by which time it may be too late to affect what has already been let out of Pandora’s box.

So we asked ourselves: Using the tools available to us today, why not model something right now that can in some way shape the discussion around how AI is used? In our case, this is in the HR space. 

What if AI didn’t replace people, but instead helped companies discover them?

Picture a CEO in a post-merger fog. She needs clarity, not another résumé pile. Why not introduce her to the precise leader she didn’t know she needed, using AI? 

Instead of turning warm-blooded professionals into collateral damage, why not use AI to help solve, thoughtfully, ethically, and practically, the problems that now exist across HR, recruitment, and employment?

An empathic role for AI

Most job platforms still rely on keyword-stuffed résumés and keyword-matching algorithms. As a result, excellent candidates often get filtered out simply for using the “wrong” terms. That’s not just inefficient; it borders on malpractice. It hurts companies and candidates alike. It’s an example of technology poorly applied, yet it is the norm today.
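As a toy illustration of the keyword screen described above: the required terms and the sample résumé text below are invented for this sketch. Real applicant-tracking systems vary, but the failure mode is the same.

```python
# Invented sketch of naive keyword screening: a candidate is rejected
# unless every required term appears verbatim, so equivalent experience
# phrased differently slips through the filter.

REQUIRED_KEYWORDS = {"stakeholder management", "p&l ownership", "agile"}

def passes_keyword_screen(resume_text: str) -> bool:
    """Accept only if every required keyword appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

strong_candidate = (
    "Led cross-functional teams through a turnaround, owned the "
    "profit-and-loss for a $40M unit, and ran agile delivery."
)

# Equivalent experience, wrong vocabulary: the candidate is filtered out.
print(passes_keyword_screen(strong_candidate))  # False
```

The candidate plainly has P&L ownership and stakeholder experience, but because the résumé never uses those exact strings, the screen rejects them.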

Imagine instead a platform that isn’t keyword-driven, but that guides candidates through discovery to create richer, more dimensional profiles showcasing the unique strengths, instincts, and character that shape real-world impact. This would go beyond skillsets and job titles to the deeper personal qualities that differentiate equally experienced candidates, resulting in a leadership candidate better fitted to any given role.

One leader, as an example, may bring calm decisiveness in chaos. Another may excel at building unity across silos. Another might be relentless at rooting out operational bloat and uncovering savings others missed.

A system like this that helps uncover those traits, guides candidates to articulate them clearly, and discreetly learns about each candidate to offer thoughtful, evolving insights, would see AI used as an advocate, not a gatekeeping nemesis.

For companies, this application would reframe job descriptions around outcomes, not tasks. Instead of listing qualifications, the tool helps hiring teams articulate what they’re trying to achieve, whether that’s growth, turnaround, post-M&A integration, or cost efficiency, and then finds the most suitable candidate match.

Fairness by design

Bias is endemic in HR today: ageism, sexism, and discrimination based on disability and race. Imagine a platform that actively discourages bias. Gender, race, age, and even profile photos are optional. Unlike most recruiting platforms, the system doesn’t reward those who include a photo. It doesn’t penalize those who don’t know how to game a résumé.

Success then becomes about alignment. Deep expertise. Purposeful outcomes.

This design gives companies what they want: competence. And gives candidates what they want: a fair chance.

This is more than an innovative way to use current AI technology. It’s a value statement about prioritizing people.

Why now

We’re at an inflection point.

Researchers like Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean forecast in AI 2027 that superhuman AI (AGI, then superintelligence) will bring changes in the next decade more disruptive than the Industrial Revolution.

If they’re even a little right, then the decisions being made today by a small circle in Silicon Valley will affect lives everywhere.

It’s important to step into the conversation now to help shape AI’s real-world role. The more human-centred, altruistic, practical uses of AI we build and model now, the more likely these values will help shape laws, norms, and infrastructure to come.

This is a historic moment. How we use AI now will shape the future. 

People-first design

Every technology revolution sparks fear, but this one is different: it is the first since the Industrial Revolution in which machines are explicitly designed to replace people. Entire roles and careers may vanish.

But that isn’t inevitable either. It’s a choice. 

AI can be built to assist, not erase. It can guide a leader to their next opportunity. It can help a CEO find a partner who unlocks transformation. It can put people out front, not overshadow them. 

We invite others in talent tech and AI to take a similar stance. Let’s build tools for people. Let’s avoid displacement and instead elevate talent. Let’s embed honesty, fairness, clarity, and alignment in everything we make. 

We don’t control the base models. But we do control how we use them. And how we build with them.

AI should amplify human potential, not replace it. That’s the choice I’m standing behind. 




ABA ethics opinion addresses jury selection discrimination from consultants and AI technology

Published

on


When using peremptory challenges, lawyers should not strike jurors based on discrimination, according to an ethics opinion by the ABA’s Standing Committee on Ethics and Professional Responsibility.

That also applies to client directives, as well as guidance from jury consultants or AI software, according to Formal Opinion 517, published Wednesday.

Such conduct violates Model Rule 8.4(g), which prohibits harassment and discrimination in the practice of law based on “race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status.”

A lawyer does not violate Rule 8.4(g) by exercising peremptory challenges on a discriminatory basis where not forbidden by other law, according to the opinion.

The U.S. Supreme Court explained such conduct violates the Equal Protection Clause of the 14th Amendment in Batson v. Kentucky (1986) and J.E.B. v. Alabama ex rel. T.B. (1994). In Batson, a lawyer struck a series of Black jurors in a criminal trial. In J.E.B., a lawyer struck a series of males in a paternity child support action.

The ethics opinion addresses when a Batson-type violation also constitutes professional misconduct under Rule 8.4(g).

Seemingly, if a lawyer commits such a violation, the lawyer also runs afoul of Rule 8.4(g). After all, in both contexts the lawyer has engaged in a form of unlawful discrimination.

“Striking prospective jurors on discriminatory bases in violation of substantive law governing juror selection is not legitimate advocacy. Conduct that has been declared illegal by the courts or a legislature cannot constitute ‘legitimate advocacy,’” the ethics opinion states.

However, Comment [5] to the model rule provides that a trial judge’s finding of a Batson violation does not, by itself, establish a violation of Rule 8.4(g).

The comment, according to the ethics opinion, gives “guidance on the evidentiary burden in a disciplinary proceeding.”

For example, in a disciplinary hearing a lawyer may be able to offer “a more fulsome explanation” for why they struck certain jurors. Furthermore, there is a “higher burden of proof” in lawyer discipline proceedings.

The ethics opinion also explains that a lawyer violates Rule 8.4(g) only if they knew or reasonably should have known that the exercise of the peremptory challenges was unlawful. The lawyer may genuinely believe they had legitimate, nondiscriminatory reasons for striking certain jurors, such as their age, whether they paid attention during the jury selection process or something else.

According to the opinion, the question then centers on “whether ‘a lawyer of reasonable prudence and competence’ would have known that the challenges were impermissible.”

The opinion also addresses the difficult question of what happens if a client or jury consultant offers nondiscriminatory reasons for striking certain jurors and the lawyer follows that advice. Here, the test is whether a reasonably competent and prudent lawyer should have known that the client’s or jury consultant’s reasons were pretextual rather than legitimate.

Additionally, the opinion addresses a scenario in which an AI-based juror selection program ranks prospective jurors and, unknown to the lawyer, applies those rankings in a discriminatory manner. Lawyers should use “due diligence to acquire a general understanding of the methodology employed by the juror selection program,” the opinion states.

The ABA announced the opinion in a July 9 press release.






Why Coca-Cola CIO Neeraj Tolmare prioritizes big-impact AI pilot projects

Neeraj Tolmare, the chief information officer at Coca-Cola, says the artificial intelligence pilots that he invests in only get authorized if the beverage giant can envision revenue generation or big efficiency gains once fully in production.

This strategy reflects a mandate from Coke’s CEO James Quincey and the broader leadership team that demands that AI experimentation “has to be tied to a real outcome that moves the needle for us,” says Tolmare. “We are all about scale.” 

That scale is massive for Coca-Cola, which is ranked 97th on the Fortune 500. Each AI bet by Tolmare has to weigh the cost of implementation, as well as how the technology may be used internally across various functions ranging from software development to sales. On top of that, he must weigh how new technology advancements like agentic AI will change workflows and how data is shared between Coke and its key external partners, including the 200 bottlers and 950 production facilities that make up the system Coke relies on to serve 2.2 billion beverages each day.

One pilot that Tolmare is particularly excited about is an algorithm Coke developed to help retail outlets better predict demand, using AI to ensure that their shelves are always appropriately stocked.

In the past, sales agents would visit stores every few weeks and at times, find themselves facing half-empty coolers. Coke had to rely on historical retail scan data to try to anticipate future demand. But with AI, the company was able to create an algorithm that triangulates historical data with weather patterns, which is an important consideration for beverage purchases, and geolocation data from Google to more precisely predict future sales. 

Those AI insights inform the messages Coca-Cola sends via WhatsApp to managers, advising them on when to stock more Sprite or Diet Coke before they run out. The pilot was tested in three countries and saw sales increase by 7% to 8% versus outlets that weren’t using the AI algorithm, according to Tolmare, who intends to broaden the application of this AI tool to more markets globally.
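As a rough sketch of the signal triangulation described here: the feature names, weights, and baseline below are invented for illustration, not drawn from Coca-Cola’s actual model.

```python
# Invented sketch of a demand signal that triangulates historical sales,
# a weather forecast, and a location-traffic factor. None of the feature
# names or weights come from Coca-Cola; they only illustrate combining
# several signals into one restocking score.

def restock_score(avg_weekly_sales: float,
                  forecast_temp_c: float,
                  foot_traffic_index: float) -> float:
    """Higher score = higher predicted demand, so restock sooner."""
    # Assume hot weather lifts cold-drink demand above a 20 C baseline.
    weather_lift = max(0.0, (forecast_temp_c - 20.0) * 0.03)
    return avg_weekly_sales * (1.0 + weather_lift) * foot_traffic_index

# A hot week at a busy outlet outranks a cool week at a quiet one,
# even with identical historical sales.
hot_busy = restock_score(120.0, 32.0, 1.2)    # 120 * 1.36 * 1.2
cool_quiet = restock_score(120.0, 15.0, 0.8)  # 120 * 1.00 * 0.8
print(hot_busy > cool_quiet)  # True
```

The point of triangulation is visible in the comparison: historical sales alone would rank both outlets identically, while the added weather and traffic signals separate them.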

Another investment with big reach is Coke’s usage of AI for content creation. The beverage giant sells in more than 180 countries and creates marketing assets in over 130 languages. It’s a process that’s both time-consuming and costly, because Coke has to adjust the materials to reflect not only language but also cultural sensibilities. Coke recently created 20 AI-generated assets, all based on the company’s own proprietary intellectual property, and then created 10,000 variations to be used across many different languages and geographies.
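The fan-out from a handful of masters to thousands of localized variations is essentially combinatorial. A minimal sketch, with invented asset names and placement formats (the article does not describe Coke’s actual pipeline), shows how 20 masters across 130 languages quickly exceed 10,000 variations:

```python
# Invented sketch of combinatorial asset fan-out: each variation is one
# combination of master asset, language, and placement format. The names
# and the "formats" dimension are assumptions for illustration only.
from itertools import product

base_assets = [f"asset_{i:02d}" for i in range(20)]   # 20 master assets
languages = [f"lang_{i:03d}" for i in range(130)]     # 130 languages
formats = ["banner", "social", "in-store", "video-still"]

variations = [
    {"asset": a, "language": lang, "format": fmt}
    for a, lang, fmt in product(base_assets, languages, formats)
]

print(len(variations))  # 10400
```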

Tolmare asserts that consumers are 20% more likely to engage with this content than prior iterations that weren’t crafted by AI, and that the AI-produced content was three times faster to generate. 

In both cases, he stresses the importance of keeping humans in the loop. “AI has proven that it can unlock value that a human being would not be able to unlock because the computation needed to mine through all this data and bring something meaningful is so complex,” says Tolmare, in regards to the algorithm to support retail sales.

With marketing, AI’s application is a bit trickier, and Coke is still figuring out the right approach. An AI-generated Christmas spot drew some backlash last year, and in April a different campaign mistakenly featured a quote attributed to novelist J.G. Ballard from a book he never wrote. Tolmare says Coke has strict guidelines in place to ensure that AI-generated content avoids social biases that would be harmful to consumers, and the company is keenly aware of the hallucinations and deepfakes that can occur when generating creative content with AI tools.

“That’s not to say that we won’t ever fall into it,” admits Tolmare. “But we have a mechanism in place that allows us to react in a responsible way if and when that happens.”

Most of Tolmare’s career has been in the technology industry, including working for gadget manufacturer Palm, which ultimately lost out on an effort to compete with Apple in the smartphone market. He also previously served as a director at networking-equipment company Cisco Systems and as a VP at HP during the years in which it split up the personal-computer and printer businesses from the corporate hardware and services arm. 

“I’ve spent more than two decades working for companies that develop and manufacture technology that the rest of the world uses,” says Tolmare. He joined Coca-Cola in 2018 to work at a company that consumes technology, rather than developing it, to grow its business.

Tolmare spearheaded the company’s all-in bet on cloud computing, retiring or selling all of Coke’s physical data centers. Roughly 80% of the company’s footprint is in Microsoft Azure with the rest split between Amazon Web Services and Google Cloud. “We continuously monitor which one is going to give us better efficiency by moving a workload from one cloud to another,” says Tolmare.

His hybrid cloud playbook is also applicable to how Coke thinks about generative AI experimentation. Coke works closely with Microsoft and its strategic partner OpenAI, but also taps other AI hyperscalers including Google, Meta, and Anthropic. “We don’t want to paint ourselves into a corner too soon,” he says. “The market itself is not consolidating.”

What’s next on deck is further exploration with agentic AI, systems that are designed to perform tasks autonomously or with little human intervention. The company is considering offerings from vendors including Microsoft, SAP, and Adobe, while developing its own AI agents trained on Coke’s data. All of these AI agents remain in the pilot phase, because Tolmare still wants to determine if these systems can be built at a cost effective price, while achieving the optimal business outcomes.

“We haven’t launched agentic in production yet, but we are very close,” says Tolmare. “I’m fascinated by what this can do for our business.”

John Kell


NEWS PACKETS

Tech giants offer more discounts to the federal government. The Wall Street Journal reports that Oracle is cutting the cost of the company’s database software and cloud-computing service for the federal government. The discount on cloud infrastructure is the first of its kind for the entire government and could be replicated with other cloud providers in negotiations that are currently underway. Other recent price discounts that the General Services Administration was able to extract from vendors include a deal cut with Salesforce to trim the price of messaging app Slack by 90% through the end of November and lower costs from Google and Adobe. The GSA has opted to negotiate these savings through direct engagement with the tech giants, rather than discussing pricing with third parties. “Through procurement consolidation, we’re aiming to bring the leverage of the whole, commanding purchasing power of the federal wallet to these [technology providers] to get the best discounts for the taxpayers,” Federal Acquisition Service Commissioner Josh Gruenbaum told WSJ.

Ford CEO predicts AI will replace half of U.S. white-collar workers. CEOs have increasingly shared dour prognostications about AI’s impact on the future of work, and the latest came from Jim Farley, the CEO of automaker Ford. Speaking at the Aspen Ideas Festival, Farley argued that the education system is too focused on pushing students toward a four-year college degree, which may be counterproductive given his view that “AI is gonna replace literally half of all white-collar workers in the U.S.” He pitched more investment in trade workers, those who may take a job on an auto factory floor or in construction, where there is a shortage of workers. He may have a point, as tech giants continue to slice jobs, with Microsoft earlier this month cutting another 9,000 jobs, bringing the total eliminated by the tech giant to 15,000 this year.

But IT unemployment dropped to its lowest level of the year in June. Much of the consternation about AI’s impact on work has been focused on tech jobs, especially as big employers like Microsoft and TikTok continue to generate national headlines when announcing layoffs. That said, across all sectors, businesses are on the hunt for talent and have been hiring software developers and engineers, systems engineers and architects, and cybersecurity professionals, according to CIO Dive, which reported on insights from CompTIA. The IT trade organization reported that IT unemployment fell to 2.8% in June, as companies across all sectors added 90,000 net new tech jobs. Active employer job listings for tech positions reached 455,341 in June, with 47% of that total newly added that month. But the tech industry itself continues to be a weak point, as that sector reduced staffing by a net 7,256 positions in June.

Google parent company’s AI-powered drug firm gearing up for clinical trials. Alphabet’s drug discovery arm, Isomorphic Labs, is getting close to testing AI-designed drugs in humans, Fortune reports, as company president Colin Murdoch says “the next big milestone is actually going to clinical trials, starting to put these things into human beings. We’re staffing up now. We’re getting very close.” Isomorphic Labs, which was spun out of Google’s AI research lab DeepMind in 2021, raised $600 million in April to help propel its goal of combining the expertise of machine learning researchers and pharma veterans to design new treatments at a faster pace, more cheaply, and with a higher rate of success. Isomorphic supports existing pharma drug discovery programs, having already inked research collaborations with Novartis and Eli Lilly, and also develops its own internal drug candidates in areas such as oncology and immunology.

ADOPTION CURVE

Why CIOs need to focus on both data quality and permission. A new survey found that while 52% of companies are “highly reliant” on the consented data used to craft personalized marketing materials and product development, roadblocks remain in how enterprises use this data. That’s important to get right if enterprises want to retain the trust of the consumers who authorize the use of their data, while also adhering to state laws in places like California, which has given individuals greater control over their personal information.

Some big issues that CIOs face related to consent requirements include a lack of visibility into data flows (61%), followed closely by difficulty tracking user preferences across channels (59%). They’ll need to solve these problems quickly as 100% of the 265 survey respondents say that consented consumer data is foundational to their ability to succeed when it comes to pursuing AI initiatives. 

Kate Parker, CEO of Transcend, the data privacy management software provider that funded the study, tells Fortune that the findings highlight a shift in thinking that’s less focused on data quality and more about permission to use the consumer data in the first place. 

“If CIOs and digital leaders are in agreement that the consent of the data is the fundamental component of AI and personalization, but they also admit very openly that they do not have the infrastructure to solve that in scale, that’s going to be one of the biggest growth blockers,” says Parker.


JOBS RADAR

Hiring:

H&H is seeking a CIO, based in New York City. Posted salary range: $200K-$250K/year.

Lease & LaBau is seeking a CIO, based in New York City. Posted salary range: $450K-$500K/year.

Vibrant Emotional Health is seeking a CTO in a remote-based role. Posted salary range: $261K-$350K/year.

Major League Soccer is seeking a chief information security officer, based in New York City. Posted salary range: $200K-$275K/year.

Hired:

Old National Bancorp named Matt Keen as CIO, taking on the C-suite leadership role after the Minneapolis-based bank acquired Bremer Bank, where Keen had served in the same role. Previously, Keen also served as a consultant for American Express as part of his tenure at PricewaterhouseCoopers. He also previously served as CTO at real estate investment trust Two Harbors Investment Corp.

EProductivity Software Packaging announced the appointment of Scott Brown as CIO, joining the software provider for the packaging and print industries to oversee an expansion of the company’s cloud portfolio, as well as enhance security and compliance across all platforms. Brown joins from software provider Sciforma, where he served as CIO.

Rancher Government Solutions promoted Adam Toy to the role of CTO, where he will steer the strategic direction of the company’s tech portfolio, overseeing platform engineering, architecture, and product development. Toy had served in that role on an interim basis since March and initially joined Rancher, which helps government agencies modernize their IT infrastructure, in 2020 as a senior solutions architect.


