Microsoft has reportedly taken a bold step by mandating the use of artificial intelligence across its workforce, instructing managers to assess employees’ use of AI tools – including those from competitors – as part of performance reviews.
“AI is now a fundamental part of how we work,” said Julia Liuson, president of Microsoft’s developer division, in an internal memo seen by Business Insider. “Just like collaboration, data-driven thinking and effective communication, using AI is no longer optional – it’s core to every role and every level.”
The news comes as other companies report growing reliance on AI. In an interview with Bloomberg, Salesforce CEO Marc Benioff recently revealed that AI completes “30 to 50 per cent” of the company’s work. Elsewhere, media company Thomson Reuters has warned employees that those who fail to adopt AI tools risk limited long-term career prospects.
Legal pitfalls and how AI mandates could backfire
While integrating AI into job expectations may seem like a logical step toward future-proofing the workforce, forcing adoption too quickly could create legal and operational issues, said Elissa Thursfield, founder of HR software and consultancy business HRoes. “Where this could backfire is through the alienation of certain sectors of the workforce who are either reticent in the use of AI or have not upskilled in its usage,” she warned.
Thursfield added that linking performance reviews to AI proficiency without adequate support could expose employers to legal claims: “If an employee were to be dismissed on the basis of poor performance linked to their AI use and they have not been adequately trained, it could result in an unfair dismissal claim if this occurred in England or Wales.
“Microsoft would need to be confident it has fair metrics to be judging staff against before making performance-related decisions.”
Martin Colyer, innovation and AI strategy director at HR consultancy LACE Partners, echoed this concern. “Mandating adoption is always difficult as it can have the opposite effect and backfire… not least on performance, morale and even attrition,” he said.
Colyer highlighted the need to clearly define what ‘good’ AI use looks like across different roles to avoid setting unrealistic or irrelevant expectations.
Building confidence through training and leadership
To avoid implementation failures, Thursfield said AI mandates must be preceded by meaningful investment in training and internal readiness. Without this foundation, efforts to enforce adoption are likely to fall flat.
“Microsoft would have to go through a programme of training and ensure that support has been provided to employees to be confident in issuing the mandate,” explained Thursfield.
Teresa Rose, founder of ConsultHer, added: “Providing space for employees to experiment with AI and develop their abilities is essential before linking this to performance.”
Creating a supportive environment starts from the top, with strong and empathetic leadership setting the tone. “Adoption here can be encouraged by role modelling, especially in leadership and champion groups,” said Colyer. “It is vital to demonstrate the benefits, encouraging a culture of curiosity, safe experimentation and an ability to ask questions.”
Metrics that matter: avoiding shallow measurement
Even when training is in place, how AI engagement is measured can make or break its effectiveness. Some companies are turning to quantitative goals – for example, law firm Shoosmiths recently linked a £1m bonus pool to daily use of Microsoft’s Copilot AI assistant, targeting four prompts per employee each day.
But experts warn against using frequency as a proxy for value. “Counting the prompts and not the quality and impact isn’t going to create value,” said Rose.
There are also concerns about operational risk if AI tools are unavailable or unreliable. And with many managers still developing their own AI capabilities, meaningful evaluation may prove difficult, Rose added: “Managers validating performance would need to have strong AI literacy to do that effectively.”
Ethical and inclusion challenges in AI use
Beyond gaps in managers’ own skills, AI metrics can unintentionally overlook key inclusion and accessibility factors. “Bias, neurodiversity or disability could be factors, as well as digital fluency – not everyone is natively comfortable with new and emerging technologies,” said Colyer.
In light of these risks, Liz Sebag-Montefiore, director and co-founder of 10Eighty, stressed the need for clear standards. “If, as an employer, you want workers to use AI at work, it’s important that you understand the risks and establish standards with regards to sources, citations and privacy laws,” she added.
“Leadership needs to ensure that AI is used ethically and responsibly, and should review the impact on staff and their roles, avoiding disruption while maximising the opportunities on offer.”