AI Insights

Microsoft Lays Off Staff as Savings From AI Top $500 Million

Microsoft is ramping up internal use of artificial intelligence (AI) tools to cut costs and increase productivity, even as the company trims thousands of jobs across departments.

Bloomberg reported, citing a person familiar with the remarks, that Chief Commercial Officer Judson Althoff told employees in a recent presentation that AI is enhancing productivity across functions, including sales, customer service and software development.

AI helped Microsoft save more than $500 million last year in its call centers alone and improved both employee and customer satisfaction, the person said.

The company is also using AI to manage interactions with smaller clients — an initiative that is still early-stage but already generating tens of millions of dollars in revenue, according to the same person.

Read more: Microsoft’s Nadella: AI Agents Serve as ‘Chiefs of Staff’

At the same time, Microsoft has announced job cuts of about 15,000 so far this year, with the latest round affecting customer-facing roles such as sales. The layoffs have raised concerns about AI displacing workers, a trend echoed across the technology sector.

Salesforce has delegated 30% of its internal work to AI, enabling it to reduce hiring for some positions.

Tech isn’t the only industry facing the potential impact of AI in the workplace. Ford, JPMorgan and other companies have warned of the possibility of deep job cuts as AI continues to advance.

Read more: Microsoft to Cut 3% of Workforce While Reducing Management Layers

Althoff said Microsoft’s AI tools, including its Copilot assistant, could make its sellers more effective. He said each seller is finding more leads, closing deals more quickly and generating 9% more revenue with Copilot’s help.

Microsoft said in April that its GitHub Copilot has 15 million users and noted that AI now generates 35% of the code for new products, helping speed up development.

Other technology companies are making similar moves: Executives at Alphabet and Meta have noted that AI is now responsible for writing substantial amounts of code.

Microsoft declined to comment.

Read more: Microsoft Cuts Nearly 9K Jobs in 2025’s 4th Round of Layoffs

 





Artificial Intelligence Coverage Under Cyber Insurance



A small but growing number of cyber insurers are incorporating language into their policies that specifically addresses risks from artificial intelligence (AI). The June 2025 issue of The Betterley Report’s Cyber/Privacy Market Survey identifies at least three insurers that are incorporating specific definitions or terms for AI. This raises an important question for policyholders: Does including specific language for AI in a coverage grant (or exclusion) change the terms of coverage offered?

To be sure, at present few cyber policies expressly address AI. Most insurers appear to be maintaining a “wait and see” approach; they are monitoring the risks posed by AI, but they have not revised their policies. Nevertheless, a few insurers have sought to reassure customers that coverage is available for AI-related events. One insurer has gone so far as to state that its policy “provides affirmative cover for cyber attacks that utilize AI, ensuring that the business is covered for any of the losses associated with such attacks.” To the extent that AI is simply one vector for a data breach or other cyber incident that would otherwise be an insured event, however, it is unclear whether adding AI-specific language expands coverage. On the other side of the coin, some insurers have sought to limit exposure by incorporating exclusions for certain AI events.   

To assess the impact of these changes, it is critical to ask: What does artificial intelligence even mean?

This is a difficult question to answer. The field of AI is vast and constantly evolving. AI can curate social media feeds, recommend shows and products to consumers, generate email auto-responses, and more. Banks use AI to detect fraud. Driving apps use it to predict traffic. Search engines use it to rank and recommend search results. AI pervades daily life and extends far beyond the chatbots and other generative AI tools that have been the focus of recent news and popular attention.

At a more technical level, AI also encompasses numerous nesting and overlapping subfields. One major subfield, machine learning, covers techniques ranging from linear regression to decision trees. It also includes neural networks, which, when layered together, power the subfield of deep learning. Deep learning, in turn, underlies the subfield of generative AI. And generative AI itself can take different forms, such as large language models, diffusion models, generative adversarial networks, and neural radiance fields.

That may be why most insurers have been reluctant to define artificial intelligence. A policy could name certain concrete examples of AI applications, but it would likely miss many others, and it would risk falling behind as AI was adapted for other uses. The policy could provide a technical definition, but that could be similarly underinclusive and inflexible. Even referring to subsets such as “generative AI” could run into similar issues, given the complex techniques and applications for the technology.

The risk, of course, is that by not clearly defining artificial intelligence, a policy that grants or excludes coverage for AI could have different coverage consequences than either the insurer or insured expected. Policyholders should pay particular attention to provisions purporting to exclude loss or liability from AI risks, and consider what technologies are in use that could offer a basis to deny coverage for the loss. We will watch with interest cyber insurers’ approach to AI — will most continue to omit references to AI, or will more insurers expressly address AI in their policies?


This article was co-authored by Anna Hamel.




Code Green or Code Red? The Untold Climate Cost of Artificial Intelligence



As the world races to harness artificial intelligence, few are pausing to ask a critical question: What is AI doing to the planet?

AI is being heralded as a game-changer in the global fight against climate change. It is already helping scientists model rising temperatures and extreme weather phenomena, enabling decision-making bodies to predict and prepare for unexpected weather, and allowing energy systems to become smarter and more efficient. According to the World Economic Forum, AI could contribute up to $5.1 trillion annually to the global economy, provided it is deployed sustainably during the climate transition (WEF, 2025).

Beneath the sleek interfaces and climate dashboards lies a growing environmental cost. The widespread use of generative AI, in particular, is creating a new carbon frontier, one that we’re just beginning to untangle and understand.

Training large-scale AI models is energy-intensive, according to a 2024 MIT report. Training a single GPT-3-sized model can consume about 1,300 megawatt-hours of electricity, enough to power almost 120 U.S. homes for a year. And once deployed, AI systems are not static: they continue to consume energy each time a user interacts with them. For example, generating a single AI image may require as much energy as watching a short video on an online platform, while a large language model query requires almost 10 times more energy than a typical Google search (MIT, 2024).
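The "120 homes" comparison can be sanity-checked with quick arithmetic. A minimal sketch follows; the ~10,700 kWh/year figure for average U.S. household electricity consumption is an assumption (a common EIA-style ballpark), not a number from the article:

```python
# Back-of-the-envelope check: does ~1,300 MWh of training energy
# really correspond to roughly 120 U.S. homes for a year?
TRAINING_MWH = 1300          # reported GPT-3-scale training consumption
HOME_KWH_PER_YEAR = 10_700   # assumed average U.S. household use (ballpark)

homes_powered = TRAINING_MWH * 1000 / HOME_KWH_PER_YEAR  # MWh -> kWh
print(f"{homes_powered:.0f} homes for a year")  # prints "121 homes for a year"
```

The result lands at roughly 121 homes, consistent with the "almost 120" figure cited from the MIT report.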

As AI becomes embedded in everything from online search to logistics and social media, this energy use is multiplying at scale. The International Energy Agency (IEA) warns that by 2026, data center electricity consumption could double globally, driven mainly by the rise of AI and cryptocurrency. Recent developments around the Digital Euro make this discussion all the more pressing. Without rapid decarbonization of energy grids, this growth could significantly increase global emissions, undermining progress on climate goals (IEA, 2024).

Chart: Sotiris Anastasopoulos, with data from the IEA’s official website.

The climate irony is real: AI is both a solution to and a multiplier of Earth’s climate challenges.

Still, when used responsibly, AI remains a powerful ally. The UNFCCC’s 2023 “AI for Climate Action” roadmap outlines dozens of promising, climate-friendly applications. AI can detect deforestation from satellite imagery, track methane leaks, help decarbonize supply chains, and forecast the spread of wildfires. In agriculture, AI systems can optimize irrigation and fertilizer use, helping reduce emissions and protect soil. In the energy sector, AI enables real-time management of grids, integrating variable sources like solar and wind while improving reliability. But to unlock this potential, the conversation around AI must evolve, from excitement about its capabilities to accountability for its impact.

This starts with transparency. Today, few AI developers publicly report the energy or emissions cost of training and running their models. That needs to change. The IEA calls for AI models to be accompanied by “energy use disclosures” and impact assessments. Governments and regulators should enforce such standards, just as they do for industrial emissions or vehicle efficiency (UNFCCC, 2023).

Second, green infrastructure must become the default. Data centers must be powered by renewable energy, not fossil fuels. AI models must be optimized for efficiency, not just performance. Instead of racing toward ever-larger models, we should ask what the climate cost of model inflation is, and whether it is worth it (UNFCCC, 2023).

Third, we need to question the uses of AI itself. Not every application is essential. Does society actually benefit from energy-intensive image generation tools for trivial entertainment or advertising? While AI can accelerate climate solutions, it can also accelerate consumption, misinformation, and surveillance. A climate-conscious AI agenda must weigh trade-offs, not just celebrate innovation (UNFCCC, 2023).

Finally, equity matters. As the UNFCCC report emphasizes, the AI infrastructure powering the climate transition is heavily concentrated in the Global North. Meanwhile, the Global South, home to many of the world’s most climate-vulnerable populations, lacks access to these tools, data, and services. An inclusive AI-climate agenda must invest in capacity-building, data access, and technological advancement to ensure no region is left behind (UNFCCC, 2023).

Artificial intelligence is not green or dirty by nature. Like all tools, its impact depends on how and why we use it. We are still early enough in the AI revolution to shape its trajectory, but not for long.

The stakes are planetary. If deployed wisely, AI could help the transition to a net-zero future. If mismanaged, it risks becoming just another accelerant of a warming world.

Technology alone will not solve the climate crisis. But climate solutions without responsible technology are bound to fail.

*Sotiris Anastasopoulos is a student researcher at the Institute of European Integration and Policy of the UoA. He is an active member of YCDF and AEIA and currently serves as a European Climate Pact Ambassador. 

This op-ed is part of To BHMA International Edition’s NextGen Corner, a platform for fresh voices on the defining issues of our time.




AI hallucination in Mike Lindell case serves as a stark warning : NPR



MyPillow CEO Mike Lindell arrives at a gathering of supporters of Donald Trump near Trump’s residence in Palm Beach, Fla., on April 4, 2023. On July 7, 2025, Lindell’s lawyers were fined thousands of dollars for submitting a legal filing riddled with AI-generated mistakes.

Octavio Jones/Getty Images



A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell in a Colorado defamation case to pay $3,000 each after they used artificial intelligence to prepare a court filing filled with a host of mistakes and citations of cases that didn’t exist.

Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed a document in February containing more than two dozen mistakes, including hallucinated cases (fake cases fabricated by AI tools), Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday.

“Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it,” Wang wrote in her decision. “Indeed, federal courts rely upon the assistance of attorneys as officers of the court for the efficient and fair administration of justice.”

The use of AI by lawyers in court is not, in itself, illegal. But Wang found the lawyers violated a federal rule that requires them to certify that claims they make in court are “well grounded” in the law. Fake cases, it turns out, don’t meet that bar.

Kachouroff and DeMaster didn’t respond to NPR’s request for comment.

The error-riddled court filing was part of a defamation case involving Lindell, the MyPillow creator, President Trump supporter and conspiracy theorist known for spreading lies about the 2020 election. Last month, Lindell lost the case, which was argued before Wang. He was ordered to pay Eric Coomer, a former employee of Denver-based Dominion Voting Systems, more than $2 million after claiming Coomer and Dominion used election equipment to flip votes to former President Joe Biden.

The financial sanctions, and reputational damage, for the two lawyers are a stark reminder for attorneys who, like many others, are increasingly using artificial intelligence in their work, according to Maura Grossman, a professor at the University of Waterloo’s David R. Cheriton School of Computer Science and an adjunct law professor at York University’s Osgoode Hall Law School.

Grossman said the $3,000 fines “in the scheme of things was reasonably light, given these were not unsophisticated lawyers who just really wouldn’t know better. The kind of errors that were made here … were egregious.”

There have been a host of high-profile cases where the use of generative AI has gone wrong for lawyers and others filing legal documents, Grossman said. It has become a familiar trend in courtrooms across the country: lawyers sanctioned for submitting motions and other court filings filled with citations to cases that were not real but were invented by generative AI.

Damien Charlotin tracks court cases from across the world where generative AI produced hallucinated content and where a court or tribunal specifically levied warnings or other punishments. There are 206 cases identified as of Thursday — and that’s only since the spring, he told NPR. There were very few cases before April, he said, but for months since there have been cases “popping up every day.”

Charlotin’s database doesn’t cover every single case where there is a hallucination. But he said, “I suspect there are many, many, many more, but just a lot of courts and parties prefer not to address it because it’s very embarrassing for everyone involved.”

What went wrong in the MyPillow filing

The $3,000 fine for each attorney, Judge Wang wrote in her order this week, is “the least severe sanction adequate to deter and punish defense counsel in this instance.”

The judge wrote that the two attorneys didn’t provide any proper explanation of how these mistakes happened, “most egregiously, citation of cases that do not exist.”

Wang also said Kachouroff and DeMaster were not forthcoming when questioned about whether the motion was generated using artificial intelligence.

Kachouroff, in response, said in court documents that it was DeMaster who “mistakenly filed” a draft version of this filing rather than the right copy that was more carefully edited and didn’t include hallucinated cases.

But Wang wasn’t persuaded that the submission of the filing was an “inadvertent error.” In fact, she called out Kachouroff for not being honest when she questioned him.

“Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence,” Wang wrote.

Grossman advised other lawyers who find themselves in the same position as Kachouroff to not attempt to cover it up, and fess up to the judge as soon as possible.

“You are likely to get a harsher penalty if you don’t come clean,” she said.

An illustration picture shows ChatGPT artificial intelligence software, which generates human-like conversation, in February 2023 in Lierde, Belgium. Experts say AI can be incredibly useful for lawyers — they just have to verify their work.

Nicolas Maeterlinck/BELGA MAG/AFP via Getty Images

Trust and verify

Charlotin has found three main issues when lawyers, or others, use AI to file court documents. The first is fake cases created, or hallucinated, by AI chatbots.

The second is when AI invents a fake quote from a real case.

The third is harder to spot, he said: the citation and case name are correct, but the legal argument being cited is not actually supported by the case that is sourced.

This case involving the MyPillow lawyers is just a microcosm of a growing dilemma: how courts and lawyers can strike a balance between welcoming life-changing technology and using it responsibly. The use of AI is growing faster than authorities can build guardrails around it.

It’s even being used to present evidence in court, Grossman said, and to provide victim impact statements.

Earlier this year, a judge on a New York state appeals court was furious after a plaintiff, representing himself, tried to use a younger, more handsome AI-generated avatar to argue his case for him, CNN reported. That was swiftly shut down.

Despite the cautionary tales that make headlines, both Grossman and Charlotin view AI as an incredibly useful tool for lawyers and one they predict will be used in court more, not less.

Rules over how best to use AI differ from one jurisdiction to the next. Judges have created their own standards, requiring lawyers and those representing themselves in court to disclose when AI has been used. In a few instances, judges in North Carolina, Ohio, Illinois and Montana have established various prohibitions on the use of AI in their courtrooms, according to a database created by the law firm Ropes & Gray.

The American Bar Association, the national representative of the legal profession, issued its first ethical guidance on the use of AI last year. The organization warned that because these tools “are subject to mistakes, lawyers’ uncritical reliance on content created by a [generative artificial intelligence] tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties.”

It continued, “Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation …”

The Advisory Committee on Evidence Rules, the group responsible for studying and recommending changes to the national rules of evidence for federal courts, has been slow to act and is still working on amendments for the use of AI for evidence.

In the meantime, Grossman has this suggestion for anyone who uses AI: “Trust nothing, verify everything.”


