AI Research

Microsoft Releases List of Jobs Most and Least Likely to Be Replaced by AI



Researchers at Microsoft tried to determine which precise jobs are most and least likely to be replaced by generative AI — and the results are bad news for anyone currently enjoying the perks of a cushy desk job.

As detailed in a yet-to-be-peer-reviewed paper, the Microsoft team analyzed a “dataset of 200k anonymized and privacy-scrubbed conversations between users and Microsoft Bing Copilot,” and found that the occupations most likely to be made obsolete by the tech involve “providing information and assistance, writing, teaching, and advising.”

The team used the data to come up with an “AI applicability score,” an effort to quantify just how vulnerable each given occupation is, taking into consideration how often AI is already being used there and how successful those efforts have been.

According to the analysis, jobs most likely to be replaced include translators, historians, sales reps, writers, authors, and customer service reps. Jobs that are the safest from AI automation, in contrast, include heavy machinery and motorboat operators, housekeepers, roofers, massage therapists, and dishwashers.

In other words, the sweeping takeaway was that lower-paying and manual labor-focused occupations are far less likely to be automated than occupations that suit the expertise of large language model-based AI chatbots.

However, we should take the results with a healthy grain of salt. For one, we should consider that Microsoft employees are incentivized to paint the technology in the best light by the company’s massive investments in the space, which could lead to overstating generative AI’s capabilities.

The researchers also warn that “our data do not indicate that AI is performing all of the work activities of any one occupation,” meaning that for many gigs, AI won’t be able to take over 100 percent of tasks.

Then there’s the fact that “different people use different LLMs for different purposes” and that the nature of many jobs isn’t perfectly represented in the data. That could explain why certain jobs, such as historians, authors, and political scientists, ended up with some of the highest AI applicability scores, despite greatly relying on human intuition and expertise, and having to work with incomplete or contradictory documentation.

That’s not to mention the tech’s propensity to hallucinate false claims, an inconvenient reality that hangs over the whole paper and the AI industry itself: even if the tech does end up replacing a lot of human jobs, it will likely do so by providing an inferior service that we’ll just have to learn to live with.

The team also cautioned — although again, remember Microsoft’s economic interests — that replacing jobs doesn’t necessarily mean that employment or wages in a sector will decline.

“Our study explores which job categories can productively use AI chatbots,” said Kiran Tomlinson, a Senior Researcher at Microsoft who worked on the research. “It introduces an AI applicability score that measures the overlap between AI capabilities and job tasks, highlighting where AI might change how work is done, not take away or replace jobs.”

“Our research shows that AI supports many tasks, particularly those involving research, writing, and communication, but does not indicate it can fully perform any single occupation,” he continued. “As AI adoption accelerates, it’s important that we continue to study and better understand its societal and economic impact.”

The researchers further noted in the paper that “our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive.” “Take the example of ATMs, which automated a core task of bank tellers, but led to an increase in the number of bank teller jobs as banks opened more branches at lower costs and tellers focused on more valuable relationship-building rather than processing deposits and withdrawals.”

It’s a common refrain among tech companies that generative AI will lead to the creation of new types of jobs, a convenient conclusion that neatly counters ongoing narratives of imminent job losses.

But not everybody in the industry is trying to soften the blow. Earlier this month, OpenAI CEO Sam Altman warned that entire job categories could be wiped out by AI, with “some areas” in the job market, such as customer support roles, being “just like totally, totally gone.”

And Elijah Clark, a CEO who advises other company leaders on how to make use of AI, recently told Gizmodo that “CEOs are extremely excited about the opportunities that AI brings.”

“As a CEO myself, I can tell you, I’m extremely excited about it,” he added. “I’ve laid off employees myself because of AI.”

In short, while the paper offers an interesting glimpse into how users are making use of AI chatbots such as Microsoft’s Bing Copilot, it’s difficult to get an accurate picture of what the job market will look like in the years to come.

“Exactly which new jobs emerge, and how old ones are reconstituted, is an important future research direction in the AI age,” the researchers wrote. “At the same time, the technology itself will continue to evolve; our measurement of AI applicability is only a snapshot in time.”

“Modernizing our understanding of workplace activities will be crucial as generative AI continues to change how work is done,” they concluded.

More on AI automation: Engineer: With AI, “Anybody Whose Job Is Done on a Computer All Day Is Over… It’s Just a Matter of Time”




Artificial Intelligence Stocks Rally as Nvidia, TSMC Gain on Oracle Growth Forecast



This article first appeared on GuruFocus.

Sep 11 – Oracle (NYSE:ORCL) projected its cloud infrastructure revenue will surge to $114 billion by fiscal 2030, a forecast that triggered strong gains across artificial intelligence-related stocks.

The company also outlined plans to spend $35 billion in capital expenditures by fiscal 2026 to expand its data center capacity.

Shares of Oracle soared 36% on Wednesday on the outlook, as investors bet on rising demand for GPU-based cloud services. Nvidia (NASDAQ:NVDA), which supplies most of the chips and systems for AI data centers, climbed 4%. Broadcom (NASDAQ:AVGO), a key networking and custom chip supplier, gained 10%.

Other chipmakers also advanced. Advanced Micro Devices (NASDAQ:AMD) added 2%, while Micron Technology (NASDAQ:MU) increased 4% on expectations for higher memory demand in AI servers. Taiwan Semiconductor Manufacturing Co. (NYSE:TSM), which produces chips for Nvidia and other AI players, rose more than 4% after reporting a 34% jump in August sales.

Server makers Super Micro Computer (NASDAQ:SMCI) and Dell Technologies (NYSE:DELL) each rose 2%, supported by their role in assembling Nvidia-powered systems. CoreWeave (NASDAQ:CRWV), an Oracle rival in the neo-cloud segment, advanced 17% as investors continued to bet on accelerating AI compute demand.




Oracle Health Deploys AI to Tackle $200B Administrative Challenge



Oracle Health introduced tools aimed at easing administrative healthcare burdens and costs.

The company’s new artificial intelligence-powered offerings are designed to simplify and lower the cost of processes such as prior authorizations, medical coding, claims processing and determining eligibility, according to a Thursday (Sept. 11) press release.

“Oracle Health is working to solve long-standing problems in healthcare with AI-powered solutions that simplify transactions between payers and providers,” Seema Verma, executive vice president and general manager, Oracle Health and Life Sciences, said in the release. “Our offerings can help minimize administrative complexity and waste to improve accuracy and reduce costs for both parties. With these capabilities, providers can better navigate payer-specific coverage, medical necessity and billing rules while enabling payers to lower administrative workloads by receiving more accurate claims from the start.”

Annual administrative costs tied to healthcare billing and insurance are estimated at roughly $200 billion, the release said. That figure continues to rise, largely due to the complexity of medical and financial processing rules and evolving payment models. Because those rules and models are time-consuming and inefficient to follow and adopt, many providers fall back on manual processes, which are prone to error.

The PYMNTS Intelligence report “Healthcare Payments Need Modernization to Drive Financial Health” found that healthcare’s lingering reliance on manual payment systems is proving to be a bottleneck for its financial health and operational efficiency.

The worldwide market for healthcare digital payments is forecast to increase at a compound annual growth rate of 19% between 2024 and 2030, indicating a shift and market opportunity for digital solutions, per the report.

The report also explored how these outdated systems strain revenues and create inefficiencies, contrasting the sector’s slower adoption with other industries that have embraced digital payment tools.

“On the patient side, the benefits are equally compelling,” PYMNTS wrote in June. “Digital transactions offer hassle-free experiences, which are a driver for patient satisfaction and, ultimately, patient retention.”

The research found that 67% of executives and decision-makers in healthcare payer organizations said that their firms’ manual payment platforms were actively hindering efficiency. In addition, 74% said these platforms put their organizations at greater risk for regulatory fines and penalties.




California Lawmakers Advance Suite of AI Bills



As the California Legislature’s 2025 session draws to a close, lawmakers have advanced over a dozen AI bills to the final stages of the legislative process, setting the stage for a potential showdown with Governor Gavin Newsom (D).  The AI bills, some of which have already passed both chambers, reflect recent trends in state AI regulation nationwide, including AI consumer protection frameworks, guardrails for the use of AI in employment and healthcare, frontier model safety requirements, and chatbot safeguards. 

AI Consumer Protection.  California lawmakers are advancing several bills that would impose disclosure, testing, documentation, and other governance requirements for AI systems used to make or assist in decisions that impact consumers.  Like 2024’s Colorado AI Act, California’s Automated Decisions Safety Act (AB 1018) would adopt a cross-sector approach, imposing duties and requirements on developers and deployers of “automated decision systems” (“ADS”) used to make or facilitate employment, education, housing, healthcare, or other “consequential decisions” affecting natural persons.  The bill would require ADS developers and deployers to conduct impact assessments and third-party audits and comply with various disclosure and documentation requirements, and would establish consumer notice, correction, and appeal rights. 

Employment and Healthcare.  SB 7 would establish worker notice, access, and correction rights, prohibited uses, and human oversight requirements for employers that use ADS for employment-related decisions.  Other bills would impose similar restrictions on AI used in healthcare contexts.  AB 489, which passed both chambers on September 8, would prohibit representations that indicate that an AI system possesses a healthcare license or can provide professional healthcare advice.

Frontier Model Safety.  Following the 2024 passage—and Governor Newsom’s subsequent veto—of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), State Senator Scott Wiener (D-San Francisco) has led a renewed push for frontier model safety with his Transparency in Frontier AI Act (SB 53).  SB 53 would require large developers of frontier models to implement and publish a “frontier AI framework” to mitigate potential public safety harms arising from frontier model development, in addition to transparency reports and incident reporting requirements.  Unlike SB 1047, SB 53 would not require developers to implement a “full shutdown” capability for frontier models, conduct third-party audits, or meet a duty of reasonable care to prevent public safety harms.  Moreover, while SB 1047 would have established civil penalties of up to 10 percent of the cost of computing power used to train any developer’s frontier model, SB 53 would establish a uniform penalty of up to $1 million per violation of any of its frontier AI transparency provisions and would only apply to developers with annual revenues above $500 million.  Although its likelihood of passage remains uncertain, SB 53 builds on several recent state efforts to establish frontier model safeguards, including the passage of the Responsible AI Safety & Education (“RAISE”) Act in New York in May and the release of a final report on frontier AI policy by California’s Frontier AI Working Group in June.

Chatbots.  Various other California bills would establish safeguards for individuals, and particularly children, who interact with AI chatbots or generative AI systems.  The Leading Ethical AI Development (“LEAD”) for Kids Act (AB 1064), which passed the Senate on September 10 and could receive a vote in the Assembly as soon as this week, would prohibit individuals or businesses from providing “companion chatbots”—generative AI systems that simulate sustained humanlike relationships through personalization, unprompted questions, and ongoing dialogue with users—to children if the companion chatbot is “foreseeably capable” of engaging in certain activities, including encouraging a child to engage in self-harm, violence, or illegal activity, offering unlicensed mental health therapy to a child, or prioritizing user validation and engagement over child safety, among other prohibited capabilities.  Another AI chatbot safety bill, SB 243, passed the Assembly on September 10 and awaits final passage in the Senate.  SB 243 would require companion chatbot operators to issue recurring disclosures to minor users, implement protocols to prevent the generation of content related to suicide or self-harm, and disclose companion chatbot protocols and other information to the state.

The bills above reflect only some of the AI legislation pending before California lawmakers ahead of their September 12 deadline for passage.  Other AI bills have already passed both chambers and now head to the Governor, including AB 316, which would prohibit AI developers or deployers from asserting that AI “autonomously” caused harm as a legal defense, and California SB 524, which would establish restrictions on the use of AI by law enforcement agencies.  Governor Newsom will have until October 12 to sign or veto these and any other AI bills that reach his desk.


