

Microsoft Research Ranks Top 20 Jobs Most Exposed To AI



Do you think your job will be spared from disruptions by AI?

Microsoft has ranked professions according to how much they seem to overlap with the current capabilities of artificial intelligence, and the list is going viral as people speculate about which jobs are “most at risk” or “most secure” from AI.

As one TikToker put it, “My dishwasher job is safe. Thank God.”

But the truth is more complicated.

In the new report, Microsoft researchers analyzed 200,000 anonymized conversations with Microsoft’s AI-powered assistant Copilot, along with how well the assistant performed, and compared the results with occupational data to see which professional tasks have the most crossover with AI’s capabilities. Using worker surveys, the researchers created an “AI applicability score”; jobs with a higher score were the most likely to be impacted by AI.
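The report’s exact scoring formula isn’t spelled out in this article, but the basic idea can be illustrated with a minimal sketch: treat each occupation as a set of work tasks, estimate for each task how often AI assistance applies and how often it actually helps, and average those task-level estimates into an occupation-level score. The Python below is an illustration under those assumptions; the task names, numbers and equal weighting are invented for the example and are not taken from the Microsoft study.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    coverage: float    # share of observed AI conversations touching this task (0 to 1)
    completion: float  # share of those where the AI assisted successfully (0 to 1)

def applicability_score(tasks: list[Task]) -> float:
    # Average task-level overlap: a higher score means more of the occupation's
    # tasks look like things an AI chatbot can currently help with.
    if not tasks:
        return 0.0
    return sum(t.coverage * t.completion for t in tasks) / len(tasks)

# Hypothetical occupations with made-up task estimates
translator = [Task("translate documents", 0.90, 0.85), Task("proofread output", 0.70, 0.80)]
dishwasher = [Task("wash dishes", 0.00, 0.00), Task("restock clean ware", 0.05, 0.20)]

print(f"Translator: {applicability_score(translator):.2f}")  # high overlap
print(f"Dishwasher: {applicability_score(dishwasher):.2f}")  # near zero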

But if your job is on this list, don’t fret: It doesn’t mean these jobs are going to become obsolete anytime soon.

“Our study explores which job categories can productively use AI chatbots; it does not provide evidence that AI can replace jobs,” Kiran Tomlinson, the lead study author and senior Microsoft researcher, told HuffPost in an emailed statement.

But the professions on this list do offer clues as to which industries are facing the greatest upheaval from AI.

Interpreters, translators and historians topped the list — but one expert interpreter thinks the study misunderstands their job.

Jobs in the trades had the least AI applicability, while jobs involving written communication had among the most.

The Microsoft study suggests that having a college degree will not protect you from your job getting upended by AI. “In terms of education requirements, we find higher AI applicability for occupations requiring a bachelor’s degree than occupations with lower requirements,” the study stated.

People who often sit at desks typing away on computers, also known as “knowledge workers,” are the ones whose jobs top the list. They include interpreters, translators, historians and writers.

“Our research shows that AI supports many tasks, particularly those involving research, writing and communication, but does not indicate it can fully perform any single occupation,” Tomlinson said.

Here are the top 20 jobs with the highest AI overlap, according to Microsoft research:

Interpreters and translators
Historians
Passenger attendants
Sales representatives of services
Writers and authors
Customer service representatives
Computer numerical control tool programmers
Telephone operators
Ticket agents and travel clerks
Broadcast announcers and radio DJs
Brokerage clerks
Farm and home management educators
Telemarketers
Concierges
Political scientists
News analysts, reporters, journalists
Mathematicians
Technical writers
Proofreaders and copy markers
Hosts and hostesses

The work of interpreters and translators had the highest overlap with current AI chatbot capabilities, but an expert interpreter said the study simplifies the demands of those jobs.

The study findings give “a clinical answer based on data, and it’s not at all taking into account … the complexity of what language is all about,” said Bridget Hylak, a court-certified interpreter and current administrator of the American Translators Association’s language technology division.

For one, interpreting and translation are two distinct professions, Hylak said, and “we see much more longevity with a traditional interpreter role for the time being than we would with traditional translators.” That’s because translators focus on written communication, but interpreters have to do real-time, high-stakes interpretations of human interactions in medicine, law and foreign policy in which a mistake could “cause a war,” Hylak noted.

Hylak said AI-backed tools like Google Translate can help translate the lowest-stakes communications, like letters to friends, “but an official document where the stakes are high, where life or liberty is on the line, or someone’s health is on the line, those kind of things really do need a human in the loop.”

In other words, an AI chatbot cannot replace the knowledge and relationship-building skills of an interpreter when they matter most, like in a hospital or courtroom: “These are the kinds of things people don’t want to take a chance on, don’t want to get sued,” Hylak said.

Top 40 Jobs That Chatbot AI Can’t Yet Do Well

Although many jobs are currently facing AI disruption, there are still many professions that require skills only a human can provide.

Physically demanding jobs that require manual labor and communicating with people face-to-face had lower AI applicability scores, according to the Microsoft report.

People in the trades who operate heavy machinery, like dredge operators, overall had jobs that don’t overlap as much with what AI can currently do. Likewise for many health care and service workers, like massage therapists and housekeepers.

Here is the list of 40 jobs with the least AI overlap, according to the study:

Phlebotomists
Nursing assistants
Hazardous materials removal workers
Helpers (painters and plasterers)
Embalmers
Plant and systems operators
Oral and maxillofacial surgeons
Automotive glass installers and repairers
Ship engineers
Tire repairers and changers
Prosthodontists
Helpers (production workers)
Highway maintenance workers
Medical equipment preparers
Packaging and filling machine operators
Machine feeders and offbearers
Dishwashers
Cement masons and concrete finishers
Supervisors of firefighters
Industrial truck and tractor operators
Ophthalmic medical technicians
Massage therapists
Surgical assistants
Tire builders
Roofer helpers
Gas compressor and gas pumping station operators
Roofers
Roustabouts in the oil and gas industry
Maids and housekeeping cleaners
Paving, surfacing and tamping equipment operators
Logging equipment operators
Motorboat operators
Orderlies
Floor sanders and finishers
Pile driver operators
Rail-track laying and maintenance equipment operators
Foundry mold and coremakers
Water treatment plant and system operators
Bridge and lock tenders
Dredge operators

But before you consider these jobs “safe” from AI’s encroaching influence, know that artificial intelligence is coming for every industry. The anxiety over losing your job to AI remains very real, even if that’s not exactly what the Microsoft research concluded.

In a report released this week by outplacement firm Challenger, Gray & Christmas, U.S.-based employers cited the adoption of generative AI technology as the reason for more than 10,000 job cuts this year.

“Everyone inevitably will be impacted by AI use in logistics and commerce and medicine and law,” Hylak said. “The speed with which we adapt, with which we educate and train people to use it properly, is really going to be key.”

She used her own profession as an example: “It’s only by utilizing these technologies that we will be able to stay in business, any one of us, if we’re serious as a linguist.”







Albania appoints world’s first AI-made minister – POLITICO



The pro-Brexit politician is set to visit Tirana to settle a debate with the country’s prime minister over how many Albanians are in U.K. prisons, but what else could be on his itinerary while he’s there?










AI & Elections – McCourt School – Public Policy



How can artificial intelligence transform election administration while preserving public trust? This spring, the McCourt School of Public Policy partnered with The Elections Group and Discourse Labs to tackle this critical question at a groundbreaking workshop.

The day-long convening brought together election officials, researchers, technology experts and civil society leaders to chart a responsible path forward for AI in election administration. Participants focused particularly on how AI could revolutionize voter-centric communications—from streamlining information delivery to enhancing accessibility.

The discussions revealed both promising opportunities for public service innovation and legitimate concerns about maintaining institutional trust in our democratic processes. Workshop participants developed a comprehensive set of findings and actionable recommendations that could shape the future of election technology.

Expert Insights from Georgetown’s Leading Researchers

To unpack the workshop’s key insights, we spoke with two McCourt School experts who are at the forefront of this intersection between technology and democracy:

Ioannis Ziogas is an Assistant Teaching Professor at the McCourt School of Public Policy, an Assistant Research Professor at the Massive Data Institute, and Associate Director of the Data Science for Public Policy program. His work bridges the gap between cutting-edge data science and real-world policy challenges.


Lia Merivaki is an Associate Teaching Professor at the McCourt School of Public Policy and Associate Research Professor at the Massive Data Institute, where she focuses on the practical applications of technology in democratic governance, particularly election integrity and voter confidence.

Together, they address five essential questions about AI’s role in election administration—and what it means for voters, officials and democracy itself.

Q1

How is AI currently being used in election administration, and are there particular jurisdictions that are leading in adoption?

Ioannis: When we talk about AI in elections, we need to clarify that it is not a single technology but a family of approaches, from predictive analytics to natural language processing to generative AI. In practice, election officials are already using generative AI routinely for communication purposes such as drafting social media posts and shaping public-facing messages. These efforts aim to increase trust in the election process and make information more accessible. Some offices have even experimented with using generative AI to design infographics, though this can be tricky due to hallucinations or inaccuracies. More recently, local election officials are exploring AI to streamline staff training, operations, or to summarize complex legal documents.

Our work focuses on alerting election officials to the limitations of generative AI, such as model drift and bias propagation. A key distinction we emphasize in our research is between AI as a backend administrative tool (which voters may never see) and AI as a direct interface with the public (where voter trust and transparency become central). We believe that generative AI tools can be used in both contexts, provided that there is awareness of the challenges and limitations.

Lia: Election officials have been familiar with AI for quite some time, primarily in learning how to mitigate AI-generated misinformation. A leader in this space has been Arizona Secretary of State Adrian Fontes, who conducted a first-of-its-kind deepfake detection tabletop exercise in preparation for the 2024 election cycle.

We’ve had conversations with election officials in California, New Mexico, North Carolina, Florida, Maryland and others whom we call early adopters, with many more being ‘AI curious.’

Q2

Security is always a big concern when it comes to the use of AI. Talk about what risks are introduced by bringing AI into election administration, and conversely, how AI can help detect and prevent any type of election interference and voter fraud.

Ioannis: From my perspective, the core security challenge is not only technical but also about privacy and trust. AI systems, by design, rely on large volumes of data. In election contexts, this often includes sensitive voter information. Even when anonymized, the use of personal data raises concerns about surveillance, profiling, or accidental disclosure. Another risk relates to delegating sensitive tasks to AI, which can render election systems vulnerable to adversarial attacks or hidden biases baked into the models. 

At the same time, AI can support security: machine learning can detect coordinated online influence campaigns, identify anomalous traffic to election websites, or flag irregularities that warrant further human review. In short, I view AI as both a potential shield and a potential vulnerability, which is why careful governance and transparency are essential. That is why I believe it is critical to pair AI adoption with clear safeguards, training and guidance, so that officials can use these tools confidently and responsibly.

Lia: A potential risk we are trying to mitigate is the impact on voter trust of relying on AI for important administrative tasks. For instance, voters who call or email their election official and expect to talk with them, but instead interact with a chatbot, may feel disappointed and, in turn, trust neither the information nor the election official. There is also some evidence that voters do not trust information that is generated with AI, particularly when its use is disclosed.

As for detecting and preventing any irregularities, over-reliance on AI can be problematic and can lead to disenfranchisement. To illustrate, AI can help identify individuals in voter records whose information is missing, which would seemingly make the process of maintaining accurate lists more efficient. The election office can send a letter to these individuals to verify they are citizens and ask for their information to be updated. This seems like a sound practice; however, it violates federal law, and it risks making eligible voters feel intimidated or having their eligibility challenged by bad actors. The reality is that maintaining voter records is a highly complex process, and data entry errors are very common. Deploying AI models to substitute for existing practices in election administration, such as voter list maintenance, with the goal of detecting whether non-citizens register or whether dead voters remain on voter rolls, can harm voters and undermine trust.

Q3

What are the biggest barriers to AI adoption in election administration – technical, financial, or political?

Lia: There are significant skills and knowledge gaps among election officials when it comes to utilizing technology generally, and we see such gaps with AI adoption, which is not surprising. Aside from technical barriers, election offices are under-resourced, especially at the local jurisdiction level. We observe that policies around AI adoption in public administration generally, and election administration specifically, are sparse at the moment.

While the election community invested a lot of resources to safeguard the election infrastructure against the threats of AI, we are not seeing – yet – a proportional effort to educate and prepare election officials on how to use AI to improve elections. To better understand the landscape of AI adoption and how to best support the election community, we hosted an exploratory workshop at McCourt in April 2025, in collaboration with The Elections Group and Discourse Labs. In this workshop, we brought together election officials, industry, civil society leaders and other practitioners to discuss how AI tools are used by election officials, what technical barriers exist and how to move forward with designing policies on ethical and responsible use of AI in election administration. Through this workshop, we identified a list of priorities which require close collaboration among the election community, academia, civil society and industry, to ensure that the adoption of AI is done responsibly, ethically and efficiently, without negatively affecting the voter experience.

Ioannis: I would highlight that barriers are not just about resources but also about institutional design. Election officials often work in environments of high political scrutiny but low budgets and limited technical staff. Introducing AI tools into that context requires financial investment and clear guidance on how to evaluate these systems: what counts as success, how to measure error rates and how to align tools with federal and state regulations. Beyond that, there is a cultural barrier. Many election officials are understandably cautious; they’ve spent the past decade defending democracy against disinformation and cyber threats, so embracing new technologies requires trust and confidence that AI will not introduce new risks. That is why partnerships with universities and nonpartisan civil-society groups are critical: they provide a space to pilot ideas, build capacity, and translate research into practice.

Our two priorities are to help narrow the skills gap and build frameworks for ethical and responsible AI use. At McCourt, we’re collaborating with Arizona State University’s Mechanics of Democracy Lab, which is developing training materials and custom AI products for election officials. Drawing on our background in AI and elections, we aim to provide election officials with a practical resource that maps out both the risks and the potential of these tools, and that helps them identify ideal use cases where AI can enhance efficiency without compromising trust or voter experience.

Q4

Looking ahead, what emerging AI technologies could transform election administration in the next 5-10 years?

Lia: It’s hard to predict, really. At the moment we are seeing high interest from vendors and election officials in integrating AI into elections. Concerns about security and privacy will undoubtedly shape the discussion about what AI can do for the election infrastructure. We may well see a liberal approach to using AI technologies to communicate with voters, produce training materials and translate election materials into non-English languages, among other uses. That said, elections are run by humans, and maintaining public trust relies on having “humans in the – elections – loop.” This, coupled with ongoing debates about how AI should or should not be regulated, may result in more guardrails and restrictions over time.

Ioannis: One promising direction is multimodal AI: systems that process text, audio and images together. For election officials, this could mean automatically generating plain-language guides, sign-language translations, or sample audio ballots to improve accessibility. But these same tools can amplify risks if their limitations are not understood. For that reason, any adoption will need to be coupled with auditing, transparency and education for election staff, so they view AI as a supportive tool rather than a replacement platform or a black box.

Q5

What guidelines or regulatory frameworks are needed to govern AI use in elections?

Ioannis: We urgently need a baseline framework that establishes what is permissible, what requires disclosure, and what is off-limits. Today, election officials are experimenting with AI in a largely unregulated space, and they are eager for guidance. A responsible framework should include at least three elements: a) transparency: voters should know when AI-generated materials are used in communications; b) accountability: human oversight should retain the final authority, with AI serving only as a support; and c) auditing: independent experts must be able to test and evaluate these tools for accuracy, bias and security.





AI Transformation in NHS Faces Key Challenges: Study



Implementing artificial intelligence (AI) in NHS hospitals is far harder than initially anticipated, with complications around governance, harmonisation with old IT systems, finding the right AI tools and training staff, finds a major new UK study led by UCL researchers.

The authors of the study, published in The Lancet eClinicalMedicine, say the findings should provide timely and useful learning for the UK Government, whose recent 10-year NHS plan identifies digital transformation, including AI, as a key platform to improving the service and patient experience.

In 2023, NHS England launched a programme to introduce AI to help diagnose chest conditions, including lung cancer, across 66 NHS hospital trusts in England, backed by £21 million in funding. The trusts are grouped into 12 imaging diagnostic networks: these hospital networks mean more patients have access to specialist opinions. Key functions of these AI tools included prioritising critical cases for specialist review and supporting specialists’ decisions by highlighting abnormalities on scans.

Funded by the National Institute for Health and Care Research (NIHR), this research was conducted by a team from UCL, the Nuffield Trust and the University of Cambridge, analysing how the procurement and early deployment of the AI tools went. It is one of the first studies to analyse the real-world implementation of AI in healthcare.

Evidence from previous studies, mostly laboratory-based, suggested that AI might benefit diagnostic services by supporting decisions, improving detection accuracy, reducing errors and easing workforce burdens.

In this UCL-led study, the researchers reviewed how the new diagnostic tools were procured and set up through interviews with hospital staff and AI suppliers, identifying any pitfalls but also any factors that helped smooth the process.

They found that setting up the AI tools took longer than the programme’s leadership had anticipated. Contracting took between four and 10 months longer than planned, and by June 2025, 18 months after contracting was meant to be completed, a third (23 out of 66) of the hospital trusts were not yet using the tools in clinical practice.

Key challenges included engaging clinical staff who already had high workloads in the project, embedding the new technology in ageing and varied NHS IT systems across dozens of hospitals, and a general lack of understanding of, and scepticism about, AI in healthcare among staff.

The study also identified important factors that helped embed AI, including national programme leadership and local imaging networks sharing resources and expertise, high levels of commitment from hospital staff leading implementation, and dedicated project management.

The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope” and are recommending that NHS staff are trained in how AI can be used effectively and safely and that dedicated project management is used to implement schemes like this in the future.

First author Dr Angus Ramsay (UCL Department of Behavioural Science and Health) said: “In July ministers unveiled the Government’s 10-year plan for the NHS, of which a digital transformation is a key platform.

“Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.

“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.

“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals. Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.

“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.

“AI tools can offer valuable support for diagnostic services, but they may not address current healthcare service pressures as simply as policymakers may hope.”

The researchers conducted their evaluation between March and September last year, studying 10 of the participating networks and focusing in depth on six NHS trusts. They interviewed network teams, trust staff and AI suppliers, observed planning, governance and training and analysed relevant documents.

Some of the imaging networks and many of the hospital trusts within them were new to procuring and working with AI.

The problems involved in setting up the new tools varied – for example, in some cases those procuring the tools were overwhelmed by a huge amount of very technical information, increasing the likelihood of key details being missed. Consideration should be given to creating a national approved shortlist of potential suppliers to facilitate procurement at local level, the researchers said.

Another problem was an initial lack of enthusiasm among some NHS staff for the new technology in this early phase, with some more senior clinical staff raising concerns about AI making decisions without clinical input and about where accountability would lie in the event a condition was missed. The researchers found the training offered to staff did not address these issues sufficiently across the wider workforce, hence their call for early and ongoing training on future projects.

In contrast, however, the study team found the process of procurement was supported by advice from the national team and imaging networks learning from each other. The researchers also observed high levels of commitment and collaboration between local hospital teams (including clinicians and IT) working with AI supplier teams to progress implementation within hospitals.

Senior author Professor Naomi Fulop (UCL Department of Behavioural Science and Health) said: “In this project, each hospital selected AI tools for different reasons, such as focusing on X-ray or CT scanning, and purposes, such as to prioritise urgent cases for review or to identify potential symptoms.

“The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems and introducing any diagnostic tools that suit multiple hospitals is highly complex. These findings indicate AI might not be the silver bullet some have hoped for but the lessons from this study will help the NHS implement AI tools more effectively.”

Limitations

While the study has added to the very limited body of evidence on the implementation and use of AI in real-world settings, it focused on procurement and early deployment. The researchers are now studying the use of AI tools following early deployment when they have had a chance to become more embedded. Further, the researchers did not interview patients and carers and are therefore now conducting such interviews to address important gaps in knowledge about patient experiences and perspectives, as well as considerations of equity.



