AI Insights
Computer says… Which jobs are safest from AI?

Google chief executive Sundar Pichai says ramping up artificial intelligence capabilities in products is resulting in people using them more and in increased demand for its cloud computing services – Copyright AFP Glenn CHAPMAN
Which jobs are most at risk from the advance of artificial intelligence, and which are best protected? It is a commonly discussed question these days, and across different surveys on the subject there are patterns of both similarity and difference.
According to a new review, emergency medical technicians rank as the most AI-resistant occupation, owing to the high degree of public interaction the job requires. Healthcare dominates the rankings overall: from the perspective of the U.S. economy, three medical professions sit among the top five most secure jobs.
With artificial intelligence transforming workplaces at a rapid pace, over 73% of U.S. companies now use AI in at least one business area. A new study by Eskimoz analysed occupational data across multiple job categories to identify which careers remain most resistant to AI replacement.
The research evaluated each occupation using two factors: the percentage of the job that requires interaction with the general public, and automation risk scores from industry assessments. Jobs were ranked by an AI Resistance Score that combines inverted automation risk (lower risk yields a higher score) with the public interaction percentage, normalized to a 1-100 scale to identify positions where human skills remain irreplaceable; a rough sketch of the calculation follows the table below.
The most AI-resistant jobs?
Occupation | Must interact with the general public | Automation risk | AI Resistance Score
Emergency medical technicians | 100.0% | 7% | 100
Healthcare social workers | 100.0% | 11% | 98
Lawyers | 100.0% | 29% | 86
Medical and health services managers | 89.8% | 26% | 82
First-line supervisors of construction trades and extraction workers | 78.5% | 17% | 80
Human resources managers | 82.9% | 26% | 78
General and operations managers | 80.3% | 36% | 70
Maintenance and repair workers, general | 71.6% | 35% | 65
First-line supervisors of office and administrative support workers | 81.6% | 50% | 62
Training and development specialists | 57.8% | 29% | 60
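The study does not publish its exact formula, so the following is a rough illustration only: a minimal Python sketch of how such a score could be computed. The equal weighting of the two factors and the min-max normalization bounds are assumptions, not Eskimoz’s published method.

```python
def ai_resistance_score(interaction_pct: float, risk_pct: float,
                        raw_min: float, raw_max: float) -> float:
    """Combine inverted automation risk with the public-interaction share,
    then min-max normalize the result onto a 1-100 scale."""
    raw = (100.0 - risk_pct) + interaction_pct  # lower risk -> higher raw score
    return 1 + 99 * (raw - raw_min) / (raw_max - raw_min)

# Hypothetical bounds; the study would derive them from its full dataset.
print(round(ai_resistance_score(100.0, 7.0, raw_min=60.0, raw_max=193.0)))  # 100 (EMTs)
```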
Training and development specialists | 57.80% | 29% | 60 |
Based on the above assessment, emergency medical technicians rank first with the highest AI Resistance Score of 100. EMTs require 100% public interaction in critical medical emergencies where human judgment proves irreplaceable, leading to only 7% automation risk, the lowest among all occupations studied.
Healthcare social workers secure second place with a score of 98. These professionals require 100% public interaction, providing emotional support and crisis intervention that demands genuine human involvement. This results in a low 11% automation risk.
Lawyers rank third with an AI Resistance Score of 86. Legal professionals maintain 100% public engagement through client consultations and courtroom advocacy, facing only 29% automation risk.
Medical and health services managers take fourth place, scoring 81.82 in AI resistance. These professionals show 89.8% public engagement while coordinating between patients, medical staff, and administrators. Healthcare management’s interpersonal skill requirements result in a 26% automation risk.
First-line supervisors of construction trades rank fifth, following medical professionals closely in AI resistance score. These supervisors post a 78.5% public interaction rate while managing teams and ensuring safety compliance. Their on-site leadership and problem-solving abilities lead to just 17% automation risk.
Human resources managers secure sixth place with an AI Resistance Score of 78. These professionals demonstrate 82.9% public interaction through employee relations and conflict resolution, and HR management’s focus on workplace dynamics puts them at a low 26% automation risk, similar to the medical professions.
General and operations managers rank seventh, scoring 70. With 80.3% public interaction, these leaders coordinate across departments and manage stakeholder relationships. Their role puts them at 36% automation risk.
Maintenance and repair workers take eighth place with a score of 65. These professionals show 71.6% public interaction while diagnosing problems and explaining repairs to customers. Practical problem-solving and customer service combine to resist automation, resulting in a 35% automation risk.
First-line supervisors of office workers rank ninth with a 62-point AI resistance score. These supervisors maintain 81.6% public engagement through team management. Human elements in motivation and conflict resolution remain substantial despite a 50% automation risk – the highest in the top 10.
Training and development specialists complete the top 10, coming just behind first-line supervisors. While showing 57.8% public interaction, these professionals create personalized learning experiences. Understanding individual needs and inspiring professional growth remains human work, with a 29% automation risk.
AI Insights
DVIDS – News – ARMY INTELLIGENCE

by Holly Comanse
Artificial intelligence (AI) may be trending, but it’s nothing new. Alan Turing, the English mathematician often called the “father of theoretical computer science,” formalized the concept of the algorithm as a means of solving mathematical problems and proposed a test of machine intelligence. Decades later, the first AI chatbot, ELIZA, was released in 1966. However, it was the generative pre-trained transformer (GPT) model, first introduced in 2018, that provided the foundation for modern AI chatbots.
Both AI and chatbots continue to evolve with many unknown variables related to accuracy and ethical concerns, such as privacy and bias. Job security and the environmental impact of AI are other points of contention. While the unknown may be distressing, it’s also an opportunity to adjust and adapt to an evolving era.
The 1962 television cartoon “The Jetsons” presented the concept of a futuristic family in the year 2062, with high-tech characters that included Rosie the Robot, the family’s maid, and an intelligent robotic supervisor named Uniblab. Such a prediction isn’t much of a stretch anymore. Robotic vacuums, which can autonomously clean floors, and other personal AI assistant devices are now commonplace in the American household. A recent report valued the global smart home market at $84.5 billion in 2024 and predicted it would reach $116.4 billion by 2029.
ChatGPT, released in 2022, is a widely used AI chat-based application with 400 million active users. It can browse the internet on its own, allowing for more up-to-date results, but it is designed for general conversational use rather than industry-specific information. The popularity and limitations of ChatGPT led some organizations to develop their own AI chatbots with the ability to reflect current events, protect sensitive information and deliver search results for company-specific topics.
One of those organizations is the U.S. Army, which now has an Army-specific chatbot known as CamoGPT. Currently boasting 75,000 users, CamoGPT started development in the fall of 2023 and was first deployed in the spring of 2024. Live data is important for the Army and other companies that choose to implement their own AI chat-based applications. CamoGPT does not currently have access to the internet because it is still in a prototype phase, but connecting to the net is a goal. Another goal for the platform is to accurately respond to questions that involve current statistics and high-stakes information. What’s more, CamoGPT can process classified data on SIPRNet and unclassified information on NIPRNet.
THE MORE THE MERRIER
Large language models (LLMs) are the AI models that power chatbots, able to understand and generate human language based on inputs. LLMs undergo extensive training, require copious amounts of data and can be tedious to create, yet once trained they can process and respond to information much as a human would. Initially, training examples must be curated and labeled manually by human beings until the model establishes patterns, at which point the computer can take over. Keeping facts up to date is a daunting task when considering the breadth of data from around the world that AI is expected to process.
Aidan Doyle, a data engineer at the Army Artificial Intelligence Integration Center (AI2C), works on a team of three active-duty service members, including another data engineer and a data analyst, as well as four contracted software developers and one contracted technical team lead. “It’s a small team, roles are fluid, [and] everyone contributes code to Camo[GPT],” Doyle said.
Doyle’s team is working to transition CamoGPT into a program of record and put more focus into developing an Army-specific LLM. “An Army-specific LLM would perform much better at recognizing Army acronyms and providing recommendations founded in Army doctrine,” Doyle said. “Our team does not train LLMs; we simply host published, open-source models that have been trained by companies like Meta, Google and Mistral.”
The process of training LLMs involves pre-training the model by showing it as many examples of natural language as possible from across the internet. Everything from greetings to colloquialisms must be input so it can mimic human conversation. Sometimes, supervised learning is necessary for specific information during the fine-tuning step. Then the model generates different answers to questions, and humans evaluate and annotate the model responses and flag problems that arise. Once preferred responses are identified, developers adjust the model accordingly. This is a post-training step called reinforcement learning with human feedback, or alignment. Finally, the model generates both the questions and answers itself in the self-play step. When the model is ready, it is deployed.
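As a toy illustration of that sequence, here is a sketch in Python. The stub functions are placeholders standing in for processes that in reality consume enormous amounts of data and compute; this is not a real training API.

```python
# Schematic of the training stages described above, with illustrative stubs.

def pretrain(corpus):
    """Stage 1: learn language patterns from raw text at scale."""
    return {"stage": "pretrained", "examples_seen": len(corpus)}

def fine_tune(model, labeled_qa):
    """Stage 2: supervised fine-tuning on curated question-answer pairs."""
    model["stage"] = "fine-tuned"
    return model

def align(model, human_preferences):
    """Stage 3: reinforcement learning with human feedback (alignment)."""
    model["stage"] = "aligned"
    return model

def self_play(model):
    """Stage 4: the model generates and answers its own questions."""
    model["stage"] = "ready-to-deploy"
    return model

model = pretrain(["greetings", "colloquialisms", "web text"])
model = fine_tune(model, [("What is NIPRNet?", "An unclassified DoD network.")])
model = align(model, ["rater-preferred responses"])
model = self_play(model)
print(model)  # {'stage': 'ready-to-deploy', 'examples_seen': 3}
```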
A LITTLE TOO CREATIVE
The use of AI in creative fields faces significant challenges, such as the potential for plagiarism and inaccuracy. Artists spend a lot of time creating work that can easily be duplicated by AI without giving the artist any credit. AI can also repackage copyrighted material. It can be tough to track down the original source for something when it is generated with AI.
AI can, and sometimes does, introduce inaccuracies. When AI fabricates information unintentionally, it is called a hallucination: the model draws a connection between patterns it recognizes and passes it off as truth in a different set of circumstances. Facts presented by AI should always be verified, and tools such as CamoGPT often carry a disclaimer that hallucinations are possible. Journalists and content creators should be cautious with AI use, as they run the risk of spreading misinformation.
Images and videos can also be intentionally manipulated. For example, AI-generated content on social media can be posted for shock value. Sometimes it’s easy to spot when something has been created by AI, and you can take it with a grain of salt. In other cases, social media trends can go viral before they are vetted.
For these reasons, the Army decided to implement CamoGPT. Not only can it already process classified information discreetly, but ongoing development also aims to keep errors in its responses to a minimum.
CONCLUSION
It’s becoming clear that analog is on the way out. Even established search engines such as Google have started to prioritize AI-generated summaries in their results. Utilizing technology like AI, LLMs and other chatbots can save time and automate tedious tasks, which increases productivity and efficiency. CamoGPT is still evolving, and the team at AI2C is working hard to improve its accuracy and abilities. Other AI systems within the Army are still being developed, but the potential is limitless. While we may not be living in the future that the creators of “The Jetsons” predicted, we’re getting closer. In another 37 years, when 2062 rolls around, we may all be using flying cars – and those vehicles just might drive themselves with the help of AI.
For more information, go to https://www.camogpt.army.mil/camogpt.
HOLLY COMANSE provides contract support to the U.S. Army Acquisition Support Center from Honolulu, Hawaii, as a writer and editor for Army AL&T magazine and TMGL, LLC. She previously served as a content moderator and data specialist training artificial intelligence for a news app. She holds a B.A. in journalism from the University of Nevada, Reno.
AI Insights
Big tech is offering AI tools to California students. Will it save jobs?
By Adam Echelman, CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.
As artificial intelligence replaces entry-level jobs, California’s universities and community colleges are offering a glimmer of hope for students: free AI training that will teach them to master the new technology.
“You’re seeing in certain coding spaces significant declines in hiring for obvious reasons,” Gov. Gavin Newsom said Thursday during a press conference from the seventh floor of Google’s San Francisco office.
Flanked by leadership from California’s higher education systems, he called attention to the recent layoffs at Microsoft, at Google’s parent company, Alphabet, and at Salesforce, whose namesake tower just a few blocks away is home to what is still the city’s largest private employer.
Now, some of those companies — including Google and Microsoft — will offer a suite of AI resources for free to California schools and universities. In return, the companies could gain access to millions of new users.
The state’s community colleges and its California State University campuses are “the backbone of our workforce and economic development,” Newsom said, just before education leaders and tech executives signed agreements on AI.
The new deals are the latest developments in a frenzy that began in November 2022, when OpenAI publicly released the free artificial intelligence tool ChatGPT, forcing schools to adapt.
The Los Angeles Unified School District implemented an AI chatbot last year, only to cancel it three months later without disclosing why. San Diego Unified teachers started using AI software that suggested what grades to give students, CalMatters reported. Some of the district’s board members were unaware that the district had purchased the software.
Last month, the company that oversees Canvas, a learning management system popular in California schools and universities, said it would add “interactive conversations in a ChatGPT-like environment” into its software.
To combat potential AI-related cheating, many K-12 and college districts are using a new feature from the software company Turnitin to detect plagiarism, but a CalMatters investigation found that the software falsely accused students who had done their own work.
Mixed signals?
These deals are sending mixed signals, said Stephanie Goldman, the president of the Faculty Association of California Community Colleges. “Districts were already spending lots of money on AI detection software. What do you do when it’s built into the software they’re using?”
Don Daves-Rougeaux, a senior adviser for the community college system, acknowledged the potential contradiction but said it’s part of a broader effort to keep up with the rapid pace of changes in AI. He said the community college system will frequently reevaluate the use of Turnitin along with all other AI tools.
California’s community college system is responsible for the bulk of job training in the state, though it receives the least funding from the state per student.
“Oftentimes when we are having these conversations, we are looked at as a smaller system,” said Daves-Rougeaux. The state’s 116 community colleges collectively educate roughly 2.1 million students.
In the deals announced Thursday, the community college system will partner with Google, Microsoft, Adobe and IBM to roll out additional AI training for teachers. Daves-Rougeaux said the system has also signed deals that will allow students to use exclusive versions of Gemini, Google’s counterpart to ChatGPT, and NotebookLM, Google’s AI research tool. Daves-Rougeaux said these tools will save community colleges “hundreds of millions of dollars,” though he could not provide an exact figure.
“It’s a tough situation for faculty,” said Goldman. “AI is super important but it has come up time and time again: How do you use AI in the classroom while still ensuring that students, who are still developing critical thinking skills, aren’t just using it as a crutch?”
One concern is that faculty could lose control over how AI is used in their classrooms, she added.
The K-12 system and California State University system are forming their own tech deals. Amy Bentley-Smith, a spokesperson for the Cal State system, said it is working on its own AI programs with Google, Microsoft, Adobe and IBM, as well as Amazon Web Services, Intel, LinkedIn, OpenAI and others.
Angela Musallam, a spokesperson for the state government operations agency, said California high schools are part of the deal with Adobe, which aims to promote “AI literacy,” the idea that students and teachers should have basic skills to detect and use artificial intelligence.
Musallam said that, much like the community college system, which is governed by local districts, individual K-12 districts would need to approve any deal.
Will deals make a difference to students, teachers?
Experts say it’s too early to tell how effective AI training will actually be.
Justin Reich, an associate professor at MIT, said a similar frenzy took place 20 years ago when teachers tried to teach computer literacy. “We do not know what AI literacy is, how to use it, and how to teach with it. And we probably won’t for many years,” Reich said.
The state’s new deals with Google, Microsoft, Adobe and IBM allow these tech companies to recruit new users — a benefit for the companies — but the actual lessons aren’t time-tested, he said.
“Tech companies say: ‘These tools can save teachers time,’ but the track record is really bad,” said Reich. “You cannot ask schools to do more right now. They are maxed out.”
Erin Mote, the CEO of an education nonprofit called InnovateEDU, said she agrees that state and education leaders need to ask critical questions about the efficacy of the tools that tech companies offer but that schools still have an imperative to act.
“There are a lot of rungs on the career ladder that are disappearing,” she said. “The biggest mistake we could make as educators is to wait and pause.”
Last year, the California Community Colleges Chancellor’s Office signed an agreement with NVIDIA, a technology infrastructure company, to offer AI training similar to the kinds of lessons that Google, Microsoft, Adobe and IBM will deliver.
Melissa Villarin, a spokesperson for the chancellor’s office, said the state won’t share data about how the NVIDIA program is going because the cohort of teachers involved is still too small.
This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
AI Insights
Warren County Schools hosts AI Workshop

KENTUCKY — On this week’s program, we’re keeping you up to date on happenings within Kentucky’s government, including ongoing work this summer with legislative committees and special task forces in Frankfort.
During this “In Focus Kentucky” segment, reporter Aaron Dickens shared how leaders in Warren County Public Schools are helping educators bring their new computer science knowledge to the front of classrooms.
Also in this segment, we shared details about the U.S. Department of Energy selecting the Paducah Gaseous Diffusion Plant site in Paducah, Kentucky, as one of four sites for the development of artificial intelligence data centers and associated energy infrastructure. This initiative is reportedly part of the Trump administration’s plan to accelerate AI development, hoping to leverage federal land assets to establish high-performance computing facilities and reliable energy sources for the burgeoning AI industry.
You can watch the full “In Focus Kentucky” segment in the player above.