AI Insights

AI revolution: How artificial intelligence is reshaping education and jobs in America


Artificial intelligence has rapidly become a part of Americans’ lives. What was a fringe concept only a few years ago is now an everyday tool.

Its expansive reach affects what and how students study, as well as the job sector, prompting some to question how students and higher education at large should respond.

The best way an undergrad can prepare for an AI-altered workforce is to develop human qualities that machines cannot replicate, such as critical thinking, creativity, and social intelligence, some experts told The College Fix.

While the value of specific majors may diminish, careers in mental health, healthcare, and fields requiring high-level decision-making and management will remain viable, they said.

But make no mistake, the role of humans will increasingly center on collaboration with AI.

AI will be a job killer. It will also be a job creator.

While some jobs will be eliminated, others will be created.

“The amount of work that’s being created and the opportunities to both create and contribute are going to be expanded exponentially,” said corporate advisor Jack Myers, a University of Arizona lecturer in its School of Information Science.

Forecasts predicting the coming obsolescence of countless careers should be viewed “through the prism of not only what’s going to be eliminated, but what’s going to be created,” he told The College Fix in a telephone interview.

Jobs in coding, basic processing, routine bookkeeping, low-complexity customer service and translation will all soon be eliminated, Myers said.

But the opportunities ushered in by AI are going to be exceptional, said Myers, author of the book “The Tao of Leadership: Harmonizing Technological Innovation and Human Creativity in the AI Era.”

“If you look at almost any area of human creation,” Myers said, “it will be enhanced through the same type of collaborative partnership as if the creator was hiring an expert to assist and support in the process.”

Joey Kim, chair of the Department of Engineering and Computer Science at Master’s University, described AI as “simply a tool.”

“With the advent of new tools, careers do disappear,” Kim said in a telephone interview with The Fix. “There’s also careers that get modified…. It’s not simply binary and careers [either] remain unaffected or [become] obsolete. There is a spectrum.”

But like it or not, AI will be part of many jobs, said Michael Pavlin, an associate professor in the School of Business and Economics at Wilfrid Laurier University, who has been involved in AI research since the early 2000s and serves as the chair of his school’s management analytics program.

“It’s hard to imagine a white collar job where you’re not going to be interacting with AI at some level,” he said in a telephone interview.

However, despite recent AI advances, he said he remains “more on the skeptical side,” later adding he believes “we’re being a little bit oversold.”

Reva Freedman, an associate professor of computer science at Northern Illinois University with expertise in computational linguistics, said AI “is going to have a huge impact on the job market, but not different in kind to the effect that computerization had with the invention of the PC in 1983.”

“In offices, [b]efore the invention of the PC, lots of people had jobs as secretaries and clerks. Secretaries typed memos that other people wrote. Those jobs have been largely replaced by people using word processors themselves,” she said via email. “Clerks did a variety of jobs that have been automated by use of Excel and other software.”

The jobs that will survive are those that demand high-level thinking, management skills, or hands-on work, such as medicine, Freedman said.

Gary Clemenceau, a “deep geek” turned chaplain and author, who claims 30 years of experience in tech, agrees. He told The Fix that “mental health and healthcare jobs, and anything that requires dealing with humans and higher-order thinking, will still be viable.”

AI and the dumbing-down of higher education

But will there be any higher-order thinking left?

“For teachers, it’s absolutely impossible to give a writing [assignment] today that students can’t cheat on,” Freedman said. “Even for an in-class assignment, you can now get glasses that allow you to look up stuff on the web during an exam.”

Kim said the misuse of AI in the classroom devalues a degree’s representation of how well one has been trained in a program and successfully met its requirements.

Freedman also expressed concerns over the misuse of AI in other segments of society, citing allegations it was used to write a recent MAHA report said to contain made-up citations.

Pavlin told The Fix he is more concerned about less obvious errors that require a greater level of expertise to detect. For example, when he queries AI about esoteric subjects related to his research, he finds it makes deeper, harder-to-spot mistakes than when he asks about a broad topic such as general relativity.

In that sense, AI is not bulletproof. Kim echoed similar sentiments: “When big important decisions must be made where it’s either life-or-death or costing millions and millions of dollars, you’re going to need something more than ChatGPT.”

Yet, as some of the scholars interviewed by The Fix noted, the increasing overuse of AI by students may lead to the attrition of capacities beyond their proficiency at using ChatGPT.

“I think it’s impacting their learning,” Pavlin said. “Not all students, but [there is] definitely a subset of students where I’m concerned about their critical thinking skills.”

AI and the college student

When asked how students could best prepare for the careers that await them in an AI-altered job market, most of the scholars interviewed recommended they develop their more uniquely human attributes.

“The machines are already smarter than the human brain in many instances,” Myers told The Fix. “[They have] been for a while and that’s just going to continue to become increasingly the norm.”

“So where does the human come in?” Myers asked rhetorically, answering that humans enter through the “collaborative process” and “the unique human qualities of the human brain.” These, he said, are developed in the social sciences and humanities majors.

Clemenceau said students must develop their human qualities.

“Students need to put down their phones and THINK,” Clemenceau wrote in an email to The College Fix. “AI is not very good at being creative.”

Whether majoring in computer science and learning to code is still a wise choice was a point of some disagreement.

“Coding will be irrelevant as a tool or resource to bring to the table,” Myers said. “The AI is doing its own coding going forward. It doesn’t need the human coders anymore.”

In contrast, Freedman noted that people “have been saying ever since I was a beginning programmer (in the 70’s) that programs that can write programs were coming.”

“Is it more true now? Probably. Does that mean the [number] of programmers needed will go down? T[h]at’s a much harder question to answer.”

“I think there will always be room for people who care about the quality of their work, understand the business needs, and can communicate with non-programmers,” she said.

As for choosing a major, though, she added: “I don’t think students’ majors have a lot to do with their success in the work world; their personal qualities are a lot more important. So I don’t think we can tell students what majors will be more useful.”

Kim expressed similar sentiments, saying “I personally believe that with any major, if you’re going to be using your tools to your advantage, and if you’re really going to be motivated enough to not just follow the crowd, you will have a job.”

Clemenceau said the future may be bleaker than his more optimistic peers suggest.

“I see two roads,” he said via email. “A small percentage of people will reject AI as inhuman and soulless and empty, and take the ‘human road’ as much as possible, living more spiritual lives.”

However, he added, a “larger percentage of people will fully embrace AI and (sadly) sacrifice part of their humanity, becoming less creative, less able to think critically – and more easily manipulated.”


IMAGE CAPTION AND CREDIT: A graphic showing a laptop user employing AI / Supatman, CanvaPro


AI rollout in NHS hospitals faces major challenges


Implementing artificial intelligence (AI) into NHS hospitals is far harder than initially anticipated, with complications around governance, contracts, data collection, harmonisation with old IT systems, finding the right AI tools and staff training, finds a major new UK study led by UCL researchers. 

Authors of the study, published in The Lancet eClinicalMedicine, say the findings should provide timely and useful learning for the UK Government, whose recent 10-year NHS plan identifies digital transformation, including AI, as a key platform to improving the service and patient experience. 

In 2023, NHS England launched a programme to introduce AI to help diagnose chest conditions, including lung cancer, across 66 NHS hospital trusts in England, backed by £21 million in funding. The trusts are grouped into 12 imaging diagnostic networks: these hospital networks mean more patients have access to specialist opinions. Key functions of these AI tools included prioritising critical cases for specialist review and supporting specialists’ decisions by highlighting abnormalities on scans.
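That prioritisation function can be pictured as a priority queue keyed by a model-assigned abnormality score, so the most worrying scans reach a specialist first. The sketch below is purely illustrative: the class, field names, and scores are assumptions for the sake of the example, not details of the actual NHS tools.

```python
# Illustrative sketch only: a review queue that surfaces the scans a model
# scored as most abnormal. Not an implementation of the real NHS AI tools.
import heapq

class ReviewQueue:
    """Max-priority queue of scans keyed by abnormality score (0.0 to 1.0)."""

    def __init__(self):
        self._heap = []

    def add(self, scan_id, score):
        # heapq is a min-heap, so negate the score to pop the largest first.
        heapq.heappush(self._heap, (-score, scan_id))

    def next_case(self):
        neg_score, scan_id = heapq.heappop(self._heap)
        return scan_id, -neg_score

queue = ReviewQueue()
queue.add("scan-104", 0.31)
queue.add("scan-207", 0.92)  # abnormality flagged by the model
queue.add("scan-311", 0.55)
print(queue.next_case())  # ('scan-207', 0.92)
```

The same idea generalises to the "supporting specialists' decisions" function: the score that orders the queue can also be shown alongside the scan as a highlighted region.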

Funded by the National Institute for Health and Care Research (NIHR), this research was conducted by a team from UCL, the Nuffield Trust, and the University of Cambridge, analysing how procurement and early deployment of the AI tools went. It is one of the first studies to analyse real-world implementation of AI in healthcare.

Evidence from previous studies¹, mostly laboratory-based, suggested that AI might benefit diagnostic services by supporting decisions, improving detection accuracy, reducing errors and easing workforce burdens.

In this UCL-led study, the researchers reviewed how the new diagnostic tools were procured and set up through interviews with hospital staff and AI suppliers, identifying any pitfalls but also any factors that helped smooth the process.

They found that setting up the AI tools took longer than the programme’s leadership anticipated. Contracting took between four and 10 months longer than planned, and by June 2025, 18 months after contracting was meant to be completed, a third (23 out of 66) of the hospital trusts were not yet using the tools in clinical practice.

Key challenges included engaging clinical staff who already had high workloads, embedding the new technology in ageing and varied NHS IT systems across dozens of hospitals, and a general lack of understanding, and some scepticism, among staff about using AI in healthcare.

The study also identified factors that helped embed AI: national programme leadership, local imaging networks sharing resources and expertise, high levels of commitment from hospital staff leading implementation, and dedicated project management.

The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope” and are recommending that NHS staff are trained in how AI can be used effectively and safely and that dedicated project management is used to implement schemes like this in the future.

First author Dr Angus Ramsay (UCL Department of Behavioural Science and Health) said: “In July ministers unveiled the Government’s 10-year plan for the NHS, of which a digital transformation is a key platform.

“Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.

“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.

“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals. Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.

“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.

“AI tools can offer valuable support for diagnostic services, but they may not address current healthcare service pressures as simply as policymakers may hope.”

The researchers conducted their evaluation between March and September last year, studying 10 of the participating networks and focusing in depth on six NHS trusts. They interviewed network teams, trust staff and AI suppliers, observed planning, governance and training and analysed relevant documents.

Some of the imaging networks and many of the hospital trusts within them were new to procuring and working with AI.

The problems involved in setting up the new tools varied – for example, in some cases those procuring the tools were overwhelmed by a huge amount of very technical information, increasing the likelihood of key details being missed. Consideration should be given to creating a national approved shortlist of potential suppliers to facilitate procurement at local level, the researchers said.

Another problem was initial lack of enthusiasm among some NHS staff for the new technology in this early phase, with some more senior clinical staff raising concerns about the potential impact of AI making decisions without clinical input and on where accountability lay in the event a condition was missed. The researchers found the training offered to staff did not address these issues sufficiently across the wider workforce – hence their call for early and ongoing training on future projects.

In contrast, however, the study team found the process of procurement was supported by advice from the national team and imaging networks learning from each other. The researchers also observed high levels of commitment and collaboration between local hospital teams (including clinicians and IT) working with AI supplier teams to progress implementation within hospitals.

In this project, each hospital selected AI tools for different reasons, such as focusing on X-ray or CT scanning, and purposes, such as to prioritise urgent cases for review or to identify potential symptoms.


“The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems, and introducing any diagnostic tools that suit multiple hospitals is highly complex. These findings indicate AI might not be the silver bullet some have hoped for, but the lessons from this study will help the NHS implement AI tools more effectively.”


Naomi Fulop, senior author and professor in the UCL Department of Behavioural Science and Health

Limitations

While the study has added to the very limited body of evidence on the implementation and use of AI in real-world settings, it focused on procurement and early deployment. The researchers are now studying the use of AI tools following early deployment when they have had a chance to become more embedded. Further, the researchers did not interview patients and carers and are therefore now conducting such interviews to address important gaps in knowledge about patient experiences and perspectives, as well as considerations of equity.


Journal reference:

Ramsay, A. I. G., et al. (2025). Procurement and early deployment of artificial intelligence tools for chest diagnostics in NHS services in England: a rapid, mixed method evaluation. eClinicalMedicine. doi.org/10.1016/j.eclinm.2025.103481




AI takes passenger seat in Career Center with Microsoft Copilot


By Arden Berry | Staff Writer

To increase efficiency and help students succeed, the Career Center has created artificial intelligence programs through Microsoft Copilot.

Career Center Director Amy Rylander said the program began over the summer with teams creating user guides that described how students could ethically use AI while applying for jobs.

“We started learning about prompting AI to do things, and as we began writing the guides and began putting updates in them and editing them to be in a certain way, our data person took our guides and fed them into Copilot, and we created agents,” Rylander said. “So instead of just a user’s guide, we now have agents to help students right now with three areas.”

Rylander said these three areas were resume-building, interviewing and career discovery. She also said the Career Center sent out an email last week linking the Copilot Agents for these three areas.

“Agents use AI to perform tasks by reasoning, planning and learning — using provided information to execute actions and achieve predetermined goals for the user,” the email read.
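The reason-plan-act loop that email describes can be illustrated with a toy sketch. Everything below is hypothetical: the class, the skill catalog, and the suggestions are invented for illustration and have no connection to the actual Copilot Agents.

```python
# Toy sketch of an "agent" loop: gather provided information (observe),
# update state (reason), and act once a predetermined goal is met.
# All names and data here are hypothetical, not the Copilot API.

def suggest_paths(skills):
    """Map reported skills to invented example career suggestions."""
    catalog = {
        "writing": ["technical writer", "communications specialist"],
        "statistics": ["data analyst", "market researcher"],
    }
    suggestions = []
    for skill in skills:
        suggestions.extend(catalog.get(skill, []))
    return suggestions

class CareerAgent:
    """Asks a fixed set of questions, then suggests paths at the goal state."""

    def __init__(self, questions):
        self.questions = list(questions)
        self.skills = []

    def step(self, answer):
        self.skills.append(answer)
        self.questions.pop(0)
        # Goal reached once every question has an answer.
        if not self.questions:
            return suggest_paths(self.skills)
        return None

agent = CareerAgent(["What do you enjoy?", "What are you good at?"])
assert agent.step("writing") is None  # still gathering information
result = agent.step("statistics")
print(result)  # ['technical writer', 'communications specialist', 'data analyst', 'market researcher']
```

Real agents replace the fixed catalog with a language model, but the loop structure – collect input, track state, act toward a predefined goal – is the same.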

To use these Copilot Agents, Rylander said students should log in to Microsoft Office with their Baylor email, then use the provided Copilot Agent links and follow the provided prompts. For example, the Career Discovery Agent would provide a prompt to give the agent, then would ask a set of questions and suggest potential career paths.

“It’ll help you take the skills that you’re learning in your major and the skills that you’ve learned along the way and tell you some things that might work for you, and then that’ll help with the search on what you might want to look for,” Rylander said.

Career Center Assistant Vice Provost Michael Estepp said creating AI systems was a “proactive decision.”

“We’re always saying, ‘What are the things that students are looking for and need, and what can our staff do to make that happen?’” Estepp said. “Do we go AI or not? We definitely needed to, just so we were ahead of the game.”

Estepp said the AI systems would not replace the Career Center but would increase its efficiency, allowing the Career Center more time to help students in a more specialized way.

“Students want to come in, and they don’t want to meet with us 27 times,” Estepp said. “We can actually even dive deeper into the relationships because, hopefully, we can help more students, because our goal is to help 100% of students, so I think that’s one of the biggest pieces.”

However, Rylander said students should remember to use AI only as a tool, not as a replacement for their own experience.

“Use it ethically. AI does not take the place of your voice,” Rylander said. “It might spit out a bullet that says something, and I’ll say, ‘What did you mean by that?’ and get the whole story, because we want to make sure you don’t lose your voice and that you are not presenting yourself as something that you’re not.”

For the future, Rylander said the Career Center is currently working on Graduate School Planning and Career Communications Copilots. Estepp also said Baylor has a contract with LinkedIn that will help students learn to use AI for their careers.

“AI has impacted the job market so significantly that students have to have that. It’s a mandatory skill now,” Estepp said. “We’re going to start messaging out to students different certifications they can take within LinkedIn, that they can complete videos and short quizzes, and then actually be able to get certifications in different AI and large language model aspects and then put that on their resume.”




When Cybercriminals Weaponize Artificial Intelligence at Scale


Anthropic’s August threat intelligence report reads like a cybersecurity novel, except it’s terrifyingly not fiction. The report describes how cybercriminals used Claude AI to orchestrate attacks on 17 organizations, with ransom demands exceeding $500,000. This may be the most sophisticated AI-driven attack campaign to date.

But beyond the alarming headlines lies a more fundamental shift – the emergence of “agentic cybercrime,” where AI doesn’t just assist attackers; it becomes their co-pilot, strategic advisor, and operational commander all at once.

The End of Traditional Cybercrime Economics

The Anthropic report highlights a cruel reality that IT leaders have long feared. The economics of cybercrime have undergone significant change. What previously required teams of specialized attackers working for weeks can now be accomplished by a single individual in a matter of hours with AI assistance.

Consider, for example, the “vibe hacking” operation detailed in the report: one cybercriminal used Claude Code to automate reconnaissance across thousands of systems, create custom malware with anti-detection capabilities, perform real-time network penetration, and analyze stolen financial data to calculate psychologically optimized ransom amounts.

More than just following instructions, the AI made tactical decisions about which data to exfiltrate and crafted victim-specific extortion strategies that maximized psychological pressure. 

Sophisticated Attack Democratization

One of the most unnerving revelations in Anthropic’s report involves North Korean IT workers who have infiltrated Fortune 500 companies using AI to simulate technical competence they don’t have. While these attackers are unable to write basic code or communicate professionally in English, they’re successfully maintaining full-time engineering positions at major corporations thanks to AI handling everything from technical interviews to daily work deliverables. 

The report also discloses that 61 percent of the workers’ AI usage focused on frontend development, 26 percent on programming tasks, and 10 percent on interview preparation. They are essentially human proxies for AI systems, channeling hundreds of millions of dollars to North Korea’s weapons programs while their employers remain unaware. 

Similarly, the report reveals how criminals with little technical skill are developing and selling sophisticated ransomware-as-a-service packages for $400 to $1,200 on dark web forums. Features that previously required years of specialized knowledge, such as ChaCha20 encryption, anti-EDR techniques, and Windows internals exploitation, are now generated on demand with the aid of AI. 

Defense Speed Versus Attack Velocity

Traditional cybersecurity operates on human timetables, with threat detection, analysis, and response cycles measured in hours or days. AI-powered attacks, on the other hand, operate at machine speed, with reconnaissance, exploitation, and data exfiltration occurring in minutes. 

The cybercriminal highlighted in Anthropic’s report automated network scanning across thousands of endpoints, identified vulnerabilities with “high success rates,” and moved through compromised networks faster than human defenders could respond. When initial attack vectors failed, the AI immediately generated alternative attacks, creating a dynamic adversary that adapted in real time.

This speed delta creates an impossible situation for traditional security operations centers (SOCs). Human analysts cannot keep up with the velocity and persistence of AI-augmented attackers operating 24/7 across multiple targets simultaneously. 

Asymmetry of Intelligence

What makes these AI-powered attacks particularly dangerous isn’t only their speed – it’s their intelligence. The criminals highlighted in the report utilized AI to analyze stolen data and develop “profit plans” by incorporating multiple monetization strategies. Claude evaluated financial records to gauge optimal ransom amounts, analyzed organizational structures to locate key decision-makers, and crafted sector-specific threats based on regulatory vulnerabilities. 

This level of strategic thinking, combined with operational execution, has created a new category of threats. These aren’t amateurs following predefined playbooks; they’re adaptive adversaries that learn and evolve throughout each campaign.

The Acceleration of the Arms Race 

The current challenge can be summed up this way: “All of these operations were previously possible but would have required dozens of sophisticated people weeks to carry out the attack. Now all you need is to spend $1 and generate 1 million tokens.”

The asymmetry is significant. Human defenders must deal with procurement cycles, compliance requirements, and organizational approval before deploying new security technologies. Cybercriminals simply create new accounts when existing ones are blocked – a process that takes about “13 seconds.” 

But this predicament also presents an opportunity. The same AI functions being weaponized can be harnessed for defenses, and in many cases defensive AI has natural advantages. 

Attackers can move fast, but defenders have access to something criminals don’t – historical data, organizational context, and the ability to establish baseline behaviors across entire IT environments. AI defense systems can monitor thousands of endpoints simultaneously, correlate subtle anomalies across network traffic, and respond to threats faster than human attackers can ever hope to. 
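The baseline idea can be sketched very simply: compare each endpoint's current activity against its own history and flag large deviations. The data shapes and the three-sigma threshold below are assumptions for illustration, not any particular platform's design.

```python
# Minimal sketch of baseline-deviation flagging: an endpoint is anomalous
# when its current event count sits far outside its own historical range.
# Threshold and field shapes are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(history, current, z_threshold=3.0):
    """history: {endpoint: [past hourly event counts]}; current: {endpoint: count}.
    Returns endpoints whose current count exceeds baseline by z_threshold sigmas."""
    flagged = []
    for endpoint, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            continue  # flat baseline; avoid dividing by zero
        z = (current.get(endpoint, 0) - mu) / sigma
        if z > z_threshold:
            flagged.append(endpoint)
    return flagged

history = {
    "web-01": [100, 110, 95, 105, 98],
    "db-01": [20, 22, 19, 21, 20],
}
current = {"web-01": 104, "db-01": 400}  # db-01 suddenly spikes
print(flag_anomalies(history, current))  # ['db-01']
```

Production systems correlate many such signals across traffic, logins, and process activity, but the asymmetry holds: the defender owns the history an attacker cannot see.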

Modern AI security platforms, such as the AI SOC Agent that works like an AI SOC Analyst, have proven this principle in practice. By automating alert triage, investigation, and response processes, these systems process security events at machine speed while maintaining the context and judgment that pure automation lacks. 

Defensive AI doesn’t need to be perfect; it just needs to be faster and more persistent than human attackers. When combined with human expertise for strategic oversight, this creates a formidable defensive posture for organizations. 

Building AI-Native Security Operations

The Anthropic report underscores how incremental improvements to traditional security tools won’t matter against AI-augmented adversaries. Organizations need AI-native security operations that match the scale, speed, and intelligence of modern AI attacks. 

This means leveraging AI agents that autonomously investigate suspicious activities, correlate threat intelligence across multiple sources, and respond to attacks faster than humans can. It requires SOCs that use AI for real-time threat hunting, automated incident response, and continuous vulnerability assessment. 

This new approach demands a shift from reactive to predictive security postures. AI defense systems must anticipate attack vectors, identify potential compromises before they fully manifest, and adapt defensive strategies based on emerging threat patterns. 

The Anthropic report clearly highlights that attackers don’t wait for a perfect tool. They train themselves on existing capabilities and can cause damage every day, even if the AI revolution were to stop. Organizations cannot afford to be more cautious than their adversaries. 

The AI cybersecurity arms race is already here. The question isn’t whether organizations will face AI-augmented attacks, but whether they’ll be prepared when those attacks happen.

Success demands embracing AI as a core component of security operations, not an experimental add-on. It means leveraging AI agents that operate autonomously while maintaining human oversight for strategic decisions. Most importantly, it requires matching the speed of adoption that attackers have already achieved. 

The cybercriminals highlighted in the Anthropic report represent the new threat landscape. Their success demonstrates the magnitude of the challenge and the urgency of the needed response. In this new reality, the organizations that survive and thrive will be those that adopt AI-native security operations with the same speed and determination that their adversaries have already demonstrated. 

The race is on. The question is whether defenders will move fast enough to win it.  


