Why soft skills matter more than ever in the age of AI

From robotics on factory assembly lines to ChatGPT, artificial intelligence (AI) is as prevalent in major industries as it is on our smartphones. From some perspectives, that expansion is revolutionary; recent studies have found that AI has the potential to provide more accurate medical diagnoses and help make sense of complex and unwieldy data.  

But AI is lacking in one critical workplace quality: soft skills. 

“Soft skills are highly transferable skills that power most of our day-to-day interactions — things like collaboration, communication, creativity and the ability to learn,” says Madeline Mann, a human resources and career strategist.

This aligns with the U.S. Department of Labor’s findings that emotional-intelligence skills at work, such as teamwork, communication, critical thinking and professionalism, are now essential and are precisely the areas where artificial intelligence falls short.

Here’s why soft skills matter more than ever for the future of work and how they may be the real differentiator in your next job search or promotion.

Why soft skills matter in today’s job market

If soft skills involve qualities like empathy and communication, hard skills are measurable abilities, such as data analysis, coding or technical writing, typically acquired through training or education. Hard skills are essential, of course, but they serve as a baseline.

Mann uses a doctor as an example. “The doctors who are most appreciated and have the lowest rate of litigation have great bedside manner. That’s soft skills,” she explains. “Most people don’t know where their doctor went to school, but they do remember how that doctor made them feel.”

According to 2023 research from the science journal Heliyon, even in tech fields like engineering or logistics, more than 40% of all skills required by employers are skills AI can’t replace, including critical and analytical thinking, problem-solving and flexibility.

Mann says it’s the same for any career. “Soft skills shape how people experience you, and that can be the edge that sets you apart,” she notes. 

How to showcase soft skills to hiring managers

So, how do you demonstrate these skills when it counts?

In an interview, you won’t necessarily mention the soft skills you possess, but you can demonstrate them to a hiring manager. The key is to prepare examples from your past roles that show your character.

“Instead of just saying you launched a campaign that increased app downloads, explain your thought process: How did you come up with the idea? How did you get others on board? Did you have to collaborate across departments, navigate cultural dynamics or adjust on the fly when budget or timing shifted?” Mann says.

Those small details that demonstrate your ability to communicate and be flexible will make you stand out.

“People land interviews because of their hard skills, but they land jobs and promotions because of their soft skills,” she says.

AI can’t replace relationship-building

Among all soft skills, the ability to build genuine relationships stands out as especially irreplaceable in an AI-driven world. Even when teams are fully remote, the workplace remains a social community, not just a network of tasks.

Employees who can forge genuine connections, collaborate across departments and leverage emotional intelligence are becoming indispensable.

“We’re entering a phase where personalization is rare, and authenticity is craved,” Mann explains.

Her observations are borne out by research. A study in the Journal of Vocational Behavior highlighted that professionals who engage in networking (especially via platforms like LinkedIn) see better promotions, higher compensation and greater career satisfaction.

In other words, networking isn’t just a buzzword; it’s a relationship-building exercise that machines can’t mimic, and it can directly impact your upward mobility and help set you up for a promotion.

How soft skills can fast-track your promotion

Promotions rely heavily on AI-proof skills, Mann notes. It’s not just about doing great work; it’s about making sure the right people see it. That’s why building strong relationships with your manager, teammates and colleagues across departments is essential.

According to LinkedIn’s 2025 Workplace Trends Report, managers say soft skills are as important as, if not more important than, hard skills. Communication has consistently ranked among the top skills employers seek and was the most in-demand skill in 2024.

To raise your profile, Mann recommends staying connected with colleagues across the organization and paying attention to their needs. This awareness allows you to step in on high-impact projects, often before you’re even asked.

“The goal is to keep raising your value through relationships, visibility and contribution. The more people who see you as valuable and easy to work with, the more likely they are to advocate for what you want in your role,” says Mann.

The human edge 

As AI continues to reshape what jobs look like, the human edge will come from what machines still can’t do: build trust, read the room and rally a team. “So yes, master your craft,” Mann says. “But also cultivate likability, strong communication and collaboration to have a successful career.”

In other words, the most future-proof skill might just be your humanity.

What is USA TODAY Top Workplaces 2025?

If you’re looking for a job where soft skills are rewarded, we can help. Each year, USA TODAY Top Workplaces, a collaboration between Energage and USA TODAY, ranks organizations across the U.S. that excel at creating a positive work environment for their employees. Employee feedback determines the winners.

In 2025, over 1,500 companies earned recognition as top workplaces. Check out our overall U.S. rankings. You can also gain insights into top-ranked regional employers by checking out the links below.



Google’s open MedGemma AI models could transform healthcare

Instead of keeping their new MedGemma AI models locked behind expensive APIs, Google will hand these powerful tools to healthcare developers.

The new arrivals are called MedGemma 27B Multimodal and MedSigLIP, and they’re part of Google’s growing collection of open-source healthcare AI models. What makes these special isn’t just their technical prowess, but the fact that hospitals, researchers, and developers can download them, modify them, and run them however they see fit.

Google’s AI meets real healthcare

The flagship MedGemma 27B model doesn’t just read medical text like previous versions did; it can actually “look” at medical images and understand what it’s seeing. Whether it’s chest X-rays, pathology slides, or patient records potentially spanning months or years, it can process all of this information together, much like a doctor would.
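
For developers who want to try that multimodal behaviour, the sketch below shows roughly what a request looks like through Hugging Face’s transformers library. It is an illustration rather than an official example: the model ID, the local image path and the prompt are assumptions to check against the published model card (the smaller 4B variant is used here because it runs on modest hardware; the 27B multimodal model is called the same way).

```python
# A minimal sketch, not an official example. Assumptions: the instruction-tuned
# 4B weights are published on Hugging Face under "google/medgemma-4b-it", a recent
# transformers release provides the "image-text-to-text" pipeline, and
# "chest_xray.png" is a placeholder for a local image.
import torch
from PIL import Image
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-4b-it",   # assumed model ID; confirm on the model card
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

xray = Image.open("chest_xray.png")  # placeholder path to a local study

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": xray},
            {"type": "text", "text": "Describe the key findings in this chest X-ray."},
        ],
    }
]

result = pipe(text=messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])  # the model's draft description
```

Any output from a sketch like this is, of course, a draft for clinician review rather than a finished report.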

The performance figures are quite impressive. When tested on MedQA, a standard medical knowledge benchmark, the 27B text model scored 87.7%. That puts it within spitting distance of much larger, more expensive models whilst costing about a tenth as much to run. For cash-strapped healthcare systems, that’s potentially transformative.

The smaller sibling, MedGemma 4B, might be more modest in size but it’s no slouch. Despite being tiny by modern AI standards, it scored 64.4% on the same tests, making it one of the best performers in its weight class. More importantly, when US board-certified radiologists reviewed chest X-ray reports it had written, they deemed 81% accurate enough to guide actual patient care.

MedSigLIP: A featherweight powerhouse

Alongside these generative AI models, Google has released MedSigLIP. At just 400 million parameters, it’s practically featherweight compared to today’s AI giants, but it’s been specifically trained to understand medical images in ways that general-purpose models cannot.

This little powerhouse has been fed a diet of chest X-rays, tissue samples, skin condition photos, and eye scans. The result? It can spot patterns and features that matter in medical contexts whilst still handling everyday images perfectly well.

MedSigLIP creates a bridge between images and text. Show it a chest X-ray, and ask it to find similar cases in a database, and it’ll understand not just visual similarities but medical significance too.
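
To make that concrete, here is a minimal retrieval sketch in Python. It assumes MedSigLIP is distributed as a SigLIP-style checkpoint usable through Hugging Face transformers; the model ID and image paths below are placeholders rather than confirmed details. The idea is simply to embed a set of images and a text query into the same space and rank the images by cosine similarity.

```python
# A minimal retrieval sketch, assuming MedSigLIP ships as a SigLIP-style
# checkpoint on Hugging Face (the ID "google/medsiglip-448" is an assumption;
# confirm against the official release). It embeds images and a text query
# into a shared space and ranks the images by cosine similarity.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_id = "google/medsiglip-448"          # assumed model ID
model = AutoModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder paths to a small local set of prior studies.
images = [Image.open(p) for p in ["study_001.png", "study_002.png"]]
query = "frontal chest X-ray with right lower lobe consolidation"

with torch.no_grad():
    image_inputs = processor(images=images, return_tensors="pt")
    text_inputs = processor(text=[query], padding="max_length", return_tensors="pt")
    image_emb = model.get_image_features(**image_inputs)
    text_emb = model.get_text_features(**text_inputs)

# Normalise, then score each image against the query.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
scores = (image_emb @ text_emb.T).squeeze(-1)
ranked = scores.argsort(descending=True)
print([(int(i), float(scores[i])) for i in ranked])  # most similar first
```

In a real deployment the image embeddings would be precomputed and stored in a vector index, so only the query needs embedding at search time.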

Healthcare professionals are putting Google’s AI models to work

The proof of any AI tool lies in whether real professionals actually want to use it. Early reports suggest doctors and healthcare companies are excited about what these models can do.

DeepHealth in Massachusetts has been testing MedSigLIP for chest X-ray analysis. They’re finding it helps spot potential problems that might otherwise be missed, acting as a safety net for overworked radiologists. Meanwhile, at Chang Gung Memorial Hospital in Taiwan, researchers have discovered that MedGemma works with traditional Chinese medical texts and answers staff questions with high accuracy.

Tap Health in India has highlighted something crucial about MedGemma’s reliability. Unlike general-purpose AI that might hallucinate medical facts, MedGemma seems to understand when clinical context matters. It’s the difference between a chatbot that sounds medical and one that actually thinks medically.

Why open-sourcing the AI models is critical in healthcare

Beyond generosity, Google’s decision to make these models open is also strategic. Healthcare has unique requirements that standard AI services can’t always meet. Hospitals need to know their patient data isn’t leaving their premises. Research institutions need models that won’t suddenly change behaviour without warning. Developers need the freedom to fine-tune for very specific medical tasks.

By open-sourcing the AI models, Google has addressed these concerns for healthcare deployments. A hospital can run MedGemma on its own servers, modify it for its specific needs, and trust that it’ll behave consistently over time. For medical applications where reproducibility is crucial, this stability is invaluable.

However, Google has been careful to emphasise that these models aren’t ready to replace doctors. They’re tools that require human oversight, clinical correlation, and proper validation before any real-world deployment. The outputs need checking, the recommendations need verifying, and the decisions still rest with qualified medical professionals.

This cautious approach makes sense. Even with impressive benchmark scores, medical AI can still make mistakes, particularly when dealing with unusual cases or edge scenarios. The models excel at processing information and spotting patterns, but they can’t replace the judgment, experience, and ethical responsibility that human doctors bring.

What’s exciting about this release isn’t just the immediate capabilities, but what it enables. Smaller hospitals that couldn’t afford expensive AI services can now access cutting-edge technology. Researchers in developing countries can build specialised tools for local health challenges. Medical schools can teach students using AI that actually understands medicine.

The models are designed to run on single graphics cards, with the smaller versions even adaptable for mobile devices. This accessibility opens doors for point-of-care AI applications in places where high-end computing infrastructure simply doesn’t exist.
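
As a rough illustration of what running on a single graphics card can look like in practice, the sketch below loads the larger text model with 4-bit quantization, a common way to fit a model of this size onto one high-memory GPU. The model ID and the use of the bitsandbytes library are assumptions rather than documented guidance, and quantization trades some output quality for memory.

```python
# A rough sizing sketch, not an official recipe. Assumptions: the 27B text model
# is published under "google/medgemma-27b-text-it" and the bitsandbytes library
# is installed. 4-bit quantization is one common way to fit a model this size
# on a single high-memory GPU, at some cost in output quality.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/medgemma-27b-text-it"   # assumed model ID
quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",   # place the quantized layers on the available GPU
)

messages = [{"role": "user", "content": "List three differential diagnoses for acute chest pain."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```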

As healthcare continues grappling with staff shortages, increasing patient loads, and the need for more efficient workflows, AI tools like Google’s MedGemma could provide some much-needed relief. Not by replacing human expertise, but by amplifying it and making it more accessible where it’s needed most.

(Photo by Owen Beard)

See also: Tencent improves testing creative AI models with new benchmark




Pope: AI development must build bridges of dialogue and promote fraternity

In a message signed by the Cardinal Secretary of State, Pietro Parolin, and sent to the United Nations’ AI for Good Summit in Geneva, Pope Leo XIV encourages nations to create frameworks and regulations that serve the common good.

By Isabella H. de Carvalho

Pope Leo XIV encouraged nations to establish frameworks and regulations on AI so that it can be developed and used according to the common good, in a message sent on July 10 to the participants of the AI for Good Summit, taking place in Geneva, Switzerland, from July 8 to 11.  

“I would like to take this opportunity to encourage you to seek ethical clarity and to establish a coordinated local and global governance of AI, based on the shared recognition of the inherent dignity and fundamental freedoms of the human person”, the message, signed by the Secretary of State, Cardinal Pietro Parolin, said.

The summit is organized by the United Nations’ International Telecommunication Union (ITU) and is co-hosted by the Swiss government. It brings together governments, tech leaders, academics and others who work with, or take an interest in, AI.

In this “era of profound innovation” where many are reflecting on “what it means to be human”, the world “is at crossroads, facing the immense potential generated by the digital revolution driven by Artificial Intelligence”, the Pope highlighted in his message. 

AI requires ethical management and regulatory frameworks 

“As AI becomes capable of adapting autonomously to many situations by making purely technical algorithmic choices, it is crucial to consider its anthropological and ethical implications, the values at stake and the duties and regulatory frameworks required to uphold those values”, the Pope underlined in his message. 

He emphasized that the “responsibility for the ethical use of AI systems begins with those who develop, manage and oversee them” but users also need to share this mission. AI “requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency,” the Pope insisted. 

Building peaceful societies 

Citing St. Augustine’s concept of the “tranquility of order”, Pope Leo highlighted that this should be the common goal, and thus AI should foster a “more human order of social relations” and “peaceful and just societies in the service of integral human development and the good of the human family”. 

While AI can simulate human reasoning and perform tasks quickly and efficiently or transform areas such as “education, work, art, healthcare, governance, the military, and communication”, “it cannot replicate moral discernment or the ability to form genuine relationships”, Pope Leo warned. 

For him the development of this technology “must go hand in hand with respect for human and social values, the capacity to judge with a clear conscience, and growth in human responsibility”. It requires “discernment to ensure that AI is developed and utilized for the common good, building bridges of dialogue and fostering fraternity”, the Pope urged. AI needs to serve “the interests of humanity as a whole”.



AI slows down some experienced software developers, study finds

By Anna Tong

SAN FRANCISCO (Reuters) - Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found.

AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.

The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.”

The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.

AI is also expected to replace entry-level coding positions. Dario Amodei, CEO of Anthropic, recently told Axios that AI could wipe out half of all entry-level white collar jobs in the next one to five years.

Prior literature on productivity improvements has found significant gains: one study found that using AI sped up coders by 56%, while another found developers were able to complete 26% more tasks in a given time.

But the new METR study shows that those gains don’t apply to all software development scenarios. In particular, this study showed that experienced developers intimately familiar with the quirks and requirements of large, established open source codebases experienced a slowdown.

Other studies often rely on software development benchmarks for AI, which sometimes misrepresent real-world tasks, the study’s authors said.

The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested.

“When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what’s needed,” Becker said.

The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren’t familiar with.

Still, the majority of the study’s participants, as well as the study’s authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page.

“Developers have goals other than completing the task as soon as possible,” Becker said. “So they’re going with this less effortful route.”

(Reporting by Anna Tong in San Francisco; Editing by Sonali Paul)


