AI Hiring Tools Leave Tech Professionals Frustrated, Distrustful

AI-driven hiring tools are reshaping how technology professionals search for jobs, but not in ways that inspire confidence.

A Dice survey of more than 200 tech workers found widespread frustration with automated screening, with many saying the process favors keyword gaming over real qualifications and leaves them feeling dehumanized.

Most respondents said they believe AI systems regularly miss qualified candidates who don’t tailor resumes with the “right” keywords.

That has pushed many professionals to alter or even strip down their resumes to improve compatibility with automated tools, often removing details about personality or accomplishments.

Nearly eight in ten said they feel pressured to exaggerate their qualifications just to get noticed.

For job seekers, the flaws are tangible: Some noted that correctly spelling the name of a software tool could hurt their chances if the AI system expects a common typo. Others said the platforms fail to recognize transferable skills, a serious limitation in a field where adaptability is often critical.
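To make that failure mode concrete, here is a minimal sketch of the kind of naive exact-match keyword filter respondents describe. The keyword list, resume text, and scoring below are illustrative assumptions, not any actual vendor’s system:

```python
# Illustrative sketch of a naive exact-match resume screen (hypothetical,
# not any specific vendor's system). Skills written as synonyms or
# abbreviations simply fail to match the recruiter's keyword list.
REQUIRED_KEYWORDS = {"kubernetes", "postgres", "ci/cd"}  # assumed job-spec terms

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords found by literal substring match."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

resume = "Ran K8s clusters, managed PostgreSQL, built CI and CD pipelines."
print(keyword_score(resume))  # ≈ 0.33: all three skills present, only one keyword matches
```

A candidate with all three skills scores one out of three here, because “K8s” and “CI and CD pipelines” never literally contain the expected keywords; that is the gap resume tailoring is meant to close.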

Jonathan Kestenbaum, managing director for tech strategy and partners at AMS, says to pass AI screening systems, IT professionals should tailor their resumes to include context and keyword-rich language.

“They should also incorporate descriptions of their technical achievements that showcase the candidate’s ability to harness the power of AI and machine learning,” he adds.

Fadl Al Tarzi, CEO and cofounder of Nexford University, says if a resume isn’t optimized with the right language, it may never make it past the first filter.

“That means listing ‘AI’ isn’t enough. You need to show how you’ve used it,” he says.

He suggests translating technical projects into real-world outcomes powered by AI.

“Done well, you’re not just checking a box for an algorithm—you’re making the case to the human being behind the screen,” Al Tarzi says.

Kestenbaum says senior technologists, with decades of domain knowledge, can still demonstrate value in a hiring process that seems to increasingly reward those who “game the system” rather than demonstrate proven expertise.

“Seasoned IT pros can showcase their measurable outcomes and align their deep domain knowledge beyond surface-level metrics in the hiring process,” he explains.

By showing how they have adapted to and harnessed AI-driven analytics and machine learning, they can demonstrate how they have served as force multipliers, using their full skill set to build scalable systems, increase efficiency or reduce cost.

Meanwhile, trust in the process is faltering. Most of those surveyed worried that no human ever sees their application, while others expressed concern that algorithmic bias is reinforcing existing inequities in the workforce.

Kestenbaum says talent acquisition teams must leverage AI ethically in their recruiting and hiring processes.

“While AI digests a huge amount of information to drive efficiency, talent leaders still play a central role in building relationships and making final hiring decisions, ensuring the human touch is at the center of talent acquisition and management,” he says.

Although AI dramatically drives efficiency in talent acquisition and management by streamlining processes, enhancing decision-making, and improving overall outcomes, the new technology makes it even more important to recognize the soft skills that propel innovation in the IT sector.

“Filtering out diverse or unconventional candidates in the hiring process can lead to setbacks on innovation as well as lead to IT organizations missing out on value-adding perspectives that can drive growth and efficiency,” Kestenbaum says.

He recommends HR leaders leverage AI but recognize AI’s limitations – whether it’s bias or outdated data – and the importance of putting human and AI collaboration at the organization’s core.

Al Tarzi cautions that when hiring systems reduce candidates to keyword matches, the result isn’t efficiency, it’s missed potential.

“Tech teams that reward compliance over creativity will end up with echo chambers rather than breakthroughs,” he says.

From his perspective, bias in hiring isn’t theoretical—it’s already built into the signals managers reward.

“AI-driven hiring tools trained on preferences from universities with AI-integrated coursework will amplify the problem,” he says.

This advantaging of “AI-forward” universities while overlooking self-taught professionals and career switchers is a structural cause of bias in hiring.

“Bias is shifting from demographics to access, and it’s no less damaging,” Al Tarzi says. “Tech leaders need to spot this and act.”

Experiences varied across demographics: early-career and highly experienced candidates reported the highest levels of distrust, women were more likely than men to reshape resumes for AI filters, and mid-career professionals expressed slightly greater tolerance of the systems.

The consequences extend beyond frustration. Three in ten respondents said they are considering leaving the industry altogether, citing the hiring process as a major factor. Many described their experiences with AI-driven screening as “hopeless” and “dehumanizing.”

For employers, the findings highlight a risk that talent pipelines will shrink as disillusioned professionals disengage or depart. At the same time, organizations may be missing out on candidates whose skills and creativity don’t align neatly with algorithmic models.

“If we misuse AI in hiring, we risk splitting the workforce—those who game the system and those who give up on it,” Al Tarzi says. “That divide weakens not just careers, but the long-term health of the industry.”

He says if AI-driven systems feel restrictive or unfair, great candidates will lose trust in the process.

“The consequences are lower retention, weaker diversity, and diminished confidence in the industry itself,” he warns.



AI engineers are being deployed as consultants and getting paid $900 per hour

AI engineers are being paid a premium to work as consultants to help large companies troubleshoot, adopt, and integrate AI with enterprise data—something traditional consultants may not be able to do.

PromptQL, an enterprise AI platform created by San Francisco-based developer tooling company Hasura, bills its engineers out at $900 per hour to build and deploy AI agents that analyze internal company data using large language models (LLMs).

The price point reflects the “intuition” and technical skills needed to keep pace with a rapidly changing technology, Tanmai Gopal, PromptQL’s cofounder and CEO, told Fortune.

Gopal said the company’s hourly rate for AI engineers working as consultants is “aligned with the going rate that you would see for AI engineers,” but that “it feels like we should be increasing that price even more,” as customers aren’t pushing back on the price PromptQL sets.

“MBA types… are very strategic thinkers, and they’re smart people, but they don’t have an intuition for what AI can do,” Gopal said.

Gopal declined to disclose any customers that have used PromptQL to integrate AI into their businesses, but says the list includes “the largest networking company” as well as top fast food, e-commerce, grocery and food delivery tech companies, and “one of the largest B2B companies.”

Oana Iordăchescu, founder of Deep Tech Recruitment, a boutique agency focused on AI, quantum, and frontier tech talent, told Fortune that enterprises and startups are competing for senior AI engineers at “unprecedented rates,” which is leading to wage inflation.

Iordăchescu said the wages are priced “far above even Big Four consulting partners,” who often make around $400 to $600 per hour.

“Traditional management consultants can design AI strategies, but most lack the hands-on technical expertise to debug models, build pipelines, or integrate systems into legacy infrastructure,” Iordăchescu said. “AI engineers working as consultants bridge that gap. They don’t just advise, they execute.”

AI consultant Rob Howard told Fortune he wasn’t surprised at “mind-blowing numbers” like a $900-per-hour wage for AI consulting work, as he’s seen a price premium on projects that have an AI component while companies rush to adopt it into their businesses.

Howard, who is also the CEO of Innovating with AI, a program that teaches people to become AI consultants in their own right, said some of his students have sold AI trainings or two-day boot camps that net out to $400 or $500 per hour.

“The pricing for this is high in general across the market, because it’s in demand and new and relatively rare to find, you know, people who are qualified to do it,” Howard said.

A recent report published by MIT’s NANDA initiative revealed that while generative AI holds promise for enterprises, 95% of pilots intended to drive rapid revenue growth failed. Aditya Challapally, the lead author of the report and a research contributor to project NANDA at MIT, previously told Fortune the pilot failures were due not to the quality of the AI models but to the “learning gap” for both tools and organizations.

“Some large companies’ pilots and younger startups are really excelling with generative AI,” Challapally told Fortune earlier this month. Startups led by 19- or 20-year-olds, for example, “have seen revenues jump from zero to $20 million in a year,” he said. 

“It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools,” he added.

Jim Johnson, an AI consulting executive at AnswerRocket, told Fortune the $900-per-hour rate “makes perfect sense” when considering companies have spent two years experimenting with AI and “have little to show for it.”

“Now the pressure’s on to demonstrate real progress, and they’re discovering there’s no easy button for enterprise AI,” Johnson said. “This premium won’t last forever, but right now companies are essentially buying insurance against joining that 95% failure statistic.”

Gopal said PromptQL’s business model to have AI engineers serve as both consultants and forward deployed engineers (FDEs)—hybrid sales and engineering jobs tasked with integrating AI solutions—is what makes their employees so valuable.

This new wave of AI engineer consultants is shaking up the consulting industry, Gopal said. But he sees his company as helping shift traditional consulting partnership expectations and culture. 

“The demand is there,” he said. “I think what makes it hard is that leaders, especially in some of the established companies… are kind of more used to the traditional style of consultants.” 

Gopal said the challenge for his company will be to “drive that leadership and education, and saying, ‘Folks, there is a new way of doing things.’”





ChatGPT causes outrage after it refuses to do one task for users

As the days go by, ChatGPT is becoming more and more of a useful tool for humans.

Whether it be asking for information on a subject, help in drafting an email, or even opinions on fashion choices, AI is becoming an essential part of some people’s lifestyles in 2025.

People are having genuine conversations with programmes like ChatGPT for advice on life situations, and while it will answer almost anything you ask, there’s one request it refuses, for some bizarre reason.

The popular AI chatbot’s capabilities seem almost endless, but this newly discovered barrier has baffled people on social media, who don’t understand why it says no to this one request.

To nobody’s surprise, this confusion online has stemmed from a viral TikTok video.


All the user did was demand that ChatGPT count to a million – but how did the chatbot respond?

“I know you just want that counting, but the truth is counting all the way to a million would literally take days,” it replied.

While he kept insisting, the bot kept turning the request down, with the voice saying that it ‘isn’t really practical’, ‘even for me’.
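For what it’s worth, the bot’s estimate holds up to a back-of-envelope check. Assuming a steady pace of one number per second (our assumption for illustration, nothing stated in the video), the arithmetic looks like this:

```python
# Back-of-envelope check on the "would literally take days" claim,
# assuming a (hypothetical) steady pace of one number spoken per second.
numbers = 1_000_000
seconds_per_number = 1                        # assumed pace
days = numbers * seconds_per_number / 86_400  # 86,400 seconds in a day
print(f"{days:.1f} days")                     # prints "11.6 days" of nonstop counting
```

And one number per second is generous: values in the hundreds of thousands take several seconds to say aloud, so counting in real time would stretch closer to weeks.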

The hilarious exchange included the bot saying that it’s ‘not really possible’ either, saying that it simply won’t be able to carry the prompt out for him.

Replies included the bot repeatedly saying that it ‘understood’ and ‘heard’ what he was saying, but his frustrations grew as the clip went on.

Many have now questioned why this might be the case, as one wrote in response to the user’s anger: “I don’t even use ChatGPT and I’ll say this is a win for them. AIs should not be enablers of abusive behaviour in their users.”

Another posted: “So AI does have limits?! Or maybe it’s just going through a rough day at the office. Too many GenZ are asking about Excel and saving Word documents.”

A third claimed: “I think it’s good that AI can identify and ignore stupid time-sink requests that serve no purpose.”

ChatGPT will not count to a million (Samuel Boivin/NurPhoto via Getty Images)

Others joked that the man would be first to be targeted in an AI-uprising, while some suggested that the programme might need a higher subscription plan to count to a million.

“What it really wanted to say is the amount of time you require is higher than your subscription,” a different user said.

As long as you don’t ask the bot to count to ridiculous numbers, it looks like it can help with more or less anything else.




AI is introducing new risks in biotechnology. It can undermine trust in science

The bioeconomy is entering a defining moment. Advances in biotechnology, artificial intelligence (AI) and global collaboration are opening new frontiers in health, agriculture and climate solutions. Within reach are safe and effective vaccines and therapeutics, developed within days of a new outbreak, precision diagnostics that can be deployed anywhere and bio-based materials that replace fossil fuels.

But alongside these breakthroughs lies a challenge: the very tools that accelerate discovery can also introduce new risks of accidental release or deliberate misuse of biological agents, technologies and knowledge. Left unchecked, these risks could undermine trust in science and slow progress at a time when the world most needs solutions.

The question is not whether biotechnology will reshape our societies: it already is. The question is whether we can build a bioeconomy that is responsibly safeguarded, inclusive and resilient.

The promise and the risk

AI is transforming biotechnology at a remarkable speed. Machine learning models and biological design tools can identify promising vaccine candidates, design novel therapeutic molecules and optimize clinical trials, regulatory submissions and manufacturing processes – all in a fraction of the time it once took. These advances are essential for achieving ambitious goals such as the 100 Days Mission, the effort, enabled by AI-driven tools and technologies, to compress vaccine development so that vaccines are ready within 100 days of a future pandemic emerging.

The stakes extend beyond security. Without equitable access to AI-driven tools, low- and middle-income countries risk falling behind in innovation and preparedness. Without distributed infrastructure, inclusive training datasets, skilled personnel and role models, the benefits of the bioeconomy could remain concentrated in a few regions, perpetuating inequities in health security, technological opportunity and scientific progress.

Building a culture of responsibility

Technology alone cannot solve these challenges. What is required is a culture of responsibility embedded across the entire innovation ecosystem, from scientists and startups to policymakers, funders and publishers.

This culture is beginning to take shape. Some research institutions are integrating biosecurity into operational planning and training. Community-led initiatives are emerging to embed biosafety and biosecurity awareness into everyday laboratory practices. International bodies are responding as well: in 2024, the World Health Organization adopted a resolution to strengthen laboratory biological risk management, underscoring the importance of safe and secure practices amid rapid scientific progress.

The Global South is leading the way in practice. Rwanda, for instance, responded rapidly to a Marburg virus outbreak in 2024 by integrating biosecurity into national health security strategies and collaborating with global partners. Such examples demonstrate that, with political will and the right systems in place, emerging innovation ecosystems can play leadership roles in protecting communities and enabling safe participation in the global bioeconomy.

Why inclusion and equity matter

Safeguarding the bioeconomy is not only about biosecurity; it is also about inclusion. If only a handful of countries shape the rules, control the infrastructure and train the talent, innovation will remain unevenly distributed and risks will multiply.

That is why expanding AI and biotechnology capacity globally is so urgent. Distributed cloud infrastructure, diverse training datasets and inclusive training programmes can help ensure that all regions are equipped to participate. Diverse perspectives from scientists, regulators and civil society, across the Global South and Global North, are essential to evaluating risks and identifying solutions that are fair, secure and effective.

Equity is also a matter of resilience. A pandemic that spreads quickly will not wait for producer countries to supply vaccines and treatments. A bioeconomy that works for all must empower all to respond.

The way forward

The World Economic Forum, alongside partners such as CEPI and IBBIS, continues to bring together leaders from science, industry and civil society to mobilize collective action on these issues. At this year’s BIO convention, for example, a group of senior health and biosecurity leaders from industry and civil society met to discuss the foundational importance of biosecurity and biosafety for life science, to future-proof preparedness and innovation ecosystems for tomorrow’s global bioeconomy and to achieve the 100 Days Mission.

The bioeconomy stands at a crossroads. On one path, innovation accelerates solutions to humanity’s greatest challenges: pandemics, climate change and food security. On the other path, the same innovations, unmanaged, could deepen inequities and expose society to new vulnerabilities.

The choice is ours. By embedding responsibility, biosecurity and inclusive governance into today’s breakthroughs, we can secure the foundation of tomorrow’s bioeconomy.

But responsibility cannot rest with a few institutions alone. Building a secure and equitable bioeconomy requires a shared commitment across regions, sectors and disciplines.

The bioeconomy’s potential is immense. Realizing it safely will depend on the choices made now: choices that determine not just how we innovate, but how we safeguard humanity’s future.

This article is republished from World Economic Forum under a Creative Commons license. Read the original article.


