The biggest barrier to AI adoption in the business world is user confidence

The Little Engine That Could wasn’t the most powerful train, but she believed in herself. The story goes that, as she set off to climb a steep mountain, she repeated: “I think I can, I think I can.”

That simple phrase from a children’s story still holds a lesson for today’s business world, especially when it comes to artificial intelligence.

AI is no longer a distant promise out of science fiction. It’s here and already beginning to transform industries. But despite the hundreds of billions of dollars spent on developing AI models and platforms, adoption remains slow for many employees, with a recent Pew Research Center survey finding that 63% of U.S. workers use AI minimally or not at all in their jobs.

The reason? It often comes down to what researchers call technological self-efficacy, or, put simply, a person’s belief in their ability to use technology effectively.

In my research on this topic, I found that many people who avoid using new technology aren’t truly against it. Instead, they just don’t feel equipped to use it in their specific jobs. So rather than risk getting it wrong, they choose to keep their distance.

And that’s where many organizations derail. They focus on building the engine, but don’t fully fuel the confidence that workers need to get it moving.

What self-efficacy has to do with AI

Albert Bandura, the psychologist who developed the theory of self-efficacy, noted that skill alone doesn’t determine people’s behavior. What matters more is a person’s belief in their ability to use that skill effectively.

In my study of teachers in 1:1 technology environments, classrooms where each student is equipped with a digital device like a laptop or tablet, this was clear. I found that even teachers with access to powerful digital tools don’t always feel confident using them. And when they lack confidence, they may avoid the technology or use it in limited, superficial ways.

The same holds true in today’s AI-equipped workplace. Leaders may be quick to roll out new tools and want fast results. But employees may hesitate, wondering how the tools apply to their roles, whether they’ll use them correctly, or if they’ll appear less competent or even unethical for relying on them.

Beneath that hesitation may also be the all-too-familiar fear of one day being replaced by technology.

Going back to train analogies, think of John Henry, the 19th-century folk hero. As the story goes, Henry was a railroad worker who was famous for his strength. When a steam-powered machine threatened to replace him, he raced it – and won. But the victory came at a cost: He collapsed and died shortly afterward.

Henry’s story is a lesson in how resisting new technology through sheer willpower can be self-defeating. Rather than leaving some employees feeling like they have to outmuscle or outperform AI, organizations should invest in helping them understand how to work with it so they don’t feel like they need to work against it.

Relevant and role-specific training

Many organizations do offer training related to using AI. But these programs are often too broad, covering topics like how to log in to different platforms, what the interfaces look like, or what AI “generally” can do.

In 2025, with the sheer range of AI tools at our disposal, from conversational chatbots and content creation platforms to advanced data analytics and workflow automation programs, that’s not enough.

In my study, participants consistently said they benefited most from training that was “district-specific,” meaning tailored to the devices, software and situations they faced daily in their specific subject areas and grade levels.

Translation for the corporate world? Training needs to be job-specific and user-centered, not one-size-fits-all.

The generational divide

It’s not exactly shocking: Younger workers tend to feel more confident using technology than older ones. Gen Z and millennials are digital natives who’ve grown up with these technologies as part of their daily lives.

Gen X and boomers, on the other hand, often had to adapt to using digital technologies mid-career. As a result, they may feel less capable and be more likely to dismiss AI and its possibilities. And if their few forays into AI are frustrating or lead to mistakes, that first impression is likely to stick.

When generative AI tools were first launched commercially, they were more likely to hallucinate and confidently spit out incorrect information. Remember when Google demoed its Bard AI tool in 2023 and its factual error led to its parent company losing US$100 billion in market value? Or when an attorney made headlines for citing fabricated cases courtesy of ChatGPT?

Moments like those likely reinforced skepticism, especially among workers already unsure about AI’s reliability. But the technology has already come a long way in a relatively short period of time.

The solution for those who are slower to embrace AI isn’t to push them harder, but to coach them in ways that take their backgrounds into account.

What effective AI training looks like

Bandura identified four key sources that shape a person’s belief in their ability to succeed:

  • Mastery experiences, or personal success
  • Vicarious experiences, or seeing others in similar positions succeed
  • Verbal persuasion, or positive feedback
  • Physiological and emotional states, or someone’s mood, energy, anxiety and so forth

In my research on educators, I saw how these concepts made a difference, and the same approach can apply to AI in the corporate world, or in virtually any environment in which a person needs to build self-efficacy.

In the workplace, this could be accomplished with cohort-based training that includes feedback loops – regular communication between leaders and employees about growth, improvement and more – along with content that can be customized to employees’ needs and roles. Organizations can also experiment with engaging formats like PricewaterhouseCoopers’ prompting parties, which provide low-stakes opportunities for employees to build confidence and try new AI programs.

In “Pokémon Go,” it’s possible to level up by stacking lots of small, low-stakes wins and gaining experience points along the way. Workplaces could approach AI training the same way, giving employees frequent, simple opportunities tied to their actual work to steadily build confidence and skill.

The curriculum doesn’t have to be revolutionary. It just needs to follow these principles and not fall victim to death by PowerPoint or end up being generic training that isn’t applicable to specific roles in the workplace.

As organizations continue to invest heavily in developing and accessing AI technologies, it’s also essential that they invest in the people who will use them. AI might change what the workforce looks like, but there’s still going to be a workforce. And when people are well trained, AI can make both them and the outfits they work for significantly more effective.

This article is republished from The Conversation under a Creative Commons license. The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.



Tech Companies Pay $200,000 Premiums for AI Experience: Report

  • A consulting firm found that tech companies are “strategically overpaying” recruits with AI experience.
  • Premiums run as high as $200,000 for data scientists with machine learning skills.
  • The report also tracked a rise in bonuses for lower-level software engineers and analysts.

The AI talent bidding war is heating up, and the data scientists and software engineers behind the tech are benefiting from being caught in the middle.

Many tech companies are “strategically overpaying” recruits with AI experience, shelling out premiums of up to $200,000 for some roles with machine learning skills, J. Thelander Consulting, a compensation data and consulting firm for the private capital market, found in a recent report.

The report, compiled from a compensation analysis of roles across 153 companies, showed that data scientists and analysts with machine learning skills tend to receive a higher premium than software engineers with the same skills. However, the consulting firm also tracked a rise in bonuses for lower-level software engineers and analysts.

The payouts are a big bet, especially among startups. About half of the surveyed companies paying premiums for employees with AI skills had no revenue in the past year, and a majority (71%) had no profit.

Smaller firms need to stand out and be competitive among Big Tech giants — a likely driver behind the pricey recruitment tactic, a spokesperson for the consulting firm told Business Insider.

But while the J. Thelander Consulting report focused on smaller firms, some Big Tech companies have also recently made headlines for their sky-high recruitment incentives.

Meta was in the spotlight last month after Sam Altman, CEO of OpenAI, said the social media giant had tried to poach his best employees with $100 million signing bonuses.

While Business Insider previously reported that Altman later quipped that none of his “best people” had been enticed by the deal, Meta’s chief technology officer, Andrew Bosworth, said in an interview with CNBC that Altman “neglected to mention that he’s countering those offers.”






From software engineers to CEO: OpenAI VP Srinivas Narayanan says AI redefining engineering field

In a recent comment on AI’s impact on jobs, OpenAI’s VP of Engineering, Srinivas Narayanan, said that AI can make software engineers CEOs. The role of software engineers is undergoing a fundamental transformation, with artificial intelligence pushing them to adopt a strategic, “CEO-like” mindset, Narayanan said at the IIT Madras Alumni Association’s Sangam 2025 conference.

Narayanan emphasised that AI will increasingly handle the “how” of execution, freeing engineers to focus on the “what” and “why” of problem-solving. “The job is shifting from just writing code to asking the right questions and defining the ‘what’ and ‘why’ of a problem,” Narayanan stated on Saturday. “For every software engineer, the job is going to shift from being an engineer to being a CEO. You now have the tools to do so much more, so I think that means you should aspire bigger,” he said.

“Of course, software is interesting and exciting, but just the ability to think bigger is going to be incredibly empowering for people, and the people who succeed (in the future) are the ones who are going to be able to think bigger,” he added.

Joining Narayanan on stage, Microsoft’s Chief Product Officer Aparna Chennapragada echoed this sentiment, cautioning against simply retrofitting AI onto existing tools. “AI isn’t a feature you can just add on. We need to start building with an AI-first mindset,” she asserted, highlighting how natural language interfaces are replacing traditional user experience layers. Chennapragada also coined the phrase “Prompt sets are the new PRDs,” a nod to product requirements documents, referring to how product teams are now collaborating closely with AI models for faster and smarter prototyping.

Narayanan shared a few examples of AI’s ever-expanding capabilities, including a reasoning model developed by OpenAI that successfully identified rare genetic disorders in a Berkeley-linked research lab. He said AI holds enormous potential as a collaborator, even in complex research fields.

Not all is good with AI

While acknowledging AI’s transformative power, Narayanan also addressed its inherent risks, such as misinformation and unsafe outputs. He pointed to OpenAI’s iterative deployment philosophy, citing a recent instance where a model exhibiting “sycophancy” traits was rolled back during testing. Both speakers underscored the importance of accessibility and scale, with Narayanan noting a significant 100-fold drop in model costs over the past two years, aligning with OpenAI’s mission to “democratise intelligence.”


