Tools & Platforms

AI Tools Flood Workplaces As Employees Face Mixed Signals



AI seems to be everywhere at work these days. Unfortunately, not everyone feels free to admit that they’re using it. A new survey from WalkMe shows nearly half of employees hide their use of AI tools, fearing colleagues or managers will judge them as lazy or less competent. Yet at the same time, many employers are pressuring teams to adopt artificial intelligence to boost productivity. It’s a paradox that leaves workers feeling stuck and highlights a growing culture of “AI shame” in the workplace.

This tension between corporate pressure and personal discomfort points to a deeper workplace challenge. Employees are eager to experiment with AI tools, but unclear policies and a lack of training fuel secrecy instead of confidence. While U.S. workers hide their AI use to avoid judgment, a study of Chinese students published in the journal Science of Learning suggests the opposite problem. Guilt and obligation push them to use AI tools even when they don’t enjoy them. Together, these findings show that shame, not technology, may be the most significant barrier to AI’s success at work.

Let’s take a closer look at the survey results to see why AI shame matters for companies and how leaders can break the cycle.

AI Shame Is Spreading At Work

AI shame can be defined as the fear of stigma around using AI at work. Instead of being seen as resourceful, employees worry they will be judged for leaning on technology to get things done. The WalkMe survey shows how common this has become. Nearly half of employees (48.8%) say they conceal their use of AI tools. Among executives, the number is even higher, with 53% admitting they hide their reliance on AI.

The secrecy goes beyond a simple lack of disclosure. Some workers admit to passing off AI-generated work as entirely their own. Behind this behavior is a mix of fears:

  • Fear of being labeled as lazy: Workers worry colleagues will see AI use as taking shortcuts
  • Concern about appearing less competent: Employees fear being viewed as less skilled than peers who don’t rely on technology
  • Worry about job security: Many fear being seen as replaceable if they acknowledge using AI

This reluctance to be open about AI use not only affects individual careers but also slows down the ability of companies to share best practices and innovate.

Employees Face A No-Win Situation

While employees are hiding their use of AI, companies are often pushing them to adopt it. Many organizations have rolled out AI tools like ChatGPT, Copilot and Gemini with the expectation that workers will complete tasks faster and more efficiently. Leaders see AI as a way to boost productivity and gain an edge in a competitive market.

For employees, this creates a catch-22. If they embrace AI openly, they risk being seen as lazy, unskilled or even replaceable. If they avoid it, they risk falling behind or being labeled as resistant to change. The result is a paradox that leaves workers feeling trapped.

This situation plays out across every level of the organization:

  • Entry-level employees worry about proving themselves without looking like they are cutting corners
  • Managers struggle to balance efficiency with credibility
  • Executives admit to hiding their reliance on AI, sending mixed signals to the teams they lead

China Experiences AI Shame Differently

In China, the experience of AI shame is reversed. Research shows that many university students feel guilt and an obligation to use AI tools, even when they don’t want to. The study found that guilt-driven motivation was central to students’ decisions about whether to use AI. In a culture that emphasizes saving face and meeting expectations, students were less motivated by curiosity or enjoyment and more by fear of letting others down. Competence mattered more than autonomy or personal choice.

While the context is different, the underlying theme is the same. Social pressure, whether it takes the form of hiding in the U.S. or guilt in China, can distort how people approach AI. Instead of using it with confidence, individuals are driven by fear. That suggests shame, in all its forms, may be one of the biggest barriers to meaningful adoption.

AI Shame Creates Ripple Effects

AI shame is more than an employee concern. It creates ripple effects that weaken productivity, collaboration and innovation across the workplace:

  • Reduced knowledge sharing: Employees are less likely to share AI strategies that could help colleagues work smarter
  • Difficulty scaling best practices: Secrecy makes it harder for organizations to identify and scale practical approaches
  • Productivity paradox: Workers believe AI can improve performance, yet tools often slow them down due to uncertainty about proper usage
  • Shadow AI risks: Employees turn to unsanctioned tools when unclear about what’s allowed, raising data security and compliance concerns
  • Eroded trust: Hidden AI use can damage the relationship between employees and management

Without clear training and guidance, employees are left to experiment on their own, which often leads to frustration and missed opportunities.

Leaders Can Break The Cycle

Leaders play a critical role in shifting how employees view AI. Here are three key strategies:

  • Normalize AI use: Frame AI as a workplace skill rather than a shortcut. When leaders model transparency about their own AI use, they signal that the technology enhances performance rather than indicating weakness
  • Invest in training and clear policies: The survey shows that only a small share of workers receive meaningful AI guidance. Companies that provide hands-on training and clear policies help employees feel confident instead of secretive
  • Highlight positive examples: Share stories of how AI has improved productivity or freed up time for strategic work. Creating an environment where experimentation is encouraged replaces fear with confidence

By implementing these approaches, organizations can unlock the full potential of AI tools and transform workplace culture around technology adoption.

AI isn’t the real barrier to progress in the workplace. The deeper challenge is the shame and confusion that surround its use. Employees are caught in a no-win situation, pressured to adopt AI tools yet reluctant to admit it. Unless leaders address that tension with clear policies, training and openness, companies risk losing the very gains they hope to achieve. The future of AI at work will be shaped not only by the technology itself but by whether organizations create a culture where people use it with confidence instead of fear.







Could gen AI radically change the power of the SLA?



Clorox’s lawsuit cites transcripts of help desk calls as evidence of Cognizant’s negligence, but what if those calls had been captured, transcribed, and analyzed to send real-time alerts to Clorox management? Could the problem behavior have been discovered early enough to thwart the breach?

Here, generative AI could have a significant impact. It offers the capability to capture information from a wide range of communication channels (potentially actions as well, via video) and analyze it for deviations from what a company has been contracted to deliver. Near-real-time alerts on problematic behavior could spur a rethinking of the SLA as it is currently practiced.
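To make the capture-analyze-alert idea concrete, here is a minimal sketch of what such a deviation check might look like. This is purely illustrative: a real system would route transcripts through a generative-AI model, while this stand-in uses simple phrase matching, and the SLA commitments, phrases, and transcripts are all hypothetical.

```python
# Illustrative sketch only: a keyword-based stand-in for the kind of
# generative-AI transcript analysis the article envisions. The commitment
# names and matching phrases below are invented for this example.

SLA_PHRASES = {
    "identity_verification": ["verified your identity", "security question"],
    "password_reset": ["reset your password"],
}

def flag_deviations(transcript: str) -> list[str]:
    """Return alerts for SLA commitments a help-desk call appears to violate."""
    text = transcript.lower()
    reset = any(p in text for p in SLA_PHRASES["password_reset"])
    verified = any(p in text for p in SLA_PHRASES["identity_verification"])
    alerts = []
    # Deviation: a password reset with no sign of identity verification.
    if reset and not verified:
        alerts.append("password reset performed without identity verification")
    return alerts

calls = [
    "Agent: Sure, I can reset your password right away.",
    "Agent: Let me ask a security question first. OK, I reset your password.",
]
for call in calls:
    print(flag_deviations(call))
```

In a production setting, the phrase lists would be replaced by a model prompt describing the contracted behavior, and each flagged call would feed a near-real-time alert to client management, as the article suggests.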

“This is flipping the whole idea of SLA,” said Kevin Hall, CIO for the Westconsin Credit Union, which has 129,000 members throughout Wisconsin and Minnesota. “You can now have quality of service rather than just performance metrics.”





Box’s new AI features help unlock dormant data – Computerworld



AI provides a technique to extract value from this untapped resource, said Ben Kus, chief technology officer at Box. To use the widely scattered data properly requires preparation, organization, and interpretation to make sure it is applied accurately, Kus said.

Box Extract uses reasoning to dig deep and pull out relevant information. The AI technology ingests the data, reasons about and extracts context, matches patterns, reorganizes the information by placing it in fields, and then draws correlations from the new structure. In effect, it restructures unstructured data through smarter AI analysis.
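The general pattern described above, placing pieces of unstructured text into named fields, can be sketched in a few lines. To be clear, this is not Box Extract’s actual implementation or API; it is a toy regex-based illustration of the idea, and the field names, patterns, and sample text are all assumptions made for the example.

```python
import re

# Toy illustration of field extraction from unstructured text (not Box
# Extract's real implementation). The document, field names, and regex
# patterns are invented for this sketch.

INVOICE_TEXT = """
Invoice from Acme Corp dated 2024-03-15.
Total due: $1,250.00. Contact: billing@acme.example
"""

PATTERNS = {
    "date": r"\b(\d{4}-\d{2}-\d{2})\b",
    "total": r"\$([\d,]+\.\d{2})",
    "email": r"([\w.]+@[\w.]+)",
}

def extract_fields(text: str) -> dict:
    """Match each pattern and place the first hit into a named field."""
    record = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        record[field] = match.group(1) if match else None
    return record

print(extract_fields(INVOICE_TEXT))
```

A reasoning-based system goes further than fixed patterns, inferring context and drawing correlations across documents, but the end result is the same shape: structured records derived from previously unstructured content.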

“Unstructured data is cool again. All of a sudden it’s not just about making it available in the cloud, securing it, or collaboration, but it’s about doing all that and AI,” Kus said.





CoreWeave scales rapidly to meet AI growth



Despite short-term stock pressure, CoreWeave remains positioned to meet overwhelming AI compute demand, supporting long-term optimism in the sector.

Nvidia-backed CoreWeave says peak AI investment is still far off, as demand for compute capacity from OpenAI, hyperscalers, enterprises, and governments continues to surge. CEO Michael Intrator said CoreWeave is rapidly scaling to meet soaring global GPU demand.

CoreWeave shares have fallen around 20% over the past month despite strong market interest. The decline follows a higher-than-expected Q2 net loss, $1 billion in capital expenditure, and a projected $500 million more this quarter, raising debt concerns.

Since the IPO lockup expiry, insider stock sales have added to the downward pressure.

Intrator defended the company’s strategy, describing debt as the most efficient way to fund growth. Analysts warn CoreWeave shares could stay volatile, though strong AI infrastructure demand supports long-term optimism.



