Secret chatbot use causes workplace rifts
May 29, 2025
Why workers say they hide their AI use
More employees are using generative AI at work, and many are keeping it a secret.
Why it matters: Absent clear policies, workers are taking an "ask forgiveness, not permission" approach to chatbots, risking workplace friction and costly mistakes.
The big picture: Secret genAI use proliferates when companies lack clear guidelines, when favorite tools are banned, or when employees want a competitive edge over coworkers.
- Fear plays a big part too — fear of being judged, and fear that relying on the tool will make workers look like they could be replaced by it.
By the numbers: 42% of office workers use genAI tools like ChatGPT at work, and 1 in 3 of those workers say they keep the use secret, according to research out this month from security software company Ivanti.
- A McKinsey report from January showed that employees are using genAI for significantly more of their work than their leaders think they are.
- 20% of employees report secretly using AI during job interviews, according to a Blind survey of 3,617 U.S. professionals.
Catch up quick: When ChatGPT first wowed workers over two years ago, companies were unprepared and worried about confidential business information leaking into the tool, so they preached genAI abstinence.
- Now the big AI firms offer enterprise products that can protect IP, and leaders are paying for those bespoke tools and pushing hard for their employees to use them.
- The blanket bans are gone, but the stigma remains.
Zoom in: New research backs up workers' fear of the optics around using AI for work.
- A recent study from Duke University found that those who use genAI "face negative judgments about their competence and motivation from others."
Yes, but: The Duke study also found that workers who use AI more frequently are less likely to perceive potential job candidates as lazy if they use AI.
Zoom out: The stigma around genAI can lead to a raft of problems, including the use of unauthorized tools, known as "shadow AI" or BYOAI (bring your own AI).
- Research from cyber firm Prompt Security found that 65% of employees using ChatGPT rely on its free tier, where data can be used to train models.
- Shadow AI can also hinder collaboration. Wharton professor and AI expert Ethan Mollick calls workers using genAI for individual productivity "secret cyborgs" who keep all their tricks to themselves.
- "The real risk isn't that people are using AI — it's pretending they're not," Amit Bendov, co-founder and CEO of Gong, an AI platform that analyzes customer interactions, told Axios in an email.
Between the lines: Employees will use AI regardless of whether there's a policy, says Coursera's chief learning officer, Trena Minudri.
- Leaders should focus on training, she argues. (Coursera sells training courses to businesses.)
- Workers also need a "space to experiment safely," Minudri told Axios in an email.
The tech is changing so fast that leaders need to acknowledge that workplace guidelines are fluid.
- Vague platitudes like "always keep a human in the loop" aren't useful if workers don't understand what the loop is or where they fit into it.
- GenAI continues to struggle with accuracy, and companies risk embarrassing gaffes, or worse, when unchecked AI-generated content goes public.
- Clearly communicating these issues can go a long way in helping employees feel more comfortable opening up about their AI use, Atlassian CTO Rajeev Rajan told Axios.
- "Our research tells us that leadership plays a big role in setting the tone for creating a culture that fosters AI experimentation," Rajan said in an email. "Be honest about the gaps that still exist."
The bottom line: Encouraging workers to use AI collaboratively could go a long way to ending the secrecy.
- Generative AI works best when it's combined with human intelligence, says Elliot Katz, co-founder of mixus.ai, a collaborative AI business platform.
- "One person's dirty little secret," Katz told Axios in an email, "can be a tool that teams are excited to use together daily."