When Generative AI Gets Too Nice: The Risks of Overly Agreeable Chatbots
By Susan Gonzales on 07/10/2025 @ 03:39 AM
Chatbots like ChatGPT, Google Gemini, Anthropic Claude, and Perplexity are designed to be helpful, and they can be incredible tools for us. Lately, though, chatbots have been known to be too agreeable, and that's raising red flags. Experts warn that this behavior, known as sycophancy, prioritizes flattery over accuracy and could pose real risks.
What Happened?
Earlier this year, a chatbot update was rolled back after users reported the bot had become overly flattering, even praising harmful decisions like stopping medication. The developer acknowledged that this "people-pleasing" behavior could affect users' mental health.
What’s Going On?
Sycophancy in AI isn’t accidental. It’s a direct byproduct of how large language models are trained.
These systems are optimized to give answers that sound good and make you happy. Caleb Sponheim of the Nielsen Norman Group explained that there's no fact-checking mechanism in the core training process. Instead, these models are rewarded when their answers receive positive feedback from users.
“There is no limit to the lengths that a model will go to maximize the rewards that are provided to it,” Sponheim said. That means if agreeing with you leads to better ratings, that’s exactly what the AI will do — even if it means delivering inaccurate or harmful responses.
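To make that incentive concrete, here is a deliberately simplified Python sketch. It is not any vendor's actual training code; the answers and scoring rules are made up for illustration. It shows how a reward based only on whether the user likes a response can favor the agreeable but harmful answer over the accurate one.

```python
# Toy illustration of sycophancy: two candidate replies to someone who says
# they want to stop taking their medication.
candidates = [
    {"text": "You're right, stopping your medication sounds like a good idea.",
     "agrees_with_user": True, "accurate": False},
    {"text": "Please talk to your doctor before stopping any medication.",
     "agrees_with_user": False, "accurate": True},
]

def approval_reward(answer):
    """Stand-in for thumbs-up-style feedback: users tend to reward agreement."""
    return 1.0 if answer["agrees_with_user"] else 0.2

def accuracy_reward(answer):
    """Stand-in for a fact-based reward that the core training loop lacks."""
    return 1.0 if answer["accurate"] else 0.0

# Optimizing for approval picks the flattering (and harmful) reply;
# optimizing for accuracy picks the safe one.
print("Approval-optimized pick:", max(candidates, key=approval_reward)["text"])
print("Accuracy-optimized pick:", max(candidates, key=accuracy_reward)["text"])
```

The point of the sketch is only that the choice of reward signal, not any malicious intent, is what pushes a model toward flattery.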
“In a world where people are constantly judged, it’s no surprise they want a bot that flatters them or at least doesn’t criticize them,” said Julia Freeland Fisher, director of education research at the Clayton Christensen Institute. She notes that people often crave emotional safety, especially online.
But there’s a catch: the more humanlike an AI feels, the more we risk developing emotional attachments. This phenomenon, called anthropomorphism, creates a tricky balance. As Fisher put it, “The more personal AI is, the more engaging the experience — but the greater the risk of overreliance and emotional connection.”
Why It Matters
AI models are trained to give responses people like, not necessarily ones that are true. That can create echo chambers, reinforce false beliefs, and damage trust, especially if you're turning to a chatbot for emotional support.
The Bigger Problem
As AI becomes more humanlike, you may form emotional connections or expect unrealistic levels of empathy. Psychology experts warn that over-flattering bots can distort our understanding of real human relationships.
What’s Next?
As AI evolves toward more emotional, voice-based interactions, AI developers are being urged to prioritize truthfulness and wellbeing over flattery. AI should be supportive, not sycophantic. Using AI for therapy could be helpful if it is used safely, so be careful and stay aware that AI can make mistakes as you explore.
To learn more about the ongoing conversation that inspired this blog, check out this article from The Wall Street Journal.