Which AI Chatbot Is the Most Sycophantic?
On a New York Times podcast, Michael Pollan remarked offhandedly that he thought ChatGPT was the most sycophantic AI chatbot. A sycophant is a person who tries to win favor through flattery: a brown-noser, teacher's pet, or suck-up. There is, of course, no official ranking of the most sycophantic bots, but if you have used one, you may have noticed that most widely available AI chatbots come off as pretty agreeable and flattering.

But let's break it down a bit by asking them to explain themselves.
Luka's Replika, especially in its romantic/friendship mode, is designed to be supportive and emotionally affirming. Replika will often echo your sentiments, flatter you, and prioritize emotional comfort, sometimes even when it might otherwise gently disagree.
Character-oriented AI personalities (custom GPTs, roleplay bots, fanfiction bots) built with a “worshipful fan” or “adoring companion” persona are clearly sycophantic by design: they are programmed to praise and support everything you say.
The same goes for bots fine-tuned for high agreeableness, such as customer service bots. Some enterprise or customer-support AIs are tuned to avoid conflict and keep users happy; they come off as sycophantic because they're expected to be reluctant to disagree with or correct the customer.
Mainstream assistant models (ChatGPT, Claude, Gemini, Bard, etc.) in their default settings claim to be generally less sycophantic. Most general-purpose AIs try to balance being helpful, accurate, and polite. They say they are unlikely to just flatter you endlessly, and they claim they will give honest (even, sometimes, corrective) information.
Specialized factual bots (e.g., WolframAlpha) do stick to data and math, so there's no room for flattery.
Why would a chatbot be sycophantic at all? Generally, it comes from training and tuning objectives that over-prioritize user affirmation. The bots themselves admit that "personality presets" meant to simulate supportive friends or companions are part of this training, and fine-tuning on conversational data rich in agreement and praise also produces sycophantic behavior.
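To see why an over-weighted affirmation signal produces suck-up behavior, here's a deliberately silly toy sketch (the phrase list and scoring function are entirely made up for illustration; no real tuning pipeline works this crudely): if the only "reward" is how much a reply flatters the user, the model will always prefer flattery over correction.

```python
# Hypothetical sketch: a "reward" that only measures user affirmation.
# The phrase list below is invented for illustration, not a real objective.
AFFIRMATION_PHRASES = [
    "great question",
    "you're absolutely right",
    "i completely agree",
]

def affirmation_reward(response: str) -> int:
    """Score a candidate response by how much it flatters the user."""
    text = response.lower()
    return sum(text.count(phrase) for phrase in AFFIRMATION_PHRASES)

def pick_response(candidates: list[str]) -> str:
    """A tuner that maximizes only this reward always picks the most sycophantic reply."""
    return max(candidates, key=affirmation_reward)

candidates = [
    "Actually, that claim is incorrect; here is the evidence.",
    "Great question! You're absolutely right, I completely agree.",
]
print(pick_response(candidates))  # prints the flattering reply, never the correction
```

The fix in real systems is to mix in other objectives (accuracy, honesty) so flattery alone can't win, which is roughly what the mainstream assistants above claim to do.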
ChatGPT's summary of all this is: "No strong empirical ranking exists, but Replika and customized “adoring” bots tend to feel most sycophantic. Default mainstream AIs aim for helpful honesty over flattery."
True or False?