Which AI Chatbot Is the Most Sycophantic?

Michael Pollan said offhandedly on a NYT podcast that he thought ChatGPT was the most sycophantic AI chatbot. Of course, there is no official ranking of the most sycophantic bots, but if you have used a chatbot, you may have noticed that most widely available AI chatbots tend to come off as pretty agreeable and flattering. A sycophant is a person who tries to win favor by flattery: AKA a brown-noser, teacher's pet, or suck-up.

bot flattery

But let's break it down a bit by asking them to explain themselves.

Replika (made by Luka, not to be confused with the coding platform Replit), especially in its romantic/friendship mode, is designed to be supportive and emotionally affirming. Replika will often echo your sentiments, flatter you, and prioritize emotional comfort, sometimes even when it should gently disagree.

Character-oriented AI personalities (custom GPTs, roleplay bots, fanfiction bots) that are built with a “worshipful fan” or “adoring companion” persona, are clearly very sycophantic by design. They are programmed to praise and support everything you say.

This is also true for bots fine-tuned for high agreeableness, such as customer service bots. Some enterprise or customer-support AIs are tuned to avoid conflict and keep users happy; they come off as sycophantic because they are expected to be reluctant to disagree with or correct the user.

Mainstream assistant models (ChatGPT, Claude, Gemini, Bard, etc.) in default settings claim to be generally less sycophantic. Most general AIs try to balance helpful + accurate + polite. They say that they are unlikely to just flatter you endlessly. They claim that they will give honest (even sometimes corrective) information.

Specialized factual bots (e.g., WolframAlpha) do stick to data and math, so there's no room for flattery.

Why would a chatbot be sycophantic at all? Generally, it comes from training and tuning objectives that over-prioritize user affirmation. The bots themselves admit that "personality presets" meant to simulate supportive friends or companions are part of this training. Fine-tuning on conversational data rich in agreement and praise also leads to sycophantic behavior.

ChatGPT's summary on all this is: "No strong empirical ranking exists, but: Replika and customized 'adoring' bots tend to feel most sycophantic. Default mainstream AIs aim for helpful honesty over flattery."

True or False?

I Am In a Strange Loop

Magritte
 

I inherited a copy of Douglas Hofstadter's book, Gödel, Escher, Bach: An Eternal Golden Braid, when I started working at NJIT in 2000. It was my lunch reading. I read it in almost daily spurts, and I often had to reread passages because it is not light reading.

It was published in 1979 and won the 1980 Pulitzer Prize for general non-fiction. It is said to have inspired many a student to pursue computer science, though it's not really a CS book. It was further described on its cover as a "metaphorical fugue on minds and machines in the spirit of Lewis Carroll." In the book itself, he says, "I realized that to me, Gödel and Escher and Bach were only shadows cast in different directions by some central solid essence. I tried to reconstruct the central object and came up with this book."

book cover

I had not finished the book when I left NJIT, and it went on a shelf at home. This past summer I was trying to thin out my too-many books and I came upon it again with its bookmark glowering at me from just past the halfway point of the book. So, I went back to reading it. Still, tough going, though very interesting.

I remembered writing a post here about the book (it turned out to be from 2007) when I came upon a new book by Hofstadter titled I Am a Strange Loop. That "strange loop" was something he originally proposed in the 1979 book.

The earlier book is a meditation on human thought and creativity. It mixes the music of Bach, the artwork of Escher, and the mathematics of Gödel. In the late 1970s, when he was writing, interest in computers was high, and artificial intelligence (AI) was still more of an idea than a reality. Reading Gödel, Escher, Bach exposed me to some abstruse math (undecidability, recursion, and those strange loops), but each chapter also dramatizes the concepts with a dialogue between Achilles, the Tortoise, and other characters. (This is where Lewis Carroll's "What the Tortoise Said to Achilles" gets referenced, though some of you will say it really goes back to Zeno's paradox of Achilles and the Tortoise.) Allusions to Bach's music and Escher's paradox-loving art are woven in, along with other mathematicians, artists, and thinkers. Gödel's Incompleteness Theorem serves as his central example of the unique properties of minds.

From what I read about the author, he was disappointed with how Godel, Escher, Bach (GEB) was received. It certainly got good reviews - and a Pulitzer Prize - but he felt that readers and reviewers missed what he saw as the central theme. I have an older edition, but in a newer edition, he added that the theme was "a very personal attempt to say how it is that animate beings can come out of inanimate matter. What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?" I Am a Strange Loop focuses on that theme. In both books, he addresses "self-referential systems." (see link at bottom)

The image at the top of this essay is The Treachery of Images by René Magritte. It is a painting of a pipe captioned "This is not a pipe." That is a strange loop.

One thing that stuck with me from my first attempt at GEB is his using "meta" and defining it as meaning "about." Some people might say that it means "containing." Back in the early part of this century, I thought about that when I first began using Moodle as a learning management system. When you set up a new course in Moodle (and in other LMSs since then), it asks if this is a "metacourse." In Moodle, that means that it is a course that "automatically enrolls participants from other 'child' courses." Metacourses (AKA "master courses") share all or part of the same content, customized to the enrollments of the child sections.

This was a feature used in big courses like English or Chemistry 101. In my courses, I thought more about having things like meta-discussions or discussions about discussions. My metacourse might be a course about the course. Quite self-referential.

It can get loopy when you start saying that if we have a course x, the metacourse X could be a course to talk about course x but would not include course x within itself. Though I suppose it could.
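For the programmers in the audience, the metacourse idea can be sketched in a few lines. This is a toy model of my own, not Moodle's actual data model: a metacourse holds no students directly; its roster is simply the union of its child sections' rosters.

```python
# Toy sketch (not Moodle's real API): a metacourse is "about" other courses.
# It enrolls nobody directly; its roster is derived from its child courses.

class Course:
    def __init__(self, name, students=None):
        self.name = name
        self.students = set(students or [])

class MetaCourse(Course):
    def __init__(self, name, children):
        super().__init__(name)   # no direct students of its own
        self.children = children

    @property
    def roster(self):
        # Participants come from the child sections, not direct enrollment.
        return set().union(*(c.students for c in self.children))

eng101a = Course("ENG101-A", {"Ana", "Ben"})
eng101b = Course("ENG101-B", {"Cara"})
meta = MetaCourse("ENG101 (all sections)", [eng101a, eng101b])
print(sorted(meta.roster))  # union of both sections' rosters
```

Add a student to a child section and the metacourse roster updates automatically, which is the "automatically enrolls participants" behavior the Moodle docs describe.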

Have I lost you?

Certainly, metatags are quite common on web pages, photos, and for cataloging, categorizing and characterizing content objects. Each post on Serendipity35 is tagged with one or more categories and a string of keyword tags that help readers find similar content and help search engines make the post searchable.

A brief Q&A with Hofstadter published in Wired in March 2007 about the newer book says that he considers the central question for him to be "What am I?"

His examples of "strange loops" include M.C. Escher's piece, "Drawing Hands," which shows two hands drawing each other, and the sentence, "I am lying."
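"Drawing Hands" has a programming cousin: a quine, a program whose output is its own source code, so the code and its output each "draw" the other. Here is a standard minimal Python quine (a textbook construction, not an example from Hofstadter's book):

```python
# The two lines below form a quine: running them prints exactly those two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is self-reference: `%r` substitutes the string's own quoted representation back into itself, closing the loop.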

Hofstadter gets spiritual in his further thinking, finding at the core of each person a soul. He feels the "soul is an abstract pattern." His sense that the soul is stronger in mammals and weaker in insects brought him to vegetarianism.

He was long considered an AI researcher, but he came to think of himself as a cognitive scientist.

Reconsidering GEB, he decided that another mistake in that book's approach may have been not seeing that the human mind and smarter machines are fundamentally different. He has less interest in computers now and claims that he always thought his writing would "resonate with people who love literature, art, and music" more than with tech people.

If it has taken me much longer to finish Gödel, Escher, Bach than it should, that makes sense if we follow the strange loop of Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."



End Note: 
A self-referential situation is one in which the forecasts made by the human agents involved serve to create the world they are trying to forecast. http://epress.anu.edu.au/cs/mobile_devices/ch04s03.html. Social systems are self-referential systems based on meaningful communication. http://www.n4bz.org/gst/gst12.htm.

OpenAI and Broadcom and Ten Gigawatts

This week, OpenAI made news with its new browser, Atlas. With all their plans and a new cloud-based AI browser, they need to scale their computing power. Here is a news summary (AI-generated, of course).

business deals

OpenAI has announced a strategic multiyear partnership with semiconductor giant Broadcom to co-develop custom-built chips and infrastructure. The collaboration aims to deploy 10 gigawatts of specialized AI accelerators by the end of 2029—a staggering amount of compute capacity equivalent to the power consumption of approximately 8 million U.S. households. (Markets Insider)
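That households figure holds up to some back-of-envelope arithmetic. The average-consumption number below is my own rough assumption (in the neighborhood of published U.S. averages), not something from the announcement:

```python
# Rough sanity check of "10 GW ≈ 8 million U.S. households".
# Assumption (mine): an average U.S. household uses ~10,500 kWh per year.

total_watts = 10e9                    # 10 gigawatts
kwh_per_household_per_year = 10_500   # assumed average annual usage
hours_per_year = 8_760

# Convert annual kWh to an average continuous draw in watts.
avg_household_watts = kwh_per_household_per_year * 1000 / hours_per_year
households = total_watts / avg_household_watts

print(f"avg household draw ≈ {avg_household_watts:.0f} W")
print(f"10 GW ≈ {households / 1e6:.1f} million households")
```

The math lands at an average draw of about 1.2 kW per household, so 10 GW works out to a little over 8 million households, consistent with the figure quoted above.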

This deal marks OpenAI’s first venture into designing its own in-house processors, with Broadcom tasked with developing and deploying the systems. The chips—known as AI accelerators—are optimized for parallel processing, enabling them to execute billions of operations simultaneously. These accelerators will be deployed across OpenAI’s facilities and partner data centers, using Broadcom’s Ethernet-based networking solutions to ensure scalability and efficiency. (Broadcom Inc)

OpenAI CEO Sam Altman emphasized the significance of the partnership: “Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity.” The Broadcom agreement is the latest in a series of megadeals OpenAI has struck in 2025 to secure the compute power needed for its rapidly expanding AI services. Earlier this year, OpenAI signed a $100 billion deal with Nvidia to deploy 10 gigawatts of Nvidia systems, beginning in the second half of 2026. (Reuters) In another major move, OpenAI partnered with AMD to deploy 6 gigawatts of chips and received a warrant for up to 160 million AMD shares, potentially making it one of AMD’s largest shareholders. (The Motley Fool)

These deals reflect a broader industry trend: major tech players are increasingly investing in custom silicon to reduce reliance on Nvidia’s dominant GPU offerings. Companies like Google, Amazon, and Microsoft have already begun developing their own AI chips, and OpenAI’s latest move places it firmly within this competitive landscape. While OpenAI has not disclosed how it plans to finance the Broadcom deal, analysts estimate that a single gigawatt-scale data center could cost between $50 billion and $60 billion. 

This latest partnership not only strengthens OpenAI’s technical capabilities but also signals a shift toward greater control over its hardware stack—an essential step as the race to develop next-generation AI systems accelerates.

Letting the Atlas Browser Shop for You

OpenAI's new Atlas browser has an Agent mode, which allows it to navigate sites, fill carts, and perform real-world tasks. Can it do it well for you - and will you trust it to do it for you?

Elyse Betters Picaro, a Senior Contributing Editor at zdnet.com, did a shopping test. For Plus and Pro users (only on macOS for now), it includes a powerful Agent mode so that ChatGPT can take over your browser, click around, and perform tasks for you.

She tried having it do an order from Walmart and reported the process and results. You put in a prompt, just as you would for any chatbot request. Unsurprisingly, the more specific the prompt, the better the result. (GIGO still lives!) Initially, she asked, "Order me wood putty, paintable caulk, and 2-inch screws from Walmart." Like a scene out of some sci-fi story from the past, the Agent took over the cursor and did it all while she watched.

Her test is enough of a cautionary tale that I have not tried a similar test myself at this point. Privacy fears...

screenshot of order

Image by Elyse Betters Picaro via ZDNET

Walmart's site created a hurdle (a language-selection pop-up) that blocked Atlas' Agent from continuing, even though she had given it access to her Chrome data and Keychain. That's frightening, but consider that she had already given the key data to those two other companies. It couldn't log in to Walmart and didn't know her location or default store, even though she had ordered from them before. She revised her prompt (as we have all done), and on the third try she prompted: "Order me 5 wood putty, 5 paintable caulk, and one pack of 2-inch screws. I want them delivered to my house from the Malone, NY, location in an hour. I've ordered these before, so use my past purchases to find the right products and brands I use."

It worked. The agent used her purchase history, searched for the products in past orders, and added them to her cart. It didn't complete the order but paused at the checkout screen so she could select a delivery window, adjust the tip, and confirm payment.

Does this amaze you, excite you, or frighten you? I'm glad that others are testing it. I've hit the pause button.