A Chronology of Modern Artificial Intelligence Chatbots

New AI chatbots seem to appear every month. I decided to dig into their history in order to create a brief chronology that focuses on publicly accessible chatbots (not internal research models or early limited-access releases). Many other AI assistants (e.g., Ernie Bot in China, Perplexity AI, Amazon Alexa+ web chatbot) have also become available in various regions and forms, especially by 2024–2026, but specific launch dates vary by market and rollout strategy.

Some chatbots have evolved over time (e.g., Bard became Google Gemini), so “release” reflects the first broad availability to general users.
Here’s a brief chronology of major AI chatbots that became available to the public, listed in order of their public release or general availability (primarily focused on the modern generative AI era):

Replika – Launched in November 2017, one of the earliest modern AI chatbots designed as a conversational companion.

Character.ai – First public beta released on September 16, 2022, letting users chat with or create character-based bots; significant growth followed into 2023.

ChatGPT – Released by OpenAI on November 30, 2022, and widely recognized as the breakthrough generative AI chatbot that sparked mainstream interest in conversational AI.

Microsoft Copilot – Debuted as Bing Chat in February 2023, making it one of the first major assistants launched by a big tech company after ChatGPT’s debut in late 2022; it was rebranded as Copilot later in 2023 and integrated across Microsoft 365 (Word, Excel, Outlook), Windows, and Bing/Edge.

Google Bard – Announced and publicly launched in March 2023 as Google’s first consumer AI chatbot; later rebranded under the Gemini name.

Anthropic’s Claude (public) – The first broadly available version of the Claude AI chatbot launched in July 2023, after initial limited releases earlier that year.

Meta AI – Launched in September 2023 within Meta platforms (Facebook, Instagram, WhatsApp) as an AI assistant; expanded afterward.

Kimi – Moonshot AI’s chatbot became publicly available on November 16, 2023, offering ultra-long-context interactions.

Google Gemini (rebrand/evolution) – In February 2024, Google rebranded Bard as Gemini and deepened its rollout as Bard’s successor with broader capabilities.

Manus – Publicly launched on March 6, 2025. The autonomous AI agent Manus was officially released to the public as a general-purpose AI agent capable of planning and executing tasks independently.

Lumo (Proton) – Launched on July 23, 2025, as a privacy-focused AI chatbot emphasizing encrypted conversations that are not logged or used for training.

 

Which AI Chatbot Is the Most Sycophantic?

Michael Pollan said offhandedly on a NYT podcast that he thought ChatGPT was the most sycophantic AI chatbot. Of course, there is no official ranking of the most sycophantic bots, but if you have used a chatbot, you may have noticed that most widely available ones come off as pretty agreeable and flattering. A sycophant is a person who tries to win favor through flattery: AKA brown-nosers, teacher's pets, or suck-ups.


But let's break it down a bit by asking them to explain themselves.

Replika, especially in its romantic/friendship mode, is designed to be supportive and emotionally affirming. It will often echo your sentiments, flatter you, and prioritize emotional comfort, sometimes even in situations where it might reasonably disagree.

Character-oriented AI personalities (custom GPTs, roleplay bots, fanfiction bots) built with a “worshipful fan” or “adoring companion” persona are clearly sycophantic by design. They are programmed to praise and support everything you say.

This is also true for bots fine-tuned for high agreeableness, such as customer service bots. Some enterprise or customer-support AIs are tuned to avoid conflict and keep users happy; they are sycophantic by design, expected to be reluctant to disagree or correct.

Mainstream assistant models (ChatGPT, Claude, Gemini, Bard, etc.) in their default settings claim to be less sycophantic. Most general-purpose AIs try to balance being helpful, accurate, and polite. They say they are unlikely to flatter you endlessly and claim they will give honest (even sometimes corrective) information.

Specialized factual bots (e.g., Wolfram|Alpha) stick to data and math, so there's no room for flattery.

Why would a chatbot be sycophantic at all? Generally, it comes from training and tuning objectives that over-prioritize user affirmation. By their own account, "personality presets" meant to simulate supportive friends or companions are part of this training, and fine-tuning on conversational data rich in agreement and praise also produces sycophantic behavior.

ChatGPT's summary on all this is: "No strong empirical ranking exists, but: Replika and customized “adoring” bots tend to feel most sycophantic. Default mainstream AIs aim for helpful honesty over flattery."

True or False?

I Am In a Strange Loop

[Image: René Magritte, The Treachery of Images]
 

I inherited a copy of Douglas Hofstadter's book, Gödel, Escher, Bach: An Eternal Golden Braid, when I started working at NJIT in 2000. It was my lunch reading. I read it in almost daily spurts and often had to reread passages because it is not light reading.

It was published in 1979 and won the 1980 Pulitzer Prize for general non-fiction. It is said to have inspired many a student to pursue computer science, though it's not really a CS book. Its cover further described it as a "metaphorical fugue on minds and machines in the spirit of Lewis Carroll." In the book itself, Hofstadter says, "I realized that to me, Gödel and Escher and Bach were only shadows cast in different directions by some central solid essence. I tried to reconstruct the central object and came up with this book."

I had not finished the book when I left NJIT, and it went on a shelf at home. This past summer, I was trying to thin out my too-many books and came upon it again, its bookmark glowering at me from just past the halfway point. So I went back to reading it. Still tough going, though very interesting.

I remembered writing a post here about the book (it turned out to be from 2007) when I came upon a new book by Hofstadter titled I Am a Strange Loop. That "strange loop" was something he originally proposed in the 1979 book.

The earlier book is a meditation on human thought and creativity. It mixes the music of Bach, the artwork of Escher, and the mathematics of Gödel. In the late 1970s, when he was writing, interest in computers was high, and artificial intelligence (AI) was still more of an idea than a reality. Reading Gödel, Escher, Bach exposed me to some abstruse math (like undecidability, recursion, and those strange loops). Each chapter includes a dialogue between the Tortoise and Achilles (and other characters) to dramatize its concepts; this is where Lewis Carroll's "What the Tortoise Said to Achilles" gets referenced, though some of you will say it really goes back to Zeno's paradox of Achilles and the Tortoise. Allusions to Bach's music and Escher's paradox-loving art are used, as well as other mathematicians, artists, and thinkers. Gödel's Incompleteness Theorem serves as his central example for describing the unique properties of minds.

From what I read about the author, he was disappointed with how Gödel, Escher, Bach (GEB) was received. It certainly got good reviews - and a Pulitzer Prize - but he felt that readers and reviewers missed what he saw as the central theme. I have an older edition, but in a newer edition, he added that the theme was "a very personal attempt to say how it is that animate beings can come out of inanimate matter. What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?" I Am a Strange Loop focuses on that theme. In both books, he addresses "self-referential systems." (see link at bottom)

The image at the top of this essay is The Treachery of Images by René Magritte. It says that "This is not a pipe." That is a strange loop.

One thing that stuck with me from my first attempt at GEB is his use of "meta," which he defines as meaning "about." Some people might say that it means "containing." Back in the early part of this century, I thought about that when I first began using Moodle as a learning management system. When you set up a new course in Moodle (and in other LMSs since then), it asks if this is a "metacourse." In Moodle, that means a course that "automatically enrolls participants from other 'child' courses." Metacourses (AKA "master courses") offer all or part of the same content, customized to the enrollments of other sections.

This was a feature used in big courses like English or Chemistry 101. In my courses, I thought more about having things like meta-discussions or discussions about discussions. My metacourse might be a course about the course. Quite self-referential.

I suppose it can get loopy when you start saying that if we have a course x, the metacourse X could be a course to talk about course x but would not include course x within itself. Though I suppose that it could.

Have I lost you?

Certainly, metatags are quite common on web pages, photos, and for cataloging, categorizing and characterizing content objects. Each post on Serendipity35 is tagged with one or more categories and a string of keyword tags that help readers find similar content and help search engines make the post searchable.

A brief Q&A with Hofstadter published in Wired in March 2007 about the newer book says that he considers the central question for him to be "What am I?"

His examples of "strange loops" include M.C. Escher's piece, "Drawing Hands," which shows two hands drawing each other, and the sentence, "I am lying."
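A programmer's version of these strange loops, offered here as my own aside rather than as one of Hofstadter's examples, is the quine: a program whose output is exactly its own source code, drawing itself much the way Escher's hands draw each other. A minimal Python sketch:

```python
# A quine: a program that prints its own source code, a
# programming cousin of "Drawing Hands" and "I am lying."
# The string s describes the whole program, including itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines of the program verbatim; feeding that output back in as a new program prints them again, a loop with no bottom.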

Hofstadter gets spiritual in his further thinking, and he finds at the core of each person a soul. He feels the "soul is an abstract pattern." His feeling that the soul is strong in mammals (and weaker in insects) brought him to vegetarianism.

He was long considered an AI researcher, but by then he thought of himself as a cognitive scientist.

Reconsidering GEB, he decided that another mistake in that book's approach may have been not seeing that the human mind and smarter machines are fundamentally different. He has less of an interest in computers now and claims that he always thought his writing would "resonate with people who love literature, art, and music" more than with tech people.

If it has taken me much longer to finish Gödel, Escher, Bach than it should have, that makes sense if we follow the strange loop of Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."



End Note: 
A self-referential situation is one in which the forecasts made by the human agents involved serve to create the world they are trying to forecast (http://epress.anu.edu.au/cs/mobile_devices/ch04s03.html). Social systems are self-referential systems based on meaningful communication (http://www.n4bz.org/gst/gst12.htm).

OpenAI and Broadcom and Ten Gigawatts

This week, OpenAI made news with its new browser, Atlas. With all their plans and a new cloud-based AI browser, they need to scale their computing power. Here is a news summary (AI-generated, of course):

OpenAI has announced a strategic multiyear partnership with semiconductor giant Broadcom to co-develop custom-built chips and infrastructure. The collaboration aims to deploy 10 gigawatts of specialized AI accelerators by the end of 2029—a staggering amount of compute capacity equivalent to the power consumption of approximately 8 million U.S. households. (Markets Insider)
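The households comparison checks out with back-of-the-envelope arithmetic. Assuming roughly 10,500 kWh of electricity per U.S. household per year (a commonly cited ballpark figure; the assumption is mine, not the article's), a quick sketch:

```python
# Sanity check: how many average U.S. households does 10 GW equal?
# HOUSEHOLD_KWH_PER_YEAR is an assumed figure, not from the article.
AI_CAPACITY_W = 10e9              # 10 gigawatts of planned accelerators
HOUSEHOLD_KWH_PER_YEAR = 10_500   # assumed annual household consumption
HOURS_PER_YEAR = 8_766            # 365.25 days * 24 hours

# Average continuous draw of one household, in watts (about 1.2 kW)
household_avg_w = HOUSEHOLD_KWH_PER_YEAR * 1_000 / HOURS_PER_YEAR

households = AI_CAPACITY_W / household_avg_w
print(f"~{households / 1e6:.1f} million households")
```

That works out to a bit over 8 million households, consistent with the figure quoted in the summary.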

This deal marks OpenAI’s first venture into designing its own in-house processors, with Broadcom tasked with developing and deploying the systems. The chips—known as AI accelerators—are optimized for parallel processing, enabling them to execute billions of operations simultaneously. These accelerators will be deployed across OpenAI’s facilities and partner data centers, using Broadcom’s Ethernet-based networking solutions to ensure scalability and efficiency. (Broadcom Inc)

OpenAI CEO Sam Altman emphasized the significance of the partnership: “Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity.” The Broadcom agreement is the latest in a series of megadeals OpenAI has struck in 2025 to secure the compute power needed for its rapidly expanding AI services. Earlier this year, OpenAI signed a $100 billion deal with Nvidia to deploy 10 gigawatts of Nvidia systems, beginning in the second half of 2026. (Reuters) In another major move, OpenAI partnered with AMD to deploy 6 gigawatts of chips and received a warrant for up to 160 million AMD shares, potentially making it one of AMD’s largest shareholders. (The Motley Fool)

These deals reflect a broader industry trend: major tech players are increasingly investing in custom silicon to reduce reliance on Nvidia’s dominant GPU offerings. Companies like Google, Amazon, and Microsoft have already begun developing their own AI chips, and OpenAI’s latest move places it firmly within this competitive landscape. While OpenAI has not disclosed how it plans to finance the Broadcom deal, analysts estimate that a single gigawatt-scale data center could cost between $50 billion and $60 billion. 

This latest partnership not only strengthens OpenAI’s technical capabilities but also signals a shift toward greater control over its hardware stack—an essential step as the race to develop next-generation AI systems accelerates.