Moving Closer to Superintelligence
It is difficult to keep up with AI advances and new tools. Recently, I have seen the term "superintelligence" being used, and I had to look up a definition.
In AI terms, there are three kinds of intelligence. "Artificial Narrow Intelligence" is what we have now: "superhuman" at specific tasks like playing Go or translating languages. ChatGPT, Gemini, Copilot, Meta AI and the rest fit in there at the moment.
"Artificial General Intelligence (AGI)" is human-level across the board and can learn anything a person can learn. We’re not quite there yet as of May 2026.
"Artificial Superintelligence (ASI)" is far beyond human level. Philosopher Nick Bostrom popularized the term: and defined it as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
ASI is what people worry about — or get excited about — when talking about advanced AI. But AGI isn't quite the same as superintelligence. With AGI, you clone the best human brain in software; with superintelligence, that clone keeps upgrading itself until it's far beyond us.
Two new tools are moving closer to the next level.
Google has released TurboQuant, a new compression method that makes AI models cheaper to run and faster to respond. In Google’s reported tests, it shrank the key-value cache (the model’s short-term working memory while it responds) by at least 6x and improved performance by up to 8x on H100 chips (Nvidia’s high-end AI processors used in data centres), while keeping benchmark performance, meaning scores on standard tests, close to the original model’s. That is a serious technical result with a clear business consequence: one of the biggest cost pressures in modern AI may begin to ease. For the past two years, the default logic has been simple: the best AI stayed in the cloud because that is where companies could absorb the cost of running it. TurboQuant starts to weaken that logic.
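For readers who want to see the shape of the idea, here is a minimal sketch of key-value cache quantization. It is illustrative only and not Google's actual TurboQuant method; the int8 format, the cache sizes, and the function names are my own assumptions.

```python
# Illustrative sketch only -- not Google's TurboQuant. It shows the general
# idea behind KV-cache compression: store the cache in fewer bits plus a
# scale factor, and convert back to floats on the fly during generation.
import numpy as np

def quantize_kv(cache: np.ndarray):
    """Symmetric per-tensor int8 quantization of a float32 KV cache."""
    scale = np.abs(cache).max() / 127.0
    return np.round(cache / scale).astype(np.int8), scale

def dequantize_kv(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 cache from the int8 version."""
    return q.astype(np.float32) * scale

# A toy cache: 32 layers x 1,024 tokens x 128-dim keys, in float32.
kv = np.random.randn(32, 1024, 128).astype(np.float32)
q, scale = quantize_kv(kv)

compression = kv.nbytes / q.nbytes                   # 4x for this naive scheme
error = np.abs(kv - dequantize_kv(q, scale)).mean()  # precision given up
print(f"compression: {compression:.0f}x, mean absolute error: {error:.4f}")
```

This naive float32-to-int8 version only saves 4x; the at-least-6x figure Google reports implies smarter scaling and lower bit widths, but the basic lever is the same: trade a little precision for a lot of memory.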
Meta TRIBE v2 is a foundation AI model that acts like a “digital twin” of the human brain. In plain terms, it’s an AI trained on real brain-scan data so it can predict how a person’s brain will respond to things they see, hear, or read. It takes in video, audio, and text, then maps that input to about 70,000 areas of the brain to simulate neural activity. Meta describes it as teaching an AI to “think” more the way humans do, by learning directly from brain responses instead of just internet text.
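As a rough illustration of the “digital twin” idea (and emphatically not Meta’s actual TRIBE architecture), here is a toy sketch: embeddings of what a person sees, hears, and reads are fused and mapped to one predicted activity value per brain area. The embedding sizes, the single linear readout, and the reduced number of areas are all my own simplifications.

```python
# Toy illustration only -- not Meta's TRIBE model. The idea: combine
# representations of video, audio, and text, then predict one activity
# value per brain area. A real model would learn this mapping from scans.
import numpy as np

EMB_DIM = 256      # hypothetical embedding size per modality
N_AREAS = 1_000    # kept small here; Meta's description mentions ~70,000 areas

rng = np.random.default_rng(0)

# Stand-ins for real encoders: pretend these are embeddings of a video clip,
# its audio track, and an accompanying caption.
video_emb = rng.standard_normal(EMB_DIM)
audio_emb = rng.standard_normal(EMB_DIM)
text_emb = rng.standard_normal(EMB_DIM)

# Fuse the modalities and apply a linear readout to per-area activity.
fused = np.concatenate([video_emb, audio_emb, text_emb])
readout = rng.standard_normal((N_AREAS, fused.size)) * 0.01  # would be learned

predicted_activity = readout @ fused
print(predicted_activity.shape)  # (1000,) -> one predicted value per brain area
```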
Where did I get information about Meta's products and path? From their own Muse Spark. That is Meta’s latest (well, as of today) AI assistant model.
I have referenced 