
AGI is here, or is it just marketing?

Author

Koen Van Belle

Date

23/01/2025


Performs an Action Generally Better Than Humans

Artificial General Intelligence (AGI) has long been a tantalizing goal for researchers and companies alike. Recently, the narrative has shifted: with the focus now on redefining AGI to fit existing systems, the scope of what AGI means seems to have shrunk significantly. Some see this narrowing as progress; others see it as little more than a sales trick.

Take Sam Altman’s comments, for example. He is lowering expectations, suggesting AGI will matter much less than originally anticipated. This sounds suspiciously like a preemptive justification: "we won’t deliver the revolution you expect, but we’ll deliver something."

Then there’s Microsoft’s classification of AGI as a system that generates $100 billion in profits. Does that mean any money-making machine qualifies as AGI? OpenAI’s definition is slightly different, framing AGI as "highly autonomous systems that outperform humans at most economically valuable work." Let’s be honest here: highly autonomous does not mean intelligent. Something can be autonomous without having a shred of intelligence. Think of a Tesla super factory: fully autonomous, yet every machine in it is a dumb automaton. But what do these definitions truly achieve beyond framing success in terms of profit?

Benchmarking

A cornerstone of credibility in AI development lies in benchmarking, but OpenAI has been accused of inflating its claims. Whether it’s overstating model capabilities or selectively choosing metrics, such practices raise questions about the trustworthiness of their benchmarks. Even the current benchmark seems rather dubious.

A task that would take most humans seconds seems to take OpenAI's latest model 1.3 minutes, and that only gets it to 75 %. Squeezing out another 12 % takes 13 minutes. Even by OpenAI's own standard, this isn't close to AGI.

We Know How to Build It

Bold claims about knowing how to build AGI have been met with skepticism from experts, and these are research professionals in the field of AI. The prevailing view is that true AGI requires more than just sophisticated statistical models; it demands a deeper understanding of intelligence itself.

But any IT professional will understand just how flawed the "we know how to build it" claim is. There is a massive difference between knowing how to build something and actually building it.

OpenAI as a For-Profit Company

The motivations behind AGI hype become clearer when viewed through the lens of profit. At a subscription cost of $200 per month, AGI is becoming a marketing tool as much as a technological pursuit. Is this emphasis on profitability shaping the narrative around AGI’s potential?

Agents vs AGI

While AGI captures the public imagination, agents are quietly set to transform businesses. These systems are designed to perform specific tasks autonomously, often in ways that are highly specialized and commercially valuable. From customer service bots to supply chain optimizers, agents thrive in well-defined contexts, offering immediate returns on investment.

The success of agents, however, hinges on user input. Crafting effective prompts and tailoring interactions can unlock their full potential, making human creativity and understanding indispensable. Companies leveraging agents will find that their productivity and profitability improve when they empower users to collaborate effectively with these systems. That means a successful implementation of agents will take several months, as users learn how to prompt their new co-worker. And I'm sure some company jokers will have a field day prompting the most insane garbage.
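To make that concrete, here is a minimal sketch of the difference a well-specified prompt makes, using Python and the OpenAI SDK purely as an example. The model name, the webshop scenario and both prompts are my own illustrative assumptions, not part of any particular product.

```python
# Minimal sketch: the same "agent" driven by a vague prompt versus a well-specified one.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set in the environment,
# and "gpt-4o-mini" is a placeholder for whichever chat model you actually use.
from openai import OpenAI

client = OpenAI()

VAGUE_PROMPT = "Help the customer."

SPECIFIC_PROMPT = (
    "You are a customer-service agent for a webshop. "
    "Only answer questions about order status, returns and invoices. "
    "Ask for the order number if it is missing, keep answers under 100 words, "
    "and hand off to a human for any refund above 500 euros."
)

def ask_agent(system_prompt: str, customer_message: str) -> str:
    """Send one customer message to the model under the given system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content or ""

if __name__ == "__main__":
    question = "Where is my package?"
    print(ask_agent(VAGUE_PROMPT, question))     # likely a generic, unhelpful reply
    print(ask_agent(SPECIFIC_PROMPT, question))  # likely asks for the order number first
```

The point is not the library: the specific prompt encodes business context the agent cannot guess on its own, and writing that context down is exactly the work those months of learning will go into.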

Superintelligence

For most people, the term AGI evokes the idea of superintelligence: a system capable of surpassing human intellect across all domains. For me, the introduction of the term superintelligence is nothing more than a justification for reducing the claimed capabilities of AGI. OpenAI is playing with words in order to upsell another version of their LLM.

As some experts have said, AGI will require a deeper understanding of intelligence; it cannot simply be an upscaled version of a statistical model. For now, we're not there yet.