Is the AI Singularity Concept a Mathematical Mirage?
The Singularity Might Be a Mathematical Anomaly
There is an assumption baked into almost every serious AI conversation right now: that progress accelerates, intelligence compounds, and at some point machines will surpass us. That moment gets labelled AGI, or the singularity, or sometimes the intelligence explosion.
The curve goes vertical and then history splits in two.
It’s a compelling story. But it might also be wrong.
In mathematics there is something called an asymptote. A curve can keep climbing toward a line forever without ever reaching it. The closer it gets, the harder each incremental move becomes. From a distance it looks like convergence, but up close it’s diminishing returns.
Nature is full of these patterns where systems grow rapidly when conditions are favourable, then slow as constraints tighten. Energy dissipates and friction increases. Limits naturally assert themselves because nature loves asymptotes.
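To make the metaphor concrete, here is a minimal sketch in Python. The formula, the capability function, and the effort values are all invented for illustration, not taken from any real benchmark; the only point is the shape of the curve.

```python
# Illustrative only: a toy curve, capability = 1 - 1/effort, that keeps rising
# but never reaches its asymptote at 1.0. The numbers are invented; the point
# is that equal multiplicative steps in effort buy shrinking gains.

def capability(effort: float) -> float:
    """Toy capability curve that approaches, but never reaches, 1.0."""
    return 1.0 - 1.0 / effort

previous = capability(1.0)
for effort in [2.0, 4.0, 8.0, 16.0, 32.0]:  # double the effort at each step
    current = capability(effort)
    print(f"effort={effort:>4.0f}  capability={current:.3f}  gain={current - previous:.3f}")
    previous = current
```

Run it and the gain halves at every step: 0.500, then 0.250, then 0.125, even though the curve never stops climbing. That is what an asymptote feels like from the inside.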
Despite all the flashy hype, AI progress may be behaving exactly like that.
The early breakthroughs felt explosive because they were.
Pattern recognition…language modelling…strategy inside closed systems…code generation. These were tractable problems once scale and data aligned, and they created the impression that intelligence itself was being “solved.”
But look closely at the kinds of domains that fell first.
When IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, it was treated as a symbolic moment of “man vs machine,” and the press ate it up. Nearly two decades later, when AlphaGo defeated Lee Sedol in 2016 in the ancient Chinese strategy game Go, the narrative repeated. Another boundary crossed, more proof that the capability curve was bending dramatically upward.
But what happened next is telling. AI did not unlock infinite layers of chess or Go. It approached optimal play within a bounded system. Once near that limit, improvement became marginal rather than revolutionary. The game did not expand. The rules constrained the ceiling, because chess and Go are closed systems with fixed rules and clear objectives. You can simulate them and you can score them, but ultimately you are optimising against a defined end state.
Once computing and training methods crossed a threshold, those domains yielded quickly. That tells us something precise. AI excels in environments where the rules are stable and feedback is clean.
It does not automatically tell us that open-ended, embodied, socially embedded intelligence will yield to the same logic.
When large language models moved from research labs into mainstream workflows in late 2022, the shift felt sudden, because it was sudden. But the underlying structure was similar: vast corpora, predictable training objectives, and measurable targets. The pattern is not mystical. The first layers to fall are always the ones most compatible with scale, and that is not proof of an infinite curve. It may simply be proof that we solved the solvable layers first.
When you solve the solvable layers first, what remains is friction.
If intelligence behaves like an asymptote rather than a switch, progress does not collapse. It continues. But each incremental gain costs more money, more energy, more data, and more engineering effort while delivering less perceived novelty. The easy gains have largely been spent.
The last decade of AI progress benefited from a rare alignment: enormous quantities of human-generated data, rapidly scaling parallel compute, loss functions that matched the problem space, and clear benchmarks that signalled measurable progress. Those conditions do not repeat indefinitely.
What remains increasingly involves sparse real-world feedback and decisions where correctness is socially or contextually defined. Scaling still helps, but each additional increment delivers less than the one before it.
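One way to picture that shape, without claiming anything about any particular model, is the kind of power-law relationship between compute and loss described in scaling-law research. The constants in the sketch below are placeholders chosen purely for illustration; the only point is that each tenfold increase in compute buys a smaller absolute improvement than the one before.

```python
# Illustrative only: assume loss follows a power law in compute, loss = A * C**(-B).
# A and B are placeholder values, not measurements from any real system.

A, B = 10.0, 0.05

def loss(compute: float) -> float:
    """Toy power-law loss curve; lower is better."""
    return A * compute ** (-B)

previous = loss(1e18)
for compute in (1e19, 1e20, 1e21, 1e22):  # ten times more compute at each step
    current = loss(compute)
    print(f"compute={compute:.0e}  loss={current:.3f}  improvement={previous - current:.3f}")
    previous = current
```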
Data is no longer free or clean. As models ingest more AI-generated content and statistically flattened material, signal quality becomes harder to maintain, and truly fresh data requires more human oversight and controlled environments. Compute continues to improve, but its trajectory is changing. Energy costs rise, capital concentrates, and progress shifts from brute-force scaling to architectural refinement.
At the same time, alignment is no longer optional. As systems move into healthcare, law, infrastructure, and finance, they must be auditable and governable, and each layer of safety introduces deliberate friction.
None of this signals collapse. It signals maturity.
Intelligence is not a single variable you dial upward.
Human cognition is layered, embodied, emotional, and shaped by narrative and constraint. There is no clean benchmark that captures that complexity. And yet we have repeatedly shown that extraordinary competence can be built without producing awareness. Systems can exceed human performance in constrained domains without ever understanding themselves.
GPT-4 scoring near the top tier of certain professional exams is impressive, but it’s not evidence of consciousness. Passing an exam proves pattern mastery within constraints. It does not prove selfhood.
From the outside that distinction barely matters, but strategically, it matters a ton.
Most boardrooms aren’t arguing about whether models are conscious. They’re arguing about compliance, audit trails, liability, and data residency. We’re often casually debating the philosophy of mind while procurement is asking whether the vendor has SOC 2 certification. And that contrast exposes where the real friction sits.
The spectacle fades…and the integration problem remains.
It is easy to assume that if systems keep improving, then general intelligence is simply a matter of scale. But scale does not automatically produce grounding, stable agency, long-range reasoning, or coherence across contexts. Those challenges don’t necessarily yield to more computing power.
If the singularity requires subjective intelligence (a system that experiences and integrates itself as a self), then we may be chasing a theoretical limit rather than a practical milestone. And that possibility reframes the relationship between humans and AI.
Successful organisations aren’t waiting for sentient machines. They’re looking for leverage, and for systems that scale judgement without scaling headcount.
Understanding the asymptote doesn’t remove the impact of AI progress, but it might remove the mythology.
Progress can slow while influence still grows.
A slowdown does not mean stagnation. It just looks like fewer jaw-dropping demos and more focus on reliability. More narrow, high-value deployments. Slower jumps between model generations. More emphasis on tooling, orchestration, and the systems around models. In other words, maturity.
If true AGI is mathematically unreachable, then the future is not a handover, it’s an integration problem.
Human judgement remains the constraint layer and AI becomes the acceleration layer. It will always be a balance between direction and scale.
The strategic question isn’t whether machines will replace us. It’s whether organisations can build the capability to harness systems that are increasingly powerful, increasingly autonomous, and increasingly embedded in core operations.
The companies that treat AI as a novelty will be outpaced by the ones that treat it as infrastructure. And that requires discipline, governance, and an understanding of where AI should amplify human decision making and where it should be constrained.
The singularity makes for good headlines, but the asymptote makes for better strategy.
And strategy, not spectacle, is what determines who compounds true advantage in the years ahead.
