Is the AI Singularity Concept a Mathematical Mirage?
The Singularity Might Be an Asymptote, Not an Event
There’s an assumption baked into almost every serious AI conversation right now: as progress accelerates, at some point the machines will surpass us. That moment gets labelled AGI, or the singularity, or sometimes the intelligence explosion. The curve goes vertical and history splits in two. It’s a compelling story, but I think it might be wrong.
In mathematics there’s something called an asymptote. A curve can approach a line forever without ever reaching it. The closer it gets, the harder each incremental move becomes. From a distance it looks like convergence, but up close it’s diminishing returns.
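To make the shape concrete, here’s a toy model, not a claim about any real benchmark: if capability follows f(x) = L − c/x, it climbs toward the ceiling L forever without touching it, and every doubling of effort buys a smaller slice of what remains. All the constants below are invented for illustration.

```python
# Toy asymptote: capability(effort) = LIMIT - C / effort.
# LIMIT and C are invented constants; only the shape of the curve matters.

LIMIT = 100.0  # hypothetical ceiling the curve approaches but never reaches
C = 50.0       # hypothetical scale constant

def capability(effort: float) -> float:
    """Rises toward LIMIT forever; the gap C / effort never hits zero."""
    return LIMIT - C / effort

prev = capability(1)
for effort in (2, 4, 8, 16, 32, 64):
    cur = capability(effort)
    print(f"effort {effort:>2}: capability {cur:6.2f}  gain +{cur - prev:5.2f}")
    prev = cur
```

Each doubling of effort halves the remaining gap, so early progress looks explosive and later progress looks stalled, even though the curve never actually stops rising.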
Nature is full of these patterns, where systems grow fast when conditions are favourable, but slow down as friction builds. Bacterial colonies, epidemics, and technology adoption all start out looking exponential before flattening into an S-curve. Limits always assert themselves, because nature loves asymptotes.
Despite all the hype, AI progress might be behaving exactly like that.
The early breakthroughs felt explosive because they were. Pattern recognition, language modelling, code generation. These were tractable problems once everything aligned, and they created the impression that intelligence itself was being “solved.”
But look closely at the kinds of domains that fell first.
When IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, it was treated as a symbolic moment, man versus machine, and the press ate it up. Nearly two decades later, when DeepMind’s AlphaGo beat Lee Sedol at the ancient Chinese strategy game Go in 2016, the story repeated. Another boundary crossed. But AI didn’t unlock infinite layers of either game. It approached optimal play within a bounded system, and then improvement became marginal. The game didn’t change or expand. The rules set the ceiling, because chess and Go are closed systems with fixed objectives and defined end states. Once you’re near optimal within those constraints, there isn’t much ceiling left.
Once compute and training methods crossed a threshold, those domains yielded quickly. That tells us something specific. AI excels where rules are stable and feedback is clean. It doesn’t automatically follow that open-ended, socially embedded, real-world intelligence will fall the same way.
When large language models moved from research labs into mainstream use in late 2022 the shift was again sudden. But the underlying structure was similar to everything before it. Vast training data and easily measurable outcomes. The pattern isn’t mysterious. The first layers to fall are always the ones most compatible with scale. That’s not proof of an infinite curve. It might just mean we usually solve the solvable layers first.
And when you solve the solvable layers first, what remains is friction.
If intelligence behaves like an asymptote rather than a switch, progress doesn’t stop; it just gets harder and more expensive with each step forward, because the easy gains have largely been spent.
The last decade worked so well because everything aligned at once. Enormous amounts of human generated data, compute that scaled fast and cheap, plus problems that were genuinely well suited to the approach. That combination doesn’t just repeat on demand.
What’s left is messier. The remaining problems involve limited feedback, vague goals, and situations where there’s no clean right answer because correctness depends on context. Throwing more compute at that helps, but less each time.
The data problem is real too. As models increasingly train on AI generated content and recycled material, the signal gets thinner. Fresh, high quality data now requires actual human judgment and controlled environments to produce. Meanwhile compute keeps improving but the gains are becoming architectural rather than brute force, which is slower and harder to predict. And as AI moves into healthcare, law, finance, and infrastructure, every layer of safety and governance adds real friction to the process.
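This is roughly what the empirical scaling-law literature describes: Kaplan et al. (2020) found that language-model loss falls as a power law in compute, with a small exponent. The sketch below uses invented constants purely to show the shape, not to reproduce any published fit.

```python
# Power-law scaling sketch: loss(compute) = A * compute ** -ALPHA.
# A and ALPHA are invented for illustration, not fitted to any real model.

A = 10.0      # hypothetical scale constant
ALPHA = 0.05  # hypothetical exponent; real fitted exponents are similarly small

def loss(compute: float) -> float:
    """Loss falls smoothly with compute, but ever more slowly."""
    return A * compute ** -ALPHA

prev = loss(1e18)
for exp in range(19, 25):  # ten times more compute at each step
    cur = loss(10.0 ** exp)
    print(f"compute 1e{exp}: loss {cur:.3f}  improvement {prev - cur:.3f}")
    prev = cur
```

Each tenfold increase in compute buys a little less improvement than the last one did, which is exactly why “just scale it” gets more expensive per unit of progress over time.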
None of this means the wheels are coming off though. It just means the technology is maturing.
Here’s the other thing worth considering. Intelligence isn’t a single dial you turn up. Human cognition is layered and emotional and shaped by experience in ways that have no clean benchmark. And yet we’ve consistently shown that extraordinary competence can be built without any of that. Systems can outperform humans in specific domains without understanding anything about what they’re doing.
GPT-4 scoring near the top of certain professional exams is genuinely impressive. But it’s not evidence of consciousness. Passing an exam proves the system has mastered patterns within a constrained environment. Full stop.
From the outside that distinction feels philosophical, but strategically it’s very practical.
Most boardrooms aren’t debating machine consciousness. They’re debating compliance, audit trails, liability, and data residency. The philosophy of mind is an interesting dinner conversation…but Procurement just wants to know if the vendor has SOC 2 certification.
That gap is where the real friction lives.
The spectacle fades…and the integration problem remains.
It is easy to assume that if systems keep improving, then general intelligence is simply a matter of scale. But scale does not automatically produce coherence across contexts.
If the singularity requires subjective intelligence (a system that experiences and integrates itself as a self), then we may be chasing a theoretical limit rather than a practical milestone. And that possibility fundamentally reframes the relationship between humans and AI.
Successful organisations aren’t waiting for sentient machines. They’re looking for leverage, and for systems that scale judgement without scaling headcount.
Understanding the asymptote doesn’t remove the impact of AI progress, but it might remove the mythology.
Progress can slow while influence still grows.
A slowdown does not mean stagnation. It just looks like fewer jaw-dropping demos and more focus on reliability. More narrow, high-value deployments. Slower jumps between model generations. More emphasis on tooling, orchestration, and the systems around models. In other words, maturity.
If true AGI is mathematically unreachable, then the future isn’t a handover, it’s an integration challenge.
Human judgement remains the constraint layer and AI becomes the acceleration layer. It will always be a balance between direction and scale.
The strategic question isn’t whether machines will replace us. It’s whether organisations can build the capability to harness systems that are increasingly powerful and increasingly autonomous.
The companies that treat AI as a novelty will be outdone by the ones that treat it as infrastructure. Doing that well requires discipline, and an understanding of where AI should amplify human decision making and where it should be limited. It also requires keeping pace with constantly evolving governance requirements.
The singularity makes for good headlines, but the asymptote makes for better strategy.
And strategy, not spectacle, is what determines who will compound true advantage in the years ahead.