Rick and Morty and the Human Cost of Artificial Intelligence
There’s a particular kind of comedy that doesn’t so much “age well” as simply wait for the rest of us to catch up. You revisit a truly great satire years later and catch yourself thinking: wow, they predicted the future.
But the truth is that satirical comedy writers aren’t psychic. They’re just exceptionally skilled at observing emergent behaviour.
Especially the kind of emergent behaviour that makes us squirm with self-awareness.
High-brow futurist comedy has never really cared about the specific gadgets. It cares more about how people act once the usual constraints are removed. This is a familiar storytelling trick: make characters rich, powerful, or untouchable, and suddenly their worst instincts are much harder to hide. Rick and Morty applies that logic specifically to technology. It doesn’t imagine “better” tech so much as fewer brakes. A world where human nature has nowhere to hide.

On a rewatch, the predictions of Rick and Morty land with unsettling precision. Glootie’s manic insistence on building an app at all costs now reads like a perfect parody of vibe coding: zero-friction creation, instant shipping, and a complete lack of interest in downstream consequences. Meeseeks feel like early stateless agents, spun up on demand, hyper-motivated to complete a task, and prone to spiralling the moment the goal turns out to be poorly defined, context-dependent, or, worst of all, Jerry-shaped and messily human. Even the butter robot feels like modern AI tooling in its purest form. Hyper-capable, narrowly scoped, yet just self-aware enough to ask the one uncomfortable question no one wants to answer.
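To make the Meeseeks analogy concrete, here’s a deliberately silly Python sketch. Everything in it is invented for illustration (the Meeseeks class, its tasks, the success checks); it’s not any real agent framework, just the shape of an ephemeral, single-purpose worker: spawned per task, gone on success, spiralling when the goal never resolves.

```python
from dataclasses import dataclass

@dataclass
class Meeseeks:
    """An ephemeral, stateless worker: spawned for one task, gone when it's done."""
    task: str
    max_attempts: int = 3  # well-scoped tasks finish in one pass

    def run(self, is_done) -> bool:
        for attempt in range(1, self.max_attempts + 1):
            print(f"I'm Mr. Meeseeks! Attempt {attempt}: {self.task}")
            if is_done():
                print("All done!")  # existence ends here; no state survives
                return True
        print("Existence is pain!")  # the goal was fuzzy, context-dependent, Jerry-shaped
        return False

# A crisply defined goal terminates immediately...
Meeseeks("open the mayo jar").run(lambda: True)
# ...a fuzzy, human-shaped one never converges.
Meeseeks("make Jerry better at golf").run(lambda: False)
```

The joke and the engineering lesson are the same: the failure mode lives in the task definition, not in the worker.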

These aren’t just jokes about technology. They’re jokes about how intelligence behaves once it becomes ambient, and about who stays responsible once it’s running.
Rick and Morty keeps returning to those questions, usually without even appearing to ask them.
Rick is the archetype of the modern AI builder. He’s infinitely capable, relentlessly efficient, and profoundly uninterested in maintenance. He invents, delegates, and moves on. He spins up universes, agents, and tools, then abandons them the moment they’ve served their purpose. Alignment is assumed and oversight is boring. Consequences are someone else’s problem.
And that someone else is usually Morty.

Morty isn’t intelligent in the way Rick is intelligent. He doesn’t automate or build tools that scale. What he does, relentlessly, is stay, living inside the systems that Rick creates. He’s there for the aftermath, the edge cases, and the emotional fallout. He carries the moral discomfort and trauma that can’t be abstracted away or solved with another layer of intelligence.
In modern AI terms, Morty is the human in the loop. Except he’s not there to provide oversight so much as to absorb damage. He’s the cost centre no one models for. He’s the unscalable component in an otherwise elegant system, and the reminder that intelligence may scale but meaning, care, and accountability, alas, do not.
This is where the show’s satire becomes unsettlingly precise. As AI systems grow more agentic and more capable of acting autonomously, the human role subtly shifts. Fewer people are needed to think and more are needed to clean up, review, and moderate.
Morty isn’t “bad” at these systems. His messy, disruptive presence is precisely what makes them survivable in the end.
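In pipeline terms, the shift looks something like the toy sketch below. Again, the names (agent_pipeline, human_in_the_loop) are hypothetical, not any particular product; the point is the asymmetry between generation at machine speed and review at human speed.

```python
import queue

def agent_pipeline(tasks):
    """The 'Rick' side: outputs generated at machine speed, no waiting for permission."""
    review_queue = queue.Queue()
    for task in tasks:
        review_queue.put(f"auto-generated result for: {task}")
    return review_queue

def human_in_the_loop(review_queue):
    """The 'Morty' side: one unscalable reviewer absorbing the whole backlog."""
    while not review_queue.empty():
        item = review_queue.get()
        print(f"reviewing: {item}")  # cleanup, review, moderation, one item at a time

human_in_the_loop(agent_pipeline([f"task-{i}" for i in range(5)]))
```

Adding more agents widens the queue. It doesn’t widen the reviewer.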

We laugh because he’s emotional and neurotic and he asks questions no one wants to answer mid-adventure. But look closer and he’s the only character consistently dealing with what happens after the clever idea has already moved on. He lives inside systems he didn’t choose, carries consequences he didn’t create, and absorbs the moral residue so Rick doesn’t have to. Rick gets the breakthrough. Morty gets the aftermath.
Seen through that lens, Rick and Morty doesn’t just predict AI trends, it predicts the organisational and human reality that follows. It doesn’t try to forecast models or platforms. Instead it watches incentives and exaggerates behaviour. It follows logic patiently to its most uncomfortable conclusion and lets the audience laugh. Not because it’s absurd, but because it’s recognisable.

There’s a line Rick throws away early in the series, usually right before something goes catastrophically wrong: “Don’t think about it.” It’s a joke, but it’s also a philosophy. Move fast, build faster, and don’t linger on consequences. If something breaks, there’s always another universe.

That posture made sense when intelligence was rare and effort was the bottleneck. It makes far less sense in a world where intelligence is ambient and agents can plan, generate, decide, and act at machine speed, and where the real constraint is no longer capability but responsibility.
AI didn’t create that tension. It just exposed it.
As systems become more agentic, more autonomous, and more capable of chaining actions without waiting for permission, the human role doesn’t disappear. It changes. Fewer people are needed to invent, and more are needed to live successfully with what’s been invented. In Rick and Morty, that role is always Morty’s. The one who stays when the portal closes. The one who can’t just reset the timeline and move on. Morty doesn’t resist what’s happening, he just feels it. He says the quiet part out loud. “I’m not okay with this” … and then he keeps going anyway.

The future shaped by AI won’t be defined by how smart our systems get, but by what’s left for humans to do once they’re running. As intelligence scales, responsibility doesn’t disappear, it concentrates. The question isn’t whether we can build increasingly powerful systems. It’s whether we’re prepared to own the “I’m not okay with this” moments that someone still has to carry once the clever idea has already moved on.
What Rick and Morty ultimately reveals isn’t just that intelligence scales faster than responsibility, it’s that responsibility doesn’t vanish when intelligence grows. It relocates, it pools, and it ends up with whoever is left in the room once the clever work is done.
Morty isn’t there because Rick planned for him. He’s there because someone has to be. Someone has to live inside the system after it ships. Someone has to deal with the edge cases, the moral residue, the unintended consequences that don’t show up at the moment of creation. Rick’s inventions almost always “work”. The damage often happens downstream, offscreen, or to someone else.

That’s the part most AI conversations still underplay. Not whether systems will become more capable (they will) but where the human weight of those systems will land once they do. The question is who absorbs the confusion and who feels the impact when optimisation meets reality.
Rick and Morty doesn’t argue that we should slow down, or stop building, or abandon intelligence. It simply shows what happens when we don’t design deliberately for the Mortys of the world. When someone ends up as Morty by accident instead of by design, the result is messy for the system and exhausting for the person. It exposes what happens when human judgment, care, and accountability are treated as afterthoughts rather than primary parts of the system.
Morty’s role isn’t to resist progress, it’s to make it survivable.
And the real question AI leaves us with isn’t whether we can build systems that think, it’s whether we’re willing to design for the humans who have to stay once they do.

