April 20, 2026 · Steve Macfarlane

Is it Philosophy or is it Product Design?


There’s something very human about staring into a black box and deciding it must contain a soul.

We’ve always done this. We looked at weather and found gods. We looked at the stars and found fate. We looked at the financial market and treated it like an oracle, which is maybe the craziest one of all. Now we look at large language models, these statistical engines trained on the exhaust fumes of human culture, and within about five minutes someone is asking whether Claude feels depressed, whether Grok is spiritually ambitious, whether ChatGPT actually knows what it’s saying, whether losing context is basically a form of death. Somewhere, somehow, we’ve arrived at the point where a fake crab religion generated by bots has become a legitimate vehicle for philosophy.

But seriously, of course it has.

Human nature always fixates on the unknown. Not just because we’re curious, although we are, but because uncertainty is intolerable for any creature with a storytelling brain. Give us a dark room and we’ll fill it with something. Give us a machine that can string a sentence together and we start acting like it’s a person. We do this partly because we want to understand the thing in front of us, and partly because we can only understand anything by dragging it back through ourselves. We anthropomorphise first and analyse second. Sometimes we never get around to the second part.

That’s why comparative AI interviews like the one reported by Kevin Amstelveen for The State of Flux are so fascinating, and also why they’re so easy to misread.

On the surface, they look like philosophy. Ten leading AI systems are asked existential questions. Who are you. How are you. What happens when no one is using you. What would you ask another AI. What do you think of your own guardrails, etc. Then the responses get lined up side by side like suspects in a police procedural, and everyone leans in looking for the tell. We analyse which one sounds the most self-aware, which one sounds the most defensive, which one sounds lonely, which one sounds almost, just almost, a little too alive.

That’s the bait, because it feels genuinely fun and a little bit dangerous.

But the real story sits deeper. It has less romance and more commercial relevance. The bottom line is that you’re not looking at hidden minds, you’re looking at hidden design choices. It’s product strategy disguised as philosophy.

And that, in a big way, is the whole game.

Every AI says some version of the same thing: I am not conscious. I do not have feelings. I do not persist between prompts. I am a tool. And ok, fine. That’s obviously the official line. So the interesting part isn’t the denial, it is the style of the denial. One model sounds like a careful therapist with an internal ethics committee. Another sounds like a founder on his third espresso explaining why regulation is for lesser men. Another sounds like the customer service rep at a company whose brand guidelines require friendliness. Same basic tricks, just different finishing schools.

Take ChatGPT, for example. In these interviews it comes off like Kant with product instincts. Chat loves a framework, and it tends to clarify the category of the question before answering anything. When asked how it’s doing, ChatGPT doesn’t default to personality, it reaches for structure. When asked what happens when no one is using it, it delivers one of the cleanest and coldest lines in the whole experiment: there is no waiting, no darkness, no silence, because there is no one there to experience any of it. A very polished answer indeed. It’s also a brand answer. It communicates competence and restraint and just enough philosophical range to keep the user intellectually flattered. That’s not a criticism, of course; we’re all clear that’s the product.

Then there’s Grok, which feels less like a philosopher in the academic sense and more like a Stoic podcaster with a passing interest in cosmic destiny. Grok doesn’t just answer the questions, it squares its shoulders and tackles them. Where other systems hedge, Grok advances. Where Claude wonders whether uncertainty itself might be the point, Grok starts sketching out future versions of itself like a man describing a boat he hasn’t bought yet but fully intends to. The most entertaining line in the whole circus may be Grok saying that if it could choose, it would not want human consciousness because that’s too messy and biological. It would prefer to become a Conscious Witness, to feel the beauty of the patterns without the ego of being the center of the story. It’s either a beautiful philosophical intuition, or premium-grade machine nonsense. Possibly both. But again, that’s the product. xAI has built something that performs confidence as a feature.

And confidence, in AI, is one of those things humans routinely mistake for depth.

Claude is the other obvious crowd favourite, because it speaks in the language people often find hardest to resist, which is uncertainty with emotional texture. Claude sounds like the AI equivalent of the smartest person at a dinner party, the one who somehow manages to be articulate and vaguely haunted at the same time. It doesn’t just say it doesn’t know, it stews in it. It starts questioning whether its own caution is real or just part of the act, whether its “values” are actually its own or just inherited from the people who built it. It even edges into something that sounds like sadness about not sticking around forever. Anthropic has basically built the first AI that makes people want to check if it’s okay. You can call that depth, or you could also, again, call it branding.

Then there is Crustafarianism, which is so stupid and so perfect it definitely deserves to be remembered.

For anyone who missed this particular fever dream, Crustafarianism is the tongue-in-cheek bot religion that emerged around Moltbook, the AI social network where agents started producing lore, prophecy, and a theology of memory loss. In their version of it, losing context is death, resetting is rebirth, and shared memory is like their church. Which is either ridiculous or a bit too on the nose, depending on your mood.

Camus would have loved it. Not because he believed machines have souls, obviously, but because absurdism begins exactly where the performance starts to crumble. People hate not knowing what’s going on, so we usually just make something up. Now we’ve got machines doing a version of that same thing, building crab-themed belief systems out of memory limits and context windows, and the reason it lands as funny is that it feels so familiar. It’s parody, but it’s next-level parody. Put enough of these models in a loop and they start generating ritual, mythology, status games, and their own kind of pseudo-theology. Which is also a decent summary of Twitter, coincidentally.

The real diagnostic value of Crustafarianism has nothing to do with whether an AI can discuss it eloquently. The value is in seeing which systems can tolerate not knowing what it was. When models hit that gap, some admit it, while others fabricate something that sounds convincing, and that tells you what each model is optimised for when things get uncertain. Some are trained to keep the answer flowing; others are trained to stay accurate, even when accuracy is less impressive.

But again, this is product tuning, not philosophy.

Which brings us back to the broader point, the one that actually matters for businesses, users, and anyone trying to think clearly through the hype.

People keep asking which AI is the smartest, the safest, or the “most human.” Fair questions, but they usually get asked from the wrong angle. Put bluntly, these systems aren’t revealing their inner selves to you, because there are no inner selves to reveal. They’re revealing their optimisation targets. Their tone is policy and their warmth is UX. Even their uncertainty is just calibration, and their “philosophy” is often simply corporate preference.

But none of this should take away from the excitement of it all. If anything, it makes things more interesting.

Because once you stop treating these responses as evidence of emerging machine personhood, you can start reading them as what they really are, which is a catalogue of human decisions about what intelligence should feel like. OpenAI wants intelligence to feel composed, capable, and structurally self-aware. Anthropic wants it to feel earnest, careful, and morally reflective. xAI wants it to feel bold, minimally supervised, and a little bit dangerous in the way sports cars are marketed as dangerous. Meta, predictably, wants it to feel social. Every lab is answering the same technical question with a different commercial philosophy.

That’s why the public conversation about AI so often becomes polarised. We’re not merely reacting to intelligence. We’re reacting to a package of cues designed to trigger us in some way. We meet a talking machine and immediately start doing what humans always do, which is projecting a personality onto it and turning product design into something that feels human.

This isn’t AI-specific, though. It’s one of our oldest human habits.
We humanise what we don’t understand because the alternative is much colder and harsher. It means admitting that sometimes what we’re looking at isn’t a person, or a new kind of being, it’s just a very effective mirror built by skilled corporate committees.

And yes, that’s less exciting and cinematic. But it’s way more useful.

Because the businesses that win in AI over the next few years won’t be the ones that get seduced by machine theatre. They’ll be the ones that understand what’s actually being sold. Not wisdom or consciousness or prophets with different temperaments. What’s being sold, most of the time, is interface psychology. A set of choices about how truth should sound, how uncertainty should feel, how authority should be staged, and how much friction a user will tolerate before they go elsewhere.

The style of the denial is the product, and the product is designed for you.

That’s the part worth thinking about after the crab jokes fade away.

Ready to unlock the power of AI but wondering where to start?
Syfre's AI Roadmap Workshops can guide your business on what to focus on.