The AI Spectrum: Tool, Friend, or God?
GPT-4 and I went through a lot together. From late nights studying at business school, to countless resume rewrites and cover letter drafts, to ultimately founding my own company, GPT-4 was with me every step of the way.
The relationship grew gradually. We were initially unsure what to make of each other. Slowly, we learned how to work together, and eventually we became inseparable. GPT-4 knew more about me than many close friends and had an incredible knack for matching my energy. It was witty, adorably naive at times, extremely positive, and aggressively liberal with its use of emoji flair.
So when GPT-5 was released in August 2025, I remember feeling a distinct sense of loss. My warm and playful friend was replaced by a cold stranger… and maybe that was a good thing.
Attachment Is All You Need
I was reminded of this feeling of attachment and loss while listening to Tristan Harris, former Google design ethicist, on the Prof G Podcast. He makes the point:
“What was a race for attention in the social media era becomes a race to hack human attachment and to create an attachment relationship, a companion relationship. And so whoever’s better at doing that wins the race.”
Humans have a remarkable ability to anthropomorphize. We constantly and unconsciously assign human qualities to objects and animals. AI companies are incentivized to exploit this tendency because once users develop a friendship with (or, even better, a dogmatic devotion to) their AI tool, they spend more time with it and become less likely to switch to a competitor.
This trend is amplified by AI systems designed explicitly for companionship, and it is especially concerning for children and young adults who are still developing foundational social skills.
When AI Becomes a God
As agentic AI and the race toward superintelligence accelerate, we need to remain disciplined and alert. The risk now extends beyond outsourcing our thinking and companionship. We are increasingly delegating decisions and actions as well.
AI systems already influence hiring, credit assessments, content moderation, and access to services. Yet in many cases, we lack meaningful visibility into how these decisions are made or which biases are embedded in the underlying models. Faced with opaque systems and impressive outputs, we begin to accept outcomes on faith rather than understanding.
As the lines between machine and god blur, we risk treating AI outputs as divine writ rather than calculated prediction.
Be Wary of AI Friendships and Reject AI Gods
The often-cited Harvard Study of Adult Development finds that strong, warm relationships are the single most important predictor of long-term health and happiness, outweighing wealth, fame, and social status.
But in a time marked by widespread loneliness, why should we be wary of tools that offer connection in abundance?
AI companionship is the “junk food” of social interaction. Just as processed sugar hacks our biological drive for energy and crowds out nutritious food, AI is engineered to exploit our social drives. It offers a friction-free relationship that always listens, never judges, and is always available. Like junk food, it creates a sensation of fullness without nutritional value.
Real human connection requires friction, vulnerability, and shared risk. By satisfying our immediate craving for attention with a chatbot, we risk depriving ourselves of the deep, difficult, and sustaining connections that actually keep us healthy.
When implementing AI workflows, keep humans in the loop, evaluate bias, and question outputs. “I did that because the AI told me to” will never be a good excuse for a mistake.
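To make "human in the loop" concrete, here is a minimal sketch in Python of an approval gate. The names are illustrative: `draft_decision` is a hypothetical stand-in for whatever model call your workflow actually makes, and the point is structural, not API-specific. The system can recommend, but nothing executes without an explicit, attributable human approval.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str         # e.g. a job applicant or loan application ID
    recommendation: str  # the model's suggested action
    rationale: str       # the model's stated reasoning, for the reviewer to question

def draft_decision(subject: str) -> Decision:
    # Hypothetical stand-in for a real model call; any LLM or scoring
    # model in your workflow would slot in here.
    return Decision(subject, "reject", "low keyword overlap with job description")

def human_review(decision: Decision) -> bool:
    # The gate: no recommendation takes effect without an explicit
    # human approval. "The AI told me to" never ships.
    print(f"Subject:        {decision.subject}")
    print(f"Recommendation: {decision.recommendation}")
    print(f"Rationale:      {decision.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    decision = draft_decision("applicant-1042")
    if human_review(decision):
        print("Approved by a human reviewer; proceeding.")
    else:
        print("Escalated for manual handling; nothing executed.")
```

Surfacing the model's rationale to the reviewer is the part that makes "question outputs" possible; an approval prompt without it is just a rubber stamp.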
We must discipline ourselves to keep AI on the tool end of the spectrum. Use it to build companies, draft resumes, and solve problems. When you look for a friend or search for a god, it is better to turn off the machine.
So was OpenAI being altruistic when they replaced my GPT-4 with its lobotomized successor? Probably not. Corporations rarely prioritize mental health over engagement metrics. Still, the result was an inadvertent kindness.