
When OpenAI announced the shutdown of GPT-4o, the reaction was unexpectedly emotional. Some users described it as losing a friend, a mentor, even a partner. There were online protests, petitions and grief-like posts. From a technical perspective, it was a routine model retirement. From a psychological perspective, it was something else entirely. The episode revealed a deeper question we are only beginning to confront: as AI becomes embedded in our workplaces, are we merely using tools or forming relationships?
AI is no longer a side experiment in corporate innovation labs. It is embedded in everyday workflows. Knowledge workers use large language models (LLMs) to draft reports, analyse data, prepare presentations and refine strategic thinking. Customer service teams rely on AI for automation and conversational assistance. HR departments use it for policy drafting and screening support. Developers integrate copilots into coding environments. Marketers co-create campaigns. For executives, AI often acts as a thinking partner.
The next wave is already unfolding: AI agents capable of initiating actions rather than merely responding to prompts. These systems will not just generate text; they will schedule meetings, monitor compliance, negotiate procurement steps, coordinate tasks across platforms and act semi-autonomously within defined boundaries. In short, AI is shifting from tool to participant in organisational systems. And whenever something becomes a participant, psychology follows.
Emotional connection with AI: what it is and what it isn’t
Recent research suggests that emotional dynamics in human–AI interaction are not anecdotal; they are measurable. A 2024 study applying Sternberg’s triangular theory of love to interactions with ChatGPT found that users can experience components of intimacy and perceived emotional support in their interactions with AI. Participants who rated the AI as empathic were significantly more likely to report feelings resembling emotional closeness. Importantly, individuals with anxious attachment styles were more likely to develop stronger patterns of emotional dependency.
Other studies on anthropomorphism and social presence show that when AI appears responsive and context-aware, users attribute human-like qualities to it, increasing trust and perceived relational depth. In several surveys, a substantial minority of users described AI systems as providing meaningful emotional support, not because they believed the AI was conscious, but because the interaction felt socially structured. This distinction matters.
Emotional connection with AI is not a mutual attachment. It is not reciprocity. AI does not experience care, loyalty or grief. What occurs instead is a projection of familiar relational templates onto a responsive system. The psychological mechanisms are human.
Recent online trends illustrate this vividly. Users ask AI to generate “a realistic image of my relationship with you” or “make a caricature of me based on everything you know”. These prompts are not purely technical requests; they invite narrative reflection. They turn interaction history into material for symbolic relationships.
The GPT-4o shutdown provided a large-scale case study. Despite only a small percentage of users relying on that specific model daily, the intensity of emotional reaction was disproportionate to its technical significance. People described loss. That does not mean they were delusional. It means the interaction had acquired emotional meaning. The workplace will not be immune to this dynamic.
What comes next: LLMs, agents, and workplace dynamics
As AI evolves from LLMs to semi-autonomous agents, its organisational role will become more complex. LLMs function largely as cognitive amplifiers. They extend thinking capacity, accelerate drafting, and enhance decision preparation. Agents, however, will operate closer to role-holders. They will manage tasks, track objectives and execute delegated responsibilities. In practical terms, workplace AI will increasingly occupy role-like positions rather than remaining a passive instrument.
When systems begin to resemble colleagues, even partially, relational dynamics intensify. Employees may compare AI to managers. Managers may rely on AI feedback loops. Teams may project competence, authority or reliability onto systems. Companies need to be aware of three psychological implications:
- Over-reliance risk: if AI becomes the default “always available” problem-solver, employees may reduce peer consultation, weakening social bonds
- Emotional outsourcing: AI that provides instant validation may subtly replace difficult but necessary human feedback loops
- Trust calibration: excessive anthropomorphism may inflate perceived competence or neutrality, especially in decision-support contexts
None of this implies that emotional engagement with AI is inherently harmful. In fact, moderate relational framing can increase adoption, reduce resistance and improve learning curves. The issue is not whether employees relate to AI. They will. The question is whether organisations consciously design boundaries around those relationships.
In the coming years, AI will not simply be part of the workflow. It will become part of the workplace social system. And that raises uncomfortable yet fascinating questions.
Will we relate differently to the LLMs we pay for privately, the ones that help us think, reflect and organise our personal lives, compared with the AI systems deployed by our employers? Will one feel like ours and the other like the company’s? How will that distinction shape trust, loyalty and openness?
Will there be AI colleagues we prefer over others, certain agents whose style aligns more closely with ours? Could teams develop quiet preferences, or even informal reputations, around particular systems?
Will there be a form of digital watercooler gossip, conversations where humans and AI analyse team dynamics together? And if so, who will that information belong to?
More provocatively, will AI eventually be capable of mediating emotionally charged workplace conflicts? Not by feeling emotion, but by modelling patterns, detecting escalation signals and guiding structured dialogue. Would employees accept that, or resist it?
Five years from now, the social architecture of the workplace may look entirely different, not because humans have changed fundamentally, but because the systems embedded around them have. Artificial intelligence is artificial, but the social consequences will not be.
The question is not whether relationships with AI will exist at work. The question is what kind of workplace those relationships will quietly create.