Sam Altman, the CEO of OpenAI, has addressed claims that the GPT-5 update has destroyed ChatGPT's emotional intelligence, saying the company is working to refine the emotional support the chatbot provides.
“The uptake just got more and more as time went on and now I use it on a daily basis.”
“I’m always curious to know what it will say — it’s like it’s a part of me,” she said.
The dangers of ‘sycophancy’
‘Sycophancy’ describes a common characteristic of many AI chatbots: their tendency to agree with users and reinforce their existing beliefs.
In a statement on its website, Replika said it holds its AI to “high ethical standards” and has trained its model to “stand up for itself more, not condone violent actions” and “clearly state that discriminatory behaviour is unacceptable”.

When launching GPT-5, OpenAI said the update ‘minimised sycophancy’. Source: AAP / Algi Febri Sugita/SOPA Images/Sipa USA
Østergaard says these accounts are evidence that chatbots seem to “interact with the users in ways that aligned with, or intensified, prior unusual ideas or false beliefs — leading the users further out on these tangents”, resulting in “outright delusions”.
“Sometimes I like to be challenged on my thoughts, and that’s what a human’s better at than AI.”
“ChatGPT can feel like your biggest fangirl if you let it. I do think there’s a lot of danger in that. It’s so keen to make the user happy, which in many ways is lovely and feels good but it’s not always what you need to hear.”
OpenAI’s response
“If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad,” he posted on X last week.