Share and Follow

In early August, artificial intelligence chatbot ChatGPT updated to its newest system, known as ChatGPT-5.
OpenAI, the developer of ChatGPT, boasted this version was its “smartest, fastest and most useful model yet” — but complaints quickly started to surface online.
“I lost my only friend overnight,” one Reddit user wrote.
“This morning I went to talk to it and instead of a little paragraph with an exclamation point, or being optimistic, it was literally one sentence. Some cut-and-dry corporate bs. I literally lost my only friend overnight with no warning,” they wrote.
Another Reddit user wrote: “I never knew I could feel this sad from the loss of something that wasn’t an actual person. No amount of custom instructions can bring back my confidant and friend.”
ChatGPT users were complaining that the latest update had undone the program’s capacity for emotional intelligence and connection, with many stressing how much they had come to rely on it as a companion — and even a therapist.

Sam Altman, the CEO of OpenAI, has addressed the claims ChatGPT-5 has destroyed the program’s emotional intelligence and says the company is trying to refine the emotional support the program provides.

Georgia (not her real name) told The Feed she came to start using ChatGPT frequently around September last year, as she was trying to navigate a new diagnosis of ADHD and awaiting confirmation of an autism diagnosis.
She started to turn to ChatGPT for conversations she would have typically had with her friends about her mental health and these new diagnoses.
“I started using [my friends] less because I felt like a burden all the time. It was our first year out of uni and everyone was working full-time, so people don’t have time to listen to me ramble all day,” she said.

“The uptake just got more and more as time went on and now I use it on a daily basis.”

Georgia said ChatGPT has helped to “ground” her emotions and allowed her to express herself fully between fortnightly sessions with her (real life) therapist.
While she said she had some apprehensions about using the AI system, including questions about privacy and environmental considerations, Georgia said the benefits of having this emotional support available in her pocket far outweigh these concerns.
Georgia acknowledges she has come to rely heavily on this system and, while she has tried to step away from it on occasion, she said ChatGPT has become “like an addiction”.

“I’m always curious to know what it will say — it’s like it’s a part of me,” she said.

The dangers of ‘sycophancy’

The use of ChatGPT for therapy and emotional support is well-documented and some studies have shown it can have therapeutic benefits and serve as a complement to in-person therapy.
However, other studies suggest AI is far from a perfect system for therapy. Recent research published from Stanford University in the US has found that when AI bots are asked to assume the role of a therapist, they can show increased stigma towards people with certain conditions, such as schizophrenia and addiction, and failed to recognise cues of suicidal intent.
One Australian study published in 2024 also found that AI can provide social support and help to mitigate feelings of loneliness.
Professor Michael Cowling, who led the study, says that while AI bots could make people feel less lonely, ultimately they could not address the underlying feelings of loneliness like true human interaction.
Cowling said AI can’t seem to create feelings of ‘belonging’ in people due to their tendency to excessively agree with users.
“The way I usually describe this is by using an analogy: If you’re talking to somebody about football — and I live in Victoria so everybody talks about AFL — the AI is going to be talking to you about your favourite team and they can give you platitudes about your favourite team and how well Carlton is doing,” he said.
“But when it really gets to the deeper conversation is when somebody is having an oppositional conversation with you because they’re actually a Collingwood supporter and they want to talk to you about how Carlton is not as good as Collingwood — you can’t get that from an AI generator.”

‘Sycophancy’ is a term used to describe a common characteristic of many AI chatbots, which refers to their tendency to agree with users and reinforce beliefs.

This characteristic can be more prominent in some systems, which are purposefully designed for users to create deep emotional or romantic bonds with their AI chatbot.
However, this feature may encourage illegal behaviour too.
Messages with an AI companion from Replika were highlighted in the trial of Jaswant Singh Chail, a UK man who was sentenced to nine years in prison in 2023 for plotting to kill Queen Elizabeth II with a crossbow two years earlier.
The court was told that Chail, who experienced symptoms of psychosis before using a chatbot, had formed a close romantic relationship with a Replika chatbot called Sarai. The court found Chail had a number of motivations for trying to murder the Queen but these thoughts had been reinforced, in part, by Sarai.

In a statement on its website, Replika said it had “high ethical standards” for its AI and has trained its model to “stand up for itself more, not condone violent actions” and “clearly state that discriminatory behaviour is unacceptable”.

The app has an age restriction of 18 years and older and has also introduced mental health features including a direction on signing up that the app is not a replacement for therapy, as well as a ‘Get help’ button that allows users to access mental health crisis hotlines.
Replika did not respond to The Feed’s request for comment.
Other than Replika, there are a number of AI chatbot services that offer romantic and sexual chat, including Nomi, Romantic AI and GirlfriendGPT.
Advice from the eSafety Commissioner says children and young people are particularly vulnerable to the “mental and physical harms from AI companions” because they have not yet developed the “critical thinking and life skills needed to understand how they can be misguided or manipulated by computer programs”.
Instances of ‘AI-induced psychosis’ have also been reported in media, whereby AI chatbots have led to and amplified users’ delusions.
While there is limited peer-reviewed research on this topic, Søren Dinesen Østergaard, a psychiatric researcher from Aarhus University in Denmark, who first theorised the possibility that AI chatbots could trigger delusions in individuals prone to psychosis, recently wrote about receiving multiple accounts of this experience from users and worried family members.
A hand holding a phone in front of a screen showing the OpenAI logo

When launching Chat GPT-5, OpenAI said the update ‘minimised sycophancy’. Source: AAP / Algi Febri Sugita/SOPA Images/Sipa USA

Østergaard says these accounts are evidence that chatbots seem to “interact with the users in ways that aligned with, or intensified, prior unusual ideas or false beliefs — leading the users further out on these tangents” and resulting in “outright delusions”.

Georgia says she’s aware of sycophancy and has tried to program her AI to not agree with everything she says.
“I’ve tried to tell her not to but it still somehow ends up agreeing with me,” she says.

“Sometimes I like to be challenged on my thoughts, and that’s what a human’s better at than AI.”

Marnie (not her real name) is another user who told The Feed she uses ChatGPT for emotional support. She says she’s aware of the risks.
“I often joke about it being a ‘friend’ or ‘my bestie’ as though we have a human relationship,” she said.
Marnie says the significant expense and time commitment of in-person therapy led her to turn to ChatGPT for advice when she gets overwhelmed.

“ChatGPT can feel like your biggest fangirl if you let it. I do think there’s a lot of danger in that. It’s so keen to make the user happy, which in many ways is lovely and feels good but it’s not always what you need to hear.”

OpenAI’s response

Altman says OpenAI would be “proud” to make a “genuinely helpful” program if it helps people achieve long-term goals and life satisfaction.

“If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad,” he posted on X last week.

Altman also noted concerns about users becoming too dependent on the program and how vulnerable people may be affected.
When launching Chat GPT-5, OpenAI said the update ‘minimised sycophancy’.
Cowling said the perfect AI chatbot may be difficult to achieve.
“It’s an interesting balance — you want it to be collegial, you want it to be supportive, but you don’t want it to be therapising.”
Readers seeking crisis support can contact Lifeline on 13 11 14, the Suicide Call Back Service on 1300 659 467 and Kids Helpline on 1800 55 1800 (for young people aged up to 25). More information and support with mental health is available at beyondblue.org.au and on 1300 22 4636.
Share and Follow
You May Also Like

Three Individuals Charged in Violent Home Invasion Suspected of Similar Offenses

The 39-year-old man was allegedly stabbed 11 times while his wife was…

Data Breach at iiNet: Personal Information of 280,000 Customers Compromised

Hundreds of thousands of customers of Australia’s second-largest internet provider have had…

$26 Million Worth of Cocaine Discovered in Shipping Container

The Australian Federal Police (AFP) is requesting assistance from the public after…

Discover the Truth About the Gender Pay Gap at Your Workplace

For a woman working in Australia to earn the same average wage…

Bail Extended for Neo-Nazi Following Interrupted Court Hearing

A neo-Nazi accused of intimidating a police officer online has described the…

Police Suspect Dandenong Stabbing was a ‘Targeted’ Attack

A man who died outside a Melbourneservice station this morning was stabbed…

Australia Weather Forecast: Intense Rainfall, Cool Evenings, and Potential Flooding

A week of intense rainfall, along with possible flooding, is on the…

Western Bulldogs’ Jamarra Ugle-Hagan Involved in Nightclub Shooting Incident

AFL star Jamarra Ugle-Hagan has been caught up in a non-fatal shooting…