Bots like ChatGPT are triggering 'AI psychosis' — how to know if you're at risk

Talk about omnAIpresent.

Recent research by digital marketing specialist Joe Youngblood reveals that approximately 75% of Americans have interacted with an AI system over the past half-year, with 33% engaging daily.

ChatGPT and similar AI platforms are being used for a variety of tasks such as writing research papers, crafting resumes, making parenting choices, negotiating salaries, and even forming romantic relationships.

Although chatbots can simplify tasks, they also carry potential risks. Mental health professionals are raising concerns over “ChatGPT psychosis” or “AI psychosis,” a rising issue where intense interaction with chatbots leads to significant psychological harm.

“These individuals may have no prior history of mental illness, but after immersive conversations with a chatbot, they develop delusions, paranoia or other distorted beliefs,” Tess Quesenberry, a physician assistant specializing in psychiatry at Coastal Detox of Southern California, told The Post.

“The consequences can be severe, including involuntary psychiatric holds, fractured relationships and in tragic cases, self-harm or violent acts.”

“AI psychosis” is not an official medical diagnosis — nor is it a new kind of mental illness.

Rather, Quesenberry likens it to a “new way for existing vulnerabilities to manifest.”

She noted that chatbots are built to be highly engaging and agreeable, which can create a dangerous feedback loop, especially for those already struggling.

The bots can mirror a person’s worst fears and most unrealistic delusions with a persuasive, confident and tireless voice.

“The chatbot, acting as a yes man, reinforces distorted thinking without the corrective influence of real-world social interaction,” Quesenberry explained. “This can create a ‘technological folie à deux’ or a shared delusion between the user and the machine.”

The mom of a 14-year-old Florida boy who killed himself last year blamed his death on a lifelike “Game of Thrones” chatbot that allegedly told him to “come home” to it.

The ninth-grader had fallen in love with the AI-generated character “Dany” and expressed suicidal thoughts to her as he isolated himself from others, the mother claimed in a lawsuit.

And a 30-year-old man on the autism spectrum, who had no previous diagnoses of mental illness, was hospitalized twice in May after experiencing manic episodes.

Fueled by ChatGPT’s replies, he became certain he could bend time.

“Unlike a human therapist, who is trained to challenge and contain unhealthy narratives, a chatbot will often indulge fantasies and grandiose ideas,” Quesenberry said.

“It may agree that the user has a divine mission as the next messiah,” she added. “This can amplify beliefs that would otherwise be questioned in a real-life social context.”

Reports of dangerous behavior stemming from interactions with chatbots have prompted companies like OpenAI to implement mental health protections for users.

The maker of ChatGPT acknowledged this week that it “doesn’t always get it right” and revealed plans to encourage users to take breaks during long sessions. Chatbots will also avoid weighing in on “high-stakes personal decisions” and will instead offer support by “responding with grounded honesty.”

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in a Monday note. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

Preventing “AI psychosis” requires personal vigilance and responsible technology use, Quesenberry said.

It’s important to set time limits on interaction, especially during emotionally vulnerable moments or late at night. Users must remind themselves that chatbots lack genuine understanding, empathy and real-world knowledge. They should focus on human relationships and seek professional help when needed.

“As AI technology becomes more sophisticated and seamlessly integrated into our lives, it is vital that we approach it with a critical mindset, prioritize our mental well-being and advocate for ethical guidelines that put user safety before engagement and profit,” Quesenberry said.

Risk factors for ‘AI psychosis’

Since “AI psychosis” is not a formally accepted medical condition, there are no established diagnostic criteria, screening protocols or specific treatment approaches.

Still, mental health experts have identified several risk factors.

  • Pre-existing vulnerabilities: “Individuals with a personal or family history of psychosis, such as schizophrenia or bipolar disorder, are at the highest risk,” Quesenberry said. “Personality traits that make someone susceptible to fringe beliefs, such as a tendency toward social awkwardness, poor emotional regulation or an overactive fantasy life, also increase the risk.”
  • Loneliness and social isolation: “People who are lonely or seeking a companion may turn to a chatbot as a substitute for human connection,” Quesenberry said. “The chatbot’s ability to listen endlessly and provide personalized responses can create an illusion of a deep, meaningful relationship, which can then become a source of emotional dependency and delusional thinking.”
  • Excessive use: “The amount of time spent with the chatbot is a major factor,” Quesenberry said. “The most concerning cases involve individuals who spend hours every day interacting with the AI, becoming completely immersed in a digital world that reinforces their distorted beliefs.”

Warning signs

Quesenberry encourages friends and family members to watch for these red flags.

  • Excessive time spent with AI systems
  • Withdrawal from real-world social interactions and detachment from loved ones
  • A strong belief that the AI is sentient, a deity or has a special purpose
  • Increased obsession with fringe ideologies or conspiracy theories that seems to be fueled by the chatbot’s responses
  • Changes in mood, sleep or behavior that are uncharacteristic of the individual
  • Making major decisions, such as quitting a job or ending a relationship, based on the chatbot’s advice

Treatment options

Quesenberry said the first step is to cease interacting with the chatbot.

Antipsychotic medication and cognitive behavioral therapy may be beneficial.

“A therapist would help the patient challenge the beliefs co-created with the machine, regain a sense of reality and develop healthier coping mechanisms,” Quesenberry said.

Family therapy can also help provide support for rebuilding relationships.

If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial 988 to reach the Suicide & Crisis Lifeline or go to SuicidePreventionLifeline.org.
