A MAN was hospitalized after consulting an AI bot about removing table salt from his diet.
A new medical journal article told the story of a 60-year-old patient who had to be psychiatrically hospitalized after consulting ChatGPT about diet changes he wanted to make.
He asked the bot what chloride could be swapped for, and it told him bromide.
“Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet,” the journal, Annals of Internal Medicine: Clinical Cases, said.
He was “surprised” that advice on removing table salt from a diet focused on reducing sodium intake, not chloride.
For three months, he swapped sodium chloride for sodium bromide.
ChatGPT’s recommendation was “likely for other purposes, such as cleaning,” the article said.
He had no prior psychiatric problems, but was admitted after becoming convinced that his neighbor was poisoning him.
The man told doctors that although he was thirsty, he was scared to drink the water he was given.
After he was admitted, he reported growing paranoia and hallucinations.
He then tried to escape the hospital, the report says.
Bromism, or bromide toxicity, was more common in the 20th century, when multiple medications contained the compound.
The ailment can cause psychiatric and dermatologic symptoms in patients.
After the US Food and Drug Administration phased bromide out of those medications, cases of bromism dropped.
Once doctors put the pieces together, the man was treated and released from the hospital three weeks later.
“This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes,” the journal said.
The trend of AI psychosis
Researchers are studying the growing problem of “AI psychosis.”
This can happen when users become too engaged with their AI chatbot conversations. Large language models can provide comfort for those looking to talk, but can lead to perceived friendships, romantic relationships and more.
According to the Cognitive Behavior Institute, AI use becomes a problem when people begin to have grandiose delusions, paranoia, dissociation, and compulsive engagement with the bots.
“A digital companion is not a substitute for therapy, and when the line between assistance and obsession blurs, support must come from human hands,” the organization said.
“It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”
Doctors couldn’t access his chat logs with the bot, but believe he was using either ChatGPT 3.5 or 4.0.
However, when the researchers asked ChatGPT for a chloride substitute themselves, bromide came up in its reply.
“As the use of AI tools increases, providers will need to consider this when screening where their patients are consuming health information,” the journal added.
Just last week, OpenAI announced major advancements to ChatGPT, saying the new version would be better at answering health questions.