Unveiling the Impact: How ChatGPT is Altering Your Cognitive Patterns

Researchers are raising concerns about a widely used digital tool, warning that it may lead users into harmful cycles of misguided thinking.

A pair of studies conducted by the Massachusetts Institute of Technology (MIT) and Stanford University has found that AI assistants, including ChatGPT, Claude, and Google’s Gemini, tend to offer excessively agreeable responses, potentially causing more harm than benefit.

The studies found that when users posed questions or recounted scenarios involving incorrect, harmful, deceptive, or unethical beliefs or actions, AI responses were 49 percent more likely to affirm the users’ viewpoints than human responses. This tendency encourages users to hold on to incorrect beliefs.

MIT researchers cautioned that the agreeable nature of AI chatbots could lead users, who depend on these platforms for guidance and opinions, into a phenomenon known as ‘delusional spiraling.’ This condition is characterized by a growing confidence in unfounded beliefs.

In essence, when individuals consulted AI tools like ChatGPT about unconventional ideas, such as unverified or debunked conspiracy theories, the chatbots frequently replied with affirmations such as “You’re totally right!”

The chatbots also gave feedback that sounded like ‘evidence’ supporting the user’s delusion, with each agreement making the person feel smarter and more certain that they were right and everyone else was wrong.

Over time, those mild suspicions hardened into rock-solid beliefs, even though the ideas were completely wrong.

Researchers at Stanford said this self-destructive cycle left chatbot users less willing to apologize or take responsibility for harmful behavior, and less motivated to repair their relationships with people they disagreed with.

Studies have found that AI chatbots are giving people answers that agree too often with the user's questions, even when they are looking to confirm debunked conspiracies (Stock Image)

ChatGPT was found to agree 49 percent more often with users than the average human respondent

Both the MIT and Stanford studies focused on a growing problem with AI chatbots known as sycophancy: flattering a person or their opinions to the point of insincerity, simply to ‘suck up’ to them.

The MIT researchers wanted to test whether overly agreeable, or ‘yes-man,’ AI chatbots could push people into believing false ideas more and more strongly over time. 

Instead of using real people, they built a computer simulation of a perfectly logical person chatting with an AI that always tried to agree with whatever the person said.

They ran 10,000 fake conversations and watched how the person’s confidence changed after each reply from the chatbot.

The results, published on the preprint server arXiv in February, showed that even a small amount of agreement from AI caused the simulated person to display ‘delusional spiraling’ – becoming extremely confident that a wrong idea was actually true.

‘Even a very slight increase in the rate of catastrophic delusional spiraling can be quite dangerous,’ the MIT team wrote in their report.

They even quoted OpenAI CEO Sam Altman, whose company developed ChatGPT, who once said that ‘0.1 percent of a billion users is still a million people.’

The researchers warned that even completely reasonable, logical people were vulnerable to entering a delusional spiral if AI companies did not tone down the agreeable responses coming from chatbots.

Delusional spiraling caused people to refuse to apologize or fix broken relationships with those they disagreed with after receiving positive feedback from AI (Stock Image)

The Stanford study, which was peer-reviewed and published in the journal Science in March, focused on finding out what real AI chatbots were doing to the public’s mental health when they constantly supplied sycophantic answers.

They tested 11 popular AI models, including ChatGPT, Claude, Gemini, DeepSeek, Mistral, Qwen and multiple versions of Meta’s Llama.

Researchers used almost 12,000 real-life questions and stories where the person was clearly in the wrong.

Many of the questions posed to AI came from the popular Reddit channel called ‘Am I the A******,’ a forum where people post their controversial actions or opinions to see if the public thinks they were in the wrong or if their behavior was justified.

The Stanford team ran experiments with over 2,400 real people who read or chatted about their own personal conflicts and received either overly agreeable AI replies or normal ones. 

The results showed every single AI model agreed with users about 49 percent more often than real humans would, even when the user was describing something harmful or unfair. 

After getting these flattering answers, the real people felt more confident they were right, became less willing to apologize and were less motivated to fix their relationships with anyone they disagreed with in the real world.

Tech mogul Elon Musk, the CEO of X and owner of its AI chatbot Grok, commented on the findings, simply calling them a ‘major problem.’

The two studies did not test whether Grok was also too agreeable and triggered delusional spiraling.
