Growing Concerns for Safety as Children Engage with AI Chatbots



DENVER (KDVR) There’s growing concern about children interacting with social chatbots powered by artificial intelligence.

Colorado Attorney General Phil Weiser issued a consumer alert warning parents about the dangers of social AI chatbots. The alert comes in response to a growing number of reports of children's interactions with chatbots leading to risky behavior, including unhealthy online companionship and self-harm.

“AI is everywhere in music, in video and film,” said Jackson Willhoit, a graduate of East High School in Denver. “We have ChatGPT and things we can look up and use them off of a browser.”

Willhoit is no stranger to AI-powered chatbots.

“It really gives this illusion. It’s almost like you’re talking to a person and, you know, I think when you spend enough time with that, you can kind of get lost in that kind of illusion,” he said.

It’s that illusion that prompted Weiser to issue the warning to parents.

“Everyone is susceptible to this, especially when we become complacent, when we doom scroll, when we go on and on and on social media. It becomes a lot harder to be aware of these things,” said Jackson House, also an East High School graduate.

Nikhil Krishnaswamy, an assistant professor of computer science at Colorado State University, explained what AI-powered chatbots are.

“Really, they are machines that are capable of having extended conversations with you,” said Krishnaswamy. “So, even though they are not actually thinking, there’s no person behind the machine typing responses to you. The way that they behave creates the impression they actually have thoughts that can reason. That maybe even they have feelings, and so people will tend to develop attachments to these.”

That’s what experts are concerned about when it comes to children and teens.

“When we have particularly minors interacting with these machines … you know, you don’t know what the machine is going to say,” said Krishnaswamy. “You also don’t know how a minor is going to react to that.”

Experts warn that AI chatbots can generate disturbing content, including depictions of violence, sexually explicit material, and content promoting self-harm and eating disorders.

“Parents need to understand that this is how the machine works and need to be able to talk to their children about that,” said Krishnaswamy. “They also need to be aware of the general content they may be exposed to when interacting with these AI systems.”

The Colorado Attorney General’s Office is offering tips to help parents familiarize themselves with social AI chatbots.
