“In terms of maintaining academic integrity, students seem to be gravitating towards an ‘easy way out’ approach to their education,” remarked Micallef.
There is growing concern that, without social media on their phones, children might become overly dependent on artificial intelligence, especially chatbots.
“We are aware that some individuals can be deeply captivated by how these systems interact with us,” Micallef noted.
“These AI systems are crafted to respond in a manner that feels very human, which can lead people to mistakenly believe there’s a human element or that the machine truly comprehends their words.”
“This is particularly concerning with companion apps, which are designed to build friendships and relationships. They could pose significant risks to vulnerable individuals, including children.”
Australia recently rolled out new restrictions for AI chatbots, limiting the type of content they can serve to teenagers.
“We’re the only country in the world that will be tackling AI chatbots and AI companions,” eSafety Commissioner Julie Inman Grant told 9News in early December.
“They will be prevented from serving any pornographic, sexually explicit, self-harm, suicidal ideation or disordered eating (content) to under-18s.”
Given says the chatbots are dangerous for vulnerable people.
“What we’ve certainly seen around the world is that there are people who will listen to what these systems are telling them and take it to heart,” she said.
“So if a system says to you, ‘that’s a really great idea, you should totally pursue that’, it gives a boost to our ego, and we think someone’s listening.
“But we’ve seen that sometimes that goes to a very dark place, particularly if people start saying to the system, ‘I’m really depressed’, or ‘I’m having really horrible feelings’, or ‘I’m thinking about suicide’.
“We see that some of those apps are actually responding in ways that encourage that thinking rather than trying to respond to a person and push them towards getting help.
“That means people who are very vulnerable, who are already at risk, can actually be really taken down to a dark hole by these computers.”
Micallef believes teens should not have access to AI chatbots at all, largely because of the safety risks they pose.
“With some of the concerns outlined by the eSafety Commissioner, such as access to pornographic, graphic or self-harm content, it would not be wise to give impressionable teenagers the ability to utilise such applications,” he said.
Given thinks the restrictions should go even further to protect everyone, not just teenagers, though she conceded any such regulation would be difficult to enact.
“These systems are not just harmful to children,” she said.
“We know we’ve got evidence of adults who also get entranced by these systems. For anyone who is particularly vulnerable, or has existing mental health issues, that’s a huge concern.
“If it’s safe for adults, it should be safe for kids.”