For anyone under 18 registering on Instagram, the platform automatically sets them up with restricted teen accounts unless they receive permission from a parent or guardian to switch. These accounts are private by default, come with usage limitations, and already filter out more “sensitive” content like promotions for cosmetic procedures. However, it’s common for young users to falsify their age upon registration. While Meta has started employing artificial intelligence to detect such discrepancies, the company has not disclosed how many adult accounts have been identified as belonging to minors since implementing the detection feature earlier this year.
The changes come as the social media giant faces relentless criticism over harms to children. As it seeks to add safeguards for younger users, Meta has already promised it wouldn’t show inappropriate content to teens, such as posts about self-harm, eating disorders or suicide.
But this does not always work. A recent report, for instance, found that teen accounts researchers created were recommended age-inappropriate sexual content, including “graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity.”
In addition, Instagram also recommended a “range of self-harm, self-injury, and body image content” on teen accounts that the report says “would be reasonably likely to result in adverse impacts for young people, including teenagers experiencing poor mental health, or self-harm and suicidal ideation and behaviors.”
Meta called the report “misleading, dangerously speculative” and said it misrepresents the company’s efforts on teen safety.
Josh Golin, the executive director of the nonprofit Fairplay, said he’s “very skeptical about how this will be implemented.”
“From my perspective, these announcements are about two things. They’re about forestalling legislation that Meta doesn’t want to see, and they’re about reassuring parents who are understandably concerned about what’s happening on Instagram,” he said.
“Splashy press releases won’t keep kids safe, but real accountability and transparency will,” Golin said, adding that passing the federal Kids Online Safety Act would push for this accountability.
Ailen Arreaza, executive director of ParentsTogether, was also skeptical.
“We’ve heard promises from Meta before, and each time we’ve watched millions be poured into PR campaigns while the actual safety features fall short in testing and implementation. Our children have paid the price for that gap between promise and protection,” Arreaza said. “While any acknowledgment of the need for age-appropriate content filtering is a step in the right direction, we need to see more than announcements — we need transparent, independent testing and real accountability.”
Meta says the new restrictions go further than its previous safeguards. Teens will no longer be able to follow accounts that regularly share “age-inappropriate content” or if their name or bio contains something that isn’t appropriate for teens, such as a link to an OnlyFans account. If teens already follow these accounts, they’ll no longer be able to see or interact with their content, send them messages, or see their comments under anyone’s posts, the company said. The accounts also won’t be able to follow teens, send them private messages or comment on their posts.
Meta said it already blocks certain search terms related to sensitive topics such as suicide and eating disorders, but the latest update will expand this to a broader range of terms, such as “alcohol” or “gore” — even if they are misspelled.
The PG-13 update will also apply to artificial intelligence chats and experiences targeted to teens, Meta said, “meaning AIs should not give age-inappropriate responses that would feel out of place in a PG-13 movie.”
For parents who want an even stricter setting for their kids, Meta is also launching a “limited content” restriction that will block more content and remove teens’ ability to see, leave, or receive comments under posts.
While some advocates worry that the announcement may give parents a false sense of security about the safety of their kids on Instagram, Desmond Upton Patton, a professor at the University of Pennsylvania who studies social media, AI, empathy and race, said it gives a “timely opening for parents and caregivers to talk directly with teens about their digital lives, how they use these tools, and how to shape safer habits that enable positive use cases.”
“I am especially glad to see changes around AI chatbots that make clear they are not human, they do not love you back, and should be engaged with that understanding,” he said. “It is a meaningful step toward a more joyful social media experience for teens.”