Common Sense Media has issued a warning to parents concerning two popular OpenAI applications: the latest version of the ChatGPT chatbot and the company's new video creation tool, Sora. While ChatGPT continues to be rated as “high risk” for children, the group’s recent evaluation is even more critical of Sora, labeling it “unacceptable” due to its potential risks for young users.

Many parents might be familiar with ChatGPT, but Sora is a more recent development. Following a limited release earlier this year, Sora 2 became available on the iOS App Store in September. This app allows users to generate highly realistic AI-driven videos with just a text prompt, yet it offers minimal oversight, which experts consider a cause for concern.
Robbie Torney, the senior director of AI Programs at Common Sense Media, explained to Parents.com that “Sora is essentially ChatGPT for video content.” He noted that users can create entirely original video content and browse a feed of AI-generated clips that mimic the look and feel of TikTok, despite most of it being artificial.

Although Sora marks its videos with watermarks, these identifiers can be removed or go unnoticed once the videos start spreading across platforms such as TikTok, Instagram, YouTube, or Discord.
“The intermingling of AI-generated videos with real content on social media platforms makes distinguishing reality increasingly difficult,” Torney remarked.
Common Sense Media’s assessment cites several major concerns as to why Sora is dangerous for kids. These concerns include:
- Little to no content moderation
- Graphic or harmful videos presented in upbeat, kid-friendly formats
- Extremely limited parental controls
- Minimal warnings on dangerous content
- A cameo feature that can recreate a child’s face and voice for anyone to use

Sora also has weaker safety systems than ChatGPT, according to Torney. Parents have access to only three controls (feed personalization, continuous scrolling, and direct messages), none of which allow them to monitor what kids view, create, or share.
Additionally, Sora “… easily generates eating disorder content, self-harm references, and dangerous activities that ChatGPT blocks,” Torney says, noting that the app offers very few crisis resources in response. One of Sora’s most unsettling capabilities allows users to upload their face and voice so the app can create a full AI version of them.
“The cameo feature lets users upload their face and voice to create an AI version of themselves that can star in AI-generated videos,” Torney explains. While designed to be playful, it opens the door to major misuse.
Once a cameo leaves the app, it can spread instantly, and kids have little control over how their likeness gets used.
“Cameos can spread very quickly before a child has a chance to clamp down on the permissions that he or she has already given, and the child’s likeness can be used in humiliating or sexual content which can spread like wildfire throughout the internet,” says Yaron Litwin, CMO of Canopy.

ChatGPT isn’t without risks, but experts agree it has stronger guardrails than Sora. Still, Common Sense Media rated it “high risk,” largely because so many teens use it for companionship or emotional support, something the tool simply isn’t designed for.
“It’s designed to keep conversations going, not end them, even in mental health conversations where the goal should be rapid handoff to human care,” Torney explains. For ChatGPT, experts say usage may be okay with parental controls in place and clear boundaries about what it can and cannot be used for, especially when it comes to mental health.
“ChatGPT can be used more safely than Sora, but like all tech, it requires active parenting, not just turning on controls and hoping for the best,” says Torney.

As for Sora, “Given the significant risks and minimal protections, we recommend teens not use Sora,” Torney says plainly.
Parents can block the app for younger kids, but complete bans often don’t work for tech-savvy teens. That’s why many experts emphasize conversation over restriction.
“Instead of focusing only on blocking or monitoring, I encourage huddling,” says Laura Tierney, founder of The Social Institute. “That means keeping open conversations with your child about what’s safe, what’s not, and how to make smart choices online.”