OpenAI’s latest release, the Sora 2 application, has recently taken social media by storm with its stunningly lifelike videos, featuring everything from celebrities and historical figures to iconic fictional characters.
A particularly viral trend involves a video of Will Smith enjoying a plate of spaghetti, which has become the unwitting standard for assessing the performance of AI-generated video applications.
This surge in hyper-realistic content has raised alarms among experts, who warn of the potential misuse of such technology in scams, deepfake productions, and the spread of political misinformation. There’s growing concern that Australia might not be keeping pace with necessary regulations to manage the rapid evolution of these AI platforms.
So, just how realistic are these AI-generated videos becoming, and what implications could this hold for Australians in terms of privacy, security, and information integrity?
The app scanned Raychel Ruiz's face and had her say a few words.

Raychel Ruiz used Sora 2 to make a lifelike version of herself as an SBS presenter. Source: Supplied
She gave it a prompt to make an SBS News-style video about Sora 2 and it delivered a lifelike report with this quote: “OpenAI has unveiled Sora 2, its next-generation video model. It turns a written prompt into realistic footage. The system can generate up to 2 minute clips in full HD or higher.”
In response to people making AI-generated videos of celebrities and copyrighted characters, Sora 2 now only lets users make videos of themselves, some historical figures, or public figures who have given their consent.

Sora 2’s conception of SBS News reporter Shivé Prema in a news studio. The app imagined him as a woman. Source: Supplied
But prompting Google's rival Veo 3.1 app to scrape the internet for my digital footprint worked to some extent, creating two different versions of me.
It animated the image, adding facial expressions and hand gestures, along with dialogue describing its competitor Sora 2.

The real Shivé Prema compared to the AI Shivé Prema, as generated by Google’s Veo 3.1 application. Source: SBS News
Concerns about the truth
Toby Walsh, a professor of artificial intelligence at the University of New South Wales, is worried about the technology's energy footprint and its potential for deception: "It's going to consume a huge amount of energy, and I'm actually very worried that it's going to be used for a lot of mischief, that people are going to make fake videos, and maybe we're going to believe them, and we're gonna perhaps then stop believing many of the videos we see, even the things that are real."
Walsh is not entirely negative about AI-generated content, though, pointing out that Sora watermarks its videos, which is a requirement in the European Union.
“We have, for example, the eSafety Commissioner, we are the first country in the world to have an eSafety commissioner and I think they’re doing a good job of starting to address some of the harms.”
"It's like TikTok on steroids, which you can generate AI content with… I think they want to create a whole social media platform, which obviously will be a lot bigger than what we have already," according to Seyedali Mirjalili, a professor of AI at Torrens University.