Shortly after its debut, Grokipedia faced scrutiny from outlets like The Guardian for displaying biases and spreading misinformation, including content aligned with Russian narratives. For instance, the site’s coverage of Russia’s invasion of Ukraine controversially described the military action as a mission to “demilitarize and denazify Ukraine,” echoing propaganda rhetoric.
Understanding the potential pitfalls of large language models (LLMs) is crucial, Miller warns: just as we should not automatically trust the top result in a Google search, we should remain sceptical of LLM outputs, which can be erroneous and susceptible to manipulation.
Recently, Mike Burgess, the head of the Australian Security Intelligence Organisation (ASIO), warned that artificial intelligence could escalate online radicalisation and misinformation to unprecedented levels.
Burgess emphasized how certain groups leverage social media to disseminate inflammatory and divisive messages, particularly in the context of anti-immigration demonstrations and pro-Palestinian rallies.
“They use social media to spread vitriolic, polarising commentary on anti-immigration protests and pro-Palestinian marches.”

Delivering an address at the Lowy Institute, Burgess said the agency had “recently uncovered links between pro-Russian influencers in Australia and an offshore media organisation that almost certainly receives direction from Russian intelligence”.
In February this year, non-profit think tank Reset Tech published a paper on AI risks in Australia. One of the concerns outlined in the paper relates to the way AI and LLMs are influencing the information ecosystem.
“LLMs have been trained with a large amount of data. So there is no right or wrong inside the model,” Tian says.
“When they generate [the answers], they will just grab the highest probability tokens and put them into the sentence.
“It’s basically just based on the probability of when they created the sentence, and put all the tokens together as an output for the users,” Tian says.
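To make that mechanism concrete, here is a minimal sketch in Python, using invented toy counts rather than any real model or dataset, of why a training corpus flooded with a claim makes a probability-driven generator more likely to repeat it:

```python
from collections import Counter

# A toy illustration of Tian's point: a generator that picks the
# highest-probability continuation reflects frequency in its training
# data, not truth. Real LLMs use neural networks rather than raw
# counts, but the intuition is similar. All data here is invented.

corpus = (
    ["the invasion was unprovoked"] * 3   # scarce accurate coverage
    + ["the invasion was justified"] * 9  # mass-produced propaganda copies
)

# Estimate the probability of each continuation of "the invasion was ..."
continuations = Counter(text.rsplit(" ", 1)[-1] for text in corpus)
total = sum(continuations.values())

for word, count in continuations.most_common():
    print(f"P({word!r}) = {count / total:.2f}")

# A greedy decoder simply emits the highest-probability continuation:
print("output: the invasion was", continuations.most_common(1)[0][0])
# P('justified') = 0.75
# P('unprovoked') = 0.25
# output: the invasion was justified
```

Nothing in this mechanism checks whether the claim is true; flood the corpus and the “highest probability tokens” Tian describes shift accordingly.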
AI chatbots ‘groomed’ with propaganda
According to the Pravda dashboard, created by CheckFirst and DFRLab, more than six million false articles have been generated by the Pravda network, with five million of those repurposed in multiple languages.

Speaking with SBS News, Guillaume Kuster, CEO and co-founder of CheckFirst, called the network “a laundering machine for Russian narratives and propaganda”.
NewsGuard is a US-based media organisation that tracks false claims online and the perpetrators of misinformation and disinformation. It has also published its own investigation into the Pravda network and the way the network feeds its narratives into chatbots.
“There’s a lack of transparency from AI companies who don’t really explain where the data comes from, how it’s vetted, and how AI chatbots recognise the credibility of a source.

“I think individual users have to be very wary of that and always have a critical mindset and always look at the sources that are cited.”
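That advice can be partly automated. Below is a minimal sketch, assuming a hypothetical blocklist (the domains are placeholders, not real Pravda-network sites), of how the URLs a chatbot cites could be cross-checked against known disinformation outlets:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known disinformation domains. These entries
# are invented placeholders, not an actual list of Pravda-network sites.
KNOWN_DISINFO_DOMAINS = {
    "example-pravda-network.com",
    "example-propaganda-mirror.net",
}

def flag_suspect_citations(cited_urls: list[str]) -> list[str]:
    """Return the cited URLs whose domain appears on the blocklist."""
    suspect = []
    for url in cited_urls:
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in KNOWN_DISINFO_DOMAINS:
            suspect.append(url)
    return suspect

citations = [
    "https://www.example-pravda-network.com/story/123",
    "https://www.reuters.com/world/",
]
print(flag_suspect_citations(citations))
# -> ['https://www.example-pravda-network.com/story/123']
```

A real workflow would pull the blocklist from a maintained source such as a fact-checking organisation’s published dataset, but the principle is the one in the quote: check where the citations actually come from.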
‘After influence, not lies’
“In the way that you want your friends to be right and you care about what they think, whether they’re right or not.”
“We need frameworks that mitigate serious risks, including sovereign risks, biases, disinformation, propaganda, foreign intellectual interference, online harm, cybercrime and copyright violations.”
“But on the other side of the fence … what are the implications of AI if they’re not used safely and responsibly? And what are the harms that could actually take place? I really think we need to come to a balanced view on this.”
‘A layered approach’ towards disinformation
“It has been the target of persistent foreign interference originating from China for decades now. Taiwan has accepted, when you have this scale of misinformation, no single intervention is going to be a silver bullet,” she says.
“So that bill was withdrawn … But I do think there is room to open that conversation again in light of what we’ve seen in this year’s federal election and the level of misinformation that we saw,” Shen says.
This story was produced in collaboration with SBS Audio. It was part of a research trip hosted by the German Federal Foreign Office in cooperation with the National Press Club of Australia.