In late October, some users of X (formerly Twitter) reported strange interactions with the platform’s AI-powered chatbot, Grok. Developed by xAI, a company owned by tech billionaire Elon Musk, Grok is designed to access information across the platform in real time and publish posts to X in response to prompts from other users.
Responding to one such prompt, the chatbot posted a statement suggesting that American political commentator Tucker Carlson is more heroic than Ukraine’s president Volodymyr Zelenskyy.
Carlson embodies “true heroism”, Grok said, “by dismantling establishment myths despite cancellation attempts and media blackouts”.

The post contrasted the two figures: it acknowledged the widespread admiration Zelenskyy has earned for his wartime leadership, strengthened by international support and aid, but cast Carlson’s stance against powerful entities as a different kind of bravery, one not cushioned by global backing.

This post and others featuring spurious claims prompted discussion over several days, with users trying to challenge Grok’s assertions. The bot eventually posted a denial, stating that it “never ranked Carlson’s heroism above Zelenskyy’s” and calling the claim “misattribution”.
Around the same time, Musk launched Grokipedia, an AI-powered online encyclopedia designed to rival Wikipedia. Musk has previously described Wikipedia as “left-biased” and claimed his new platform would “purge out the propaganda”.

Shortly after its debut, Grokipedia faced scrutiny from outlets like The Guardian for displaying biases and spreading misinformation, including content aligned with Russian narratives. For instance, the site’s coverage of Russia’s invasion of Ukraine controversially described the military action as a mission to “demilitarize and denazify Ukraine,” echoing propaganda rhetoric.

The episode has raised concerns about the potential for large language models (LLMs) such as Grok, which generate text after being trained on data scraped from the internet, to be co-opted to spread propaganda and other disinformation online.
Carl Miller is a co-founder of the Centre for the Analysis of Social Media at UK think tank Demos. He says digital literacy is critical, particularly given the propensity for LLMs to regurgitate false information.

Miller warns that just as we should not automatically trust the top result of a Google search, we should remain sceptical of LLM outputs, which can be wrong and are susceptible to manipulation.

Recently, Mike Burgess, the head of the Australian Security Intelligence Organisation (ASIO), warned that artificial intelligence could drive online radicalisation and misinformation to unprecedented levels.

With growing reliance on AI chatbots and LLMs, experts are calling for stronger legislation to guard against foreign interference and propaganda.

Burgess emphasised how certain groups leverage social media to spread inflammatory, divisive messaging, particularly around anti-immigration demonstrations and pro-Palestinian rallies.

Delivering his address at the Lowy Institute, Burgess said the agency had “recently uncovered links between pro-Russian influencers in Australia and an offshore media organisation that almost certainly receives direction from Russian intelligence”.
He said Russian cyber operatives have inflamed community tensions in Europe by spreading false news, and Australia was “not immune”.
“Deliberately hiding their connection to Moscow — and the likely instruction from Moscow — the propagandists try to hijack and inflame legitimate debate,” Australia’s spy chief said.

“They use social media to spread vitriolic, polarising commentary on anti-immigration protests and pro-Palestinian marches.”

Head of ASIO Mike Burgess delivering his address at the Lowy Institute. Source: AAP / Mick Tsikas

In February this year, non-profit think tank Reset Tech published a paper on AI risks in Australia. One of the concerns outlined in the paper relates to the way AI and LLMs are influencing the information ecosystem.

“Generative AI is not a research tool; it is a probability machine,” the paper reads.
“The outputs have nothing to do with the truth, and as AI is increasingly trained on the reams of synthetic AI-generated content flooding the internet, the risks of incoherence, bias, and ultimately model collapse, only grow as AI effectively eats itself.”
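
To make that “AI eats itself” dynamic concrete, here is a minimal toy sketch (ours, not from the Reset Tech paper): a “model” that simply fits a mean and spread to its data is retrained, generation after generation, only on its own synthetic samples, and the diversity of its output tends to decay.

```python
# Toy illustration of "model collapse": a model retrained on its own
# synthetic output tends to lose the diversity of the original data.
# A fitted mean and spread stand in for a real generative model here.
import random
import statistics

random.seed(1)

# "Real" data: a small, spread-out sample.
data = [random.gauss(0, 10) for _ in range(10)]

for generation in range(31):
    mu = statistics.fmean(data)     # "train" on the current data
    sigma = statistics.stdev(data)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread = {sigma:.2f}")
    # The next generation is trained only on synthetic samples
    # drawn from this model; the spread typically drifts downward.
    data = [random.gauss(mu, sigma) for _ in range(10)]
```

Nothing about this toy resembles a real LLM, but it shows the paper’s core worry: each round of training on synthetic output compounds the previous round’s sampling noise and bias.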
Dr Lin Tian, a research fellow at the University of Technology Sydney who examines disinformation detection on social media, explains to SBS News that an LLM is “a model that tries to talk like a human, but it’s purely based on the training that has been fed through”.

“LLMs have been trained with a large amount of data. So there is no right or wrong inside the model,” Tian says.

“When they generate [the answers], they will just grab the highest probability tokens and put them into the sentence.”

She explains this is how so-called “hallucinations” can occur: when the AI produces an output that is factually wrong.

“It’s basically just based on the probability of when they created the sentence, and put all the tokens together as an output for the users,” Tian says.
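
As a minimal sketch of what Tian describes, consider greedy decoding over a hard-coded toy lookup table (the vocabulary and probabilities below are invented for illustration, not taken from any real model): the generator always grabs the highest-probability next token, and the result can be fluent yet false.

```python
# Minimal sketch of greedy decoding: at each step, take the single
# highest-probability next token, regardless of whether the result
# is true. The "model" is a hard-coded toy table, not a real LLM.

toy_model = {
    # two-word context   -> candidate next tokens with probabilities
    ("the", "capital"):    {"of": 0.9, "city": 0.1},
    ("capital", "of"):     {"australia": 0.6, "france": 0.4},
    ("of", "australia"):   {"is": 0.95, "was": 0.05},
    ("australia", "is"):   {"sydney": 0.55, "canberra": 0.45},  # wrong answer wins
}

def greedy_next(context):
    """Return the highest-probability next token for a two-word context."""
    candidates = toy_model[context]
    return max(candidates, key=candidates.get)

tokens = ["the", "capital"]
while tuple(tokens[-2:]) in toy_model:
    tokens.append(greedy_next(tuple(tokens[-2:])))

# Prints "the capital of australia is sydney": fluent, confident, false.
print(" ".join(tokens))
```

There is no notion of truth anywhere in the loop, only probabilities, which is exactly why a statistically plausible continuation can be a factual hallucination.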

AI chatbots ‘groomed’ with propaganda

Earlier this year, CheckFirst, a Helsinki-based software company researching foreign interference, published two joint investigations into Russia’s Pravda network in collaboration with the Atlantic Council’s Digital Forensic Research Lab (DFRLab). The network operates in several languages and across several countries, generating disinformation articles and amplifying its narratives through Wikipedia, AI chatbots and the social media platform X.

According to the Pravda dashboard, created by CheckFirst and DFRLab, more than six million false articles have been generated by the Pravda network, with five million of those repurposed in multiple languages.

A graph of Pravda articles published over time, from the dashboard created by CheckFirst and DFRLab. Credit: SBS News

Speaking with SBS News, Guillaume Kuster, CEO and co-founder of CheckFirst, called the network “a laundering machine for Russian narratives and propaganda”.

“[These] websites are not publishing any original content. What they do is they repost. It could be content coming from sanctioned media organisations in the EU, such as Sputnik or Russia Today. It can be content coming from known propaganda telegram channels, from X accounts, and so on and so forth,” he says.
“One consequence of having [these] articles readily available online is that they are used by traditional knowledge dissemination platforms such as Wikipedia or chatbots.”
Kuster says the CheckFirst investigation found nearly 2,000 links to Pravda websites on Wikipedia.
“We found it quite concerning that a network of known Russian propaganda was used to alter facts on the world’s biggest free encyclopedia.”
Kuster adds that while researchers cannot conclude that the Pravda network was designed specifically for “AI grooming”, they have been able to demonstrate that it is happening as a consequence of the network’s activities online.
“So we and others, such as the American Sunlight Project and NewsGuard, have verified that popular chatbots such as Copilot, Gemini, ChatGPT and others would spit out some content coming from Pravda.”

NewsGuard is a US-based media organisation that tracks false claims online and perpetrators of misinformation and disinformation. The organisation also published its own investigation into the Pravda network and the way it feeds its narratives into chatbots.

Isis Blachez, an analyst with NewsGuard, tells SBS News the risk of having a large volume of articles repeating false claims is that AI chatbots will end up repeating those narratives.
“The way AI chatbots work, and training data works, is that they look at the information that’s out there and they’ll see this big flow of information repeating one false claim and will take information from that,” she says.
“It’s kind of playing on search engine optimisation techniques. That’s how ultimately these types of narratives and claims end up in the responses of AI chatbots.”
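
A toy way to see why sheer repetition matters: a purely frequency-based model trained on a corpus flooded with one claim ends up assigning that claim most of the probability. The corpus, claims and counts below are invented for illustration.

```python
# Toy illustration of how repetition in training data shifts what a
# frequency-based model "believes". All documents here are invented.
from collections import Counter

true_doc  = "the report was independently verified"
false_doc = "the report was secretly fabricated"

# A flood of near-duplicate articles repeating the false claim,
# mimicking a network that republishes one narrative at scale.
corpus = [true_doc] * 3 + [false_doc] * 300

# Count what follows "the report was" across the whole corpus.
continuations = Counter(doc.split()[3] for doc in corpus)
total = sum(continuations.values())

for word, n in continuations.most_common():
    print(f"P(next word = {word!r}) = {n / total:.2f}")
# The repeated claim dominates, so a model trained on this corpus is
# far more likely to continue "the report was ..." the false way.
```

Real LLM training is vastly more complicated, but the underlying exposure is the same: volume of repetition, not accuracy, shapes what the model reproduces.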
With a growing number of people relying on such chatbots in their daily lives and for news consumption, Blachez says AI companies should be more transparent.

“There’s a lack of transparency from AI companies who don’t really explain where the data comes from, how it’s vetted, and how AI chatbots recognise the credibility of a source.

“I think individual users have to be very wary of that, and always have a critical mindset and always look at the sources that are cited.”

‘After influence, not lies’

Miller from the Centre for the Analysis of Social Media says autocratic regimes, including Russia’s, tend to focus their disinformation campaigns on “influence, not lies”.
“We see them talking about masculinity and femininity, and patriotism and belonging … These deep, motivating ideas that really get us up in the morning. That’s how influence works.”
With growing reliance on AI for therapy and companionship, Miller says there should be greater scrutiny over the influence it can have on people.
“There’s going to be a whole new kind of skill set that we need for the appropriate kind of relationship to build with an AI,” he says.

“In the way that you want your friends to be right and you care about what they think, whether they’re right or not.”

He adds that, with AI chatbots increasingly used “to paint pictures for people to learn about the world”, they could become the “future of information warfare”.
“You’re going to have people trying to manipulate LLMs, using agency networks to create content, and you’re going to have people training the LLMs using their own automated processes … It’s going to be an incredibly weird form of fight over information integrity.”
In September 2025, WA senator Matt O’Sullivan said in a Senate address that Australia is “missing in action” while other countries are busy establishing legal safeguards around AI.
“The European Union passed the world’s first comprehensive AI act in March 2024, creating clear obligations for developers and protections for citizens. The UK launched its own strategy, including an AI safety institute,” he said.
O’Sullivan argued Australia needs a better legal framework.

“We need frameworks that mitigate serious risks, including sovereign risks, biases, disinformation, propaganda, foreign intellectual interference, online harm, cybercrime and copyright violations.”

Olivia Shen is the director of the Strategic Technologies Program with the United States Studies Centre at the University of Sydney. She tells SBS News that although Australia became one of the first countries to introduce a voluntary AI ethics framework in 2019, there has been “laggard” progress on turning some of the safeguards and codes of conduct into legally enforceable regulations.
“There has obviously been a divergent number of views on it. There are some who believe Australia … would not benefit from having AI regulation that goes too far — that perhaps strangles innovation and prevents Australia from taking the best advantage of the AI economic opportunities,” she says.

“But on the other side of the fence … what are the implications of AI if they’re not used safely and responsibly? And what are the harms that could actually take place? I really think we need to come to a balanced view on this.”

‘A layered approach’ towards disinformation

Shen says that although it may be difficult to put the legal onus on LLM developers, more attention could be given to legislation about misinformation.
“We already have some foundations and frameworks around the issue of misinformation harms. And AI is just a tool that accelerates and pours petrol on the fire of those harms, if you will.”
Shen points to Taiwan as an example of a jurisdiction that is taking “a layered approach” and has strong laws against deepfakes and foreign interference.

“It has been the target of persistent foreign interference originating from China for decades now. Taiwan has accepted, when you have this scale of misinformation, no single intervention is going to be a silver bullet,” she says.

Those interventions work at both a regulatory and a societal level to build resilience against propaganda.
“[Taiwan] has very strong laws also on spreading misinformation [on] certain issues. For example, public safety, food safety, military affairs, and emergency responses. Because those are the really core topics that affect public order.”
Late last year, the previous Albanese government put forward a bill that would have given the Australian Communications and Media Authority legal powers to take down certain content. However, it failed to pass.
It was opposed by the Coalition and the Greens, along with some members of the crossbench, who raised concerns about censorship and about overly constraining the freedom of political communication.

“So that bill was withdrawn … But I do think there is room to open that conversation again in light of what we’ve seen in this year’s federal election and the level of misinformation that we saw,” Shen says.

In a statement to SBS News, a Department of Home Affairs spokesperson said the department’s Cyber Security Strategy is working to create “new initiatives to address gaps in existing laws”.
The spokesperson also confirmed the government plans to develop Australia’s first National Media Literacy Strategy to “establish the key skills and competencies Australians need to navigate the challenges and opportunities presented by the digital world”.

This story was produced in collaboration with SBS Audio. It was part of a research trip hosted by the German Federal Foreign Office in cooperation with the National Press Club of Australia.
