ChatGPT Accused of Ignoring Tumbler Ridge Mass Shooting Threat: Lawsuit Unveils Shocking Allegations

The family of a girl gravely injured during a mass shooting in British Columbia, Canada, has initiated legal action against OpenAI. They argue that ChatGPT, a product of the artificial intelligence company, had knowledge of the 18-year-old transgender shooter’s intentions yet failed to alert law enforcement.

The assailant, identified as Jesse Van Rootselaar, carried out a tragic attack in February, taking the lives of his mother and 11-year-old brother before targeting Tumbler Ridge Secondary School. Dressed in female attire, he ultimately claimed the lives of eight individuals, six of whom were children. As authorities closed in, he took his own life.

This raises the question: Could the tragedy have been prevented if ChatGPT had disclosed what it allegedly “knew”?

MORE: Father of Transgender Canadian Mass Shooter Speaks on His ‘Son’ and the Heartbreak He Caused

New: Canadian Mass Shooter Identified as Transgender, Authorities Strive to Avoid ‘Misgendering’

An initial ChatGPT account linked to the suspect, 18‑year‑old Jesse Van Rootselaar, was banned by OpenAI in June 2025 due to the nature of his conversations with the chatbot, but Canadian police were not notified.

OpenAI told the BBC it was committed to making “meaningful changes” to help prevent similar tragedies in the future.


The story is chilling. Artificial intelligence is rapidly changing the world, for the better and for the worse, but one thing it does not have is feelings. If it did, it presumably would have done anything in its power to stop the bloodshed:

The civil lawsuit, brought by Gebala’s mother Cia Edmonds, alleges Rootselaar set up an account with ChatGPT before she turned 18 – something users can do with parental consent.

The plaintiffs allege no age verification took place on the site.

The lawsuit claims the suspect saw the chatbot as a “trusted confidante” and described “various scenarios involving gun violence” to it over several days in late spring or early summer 2025.

Twelve OpenAI employees then reportedly flagged the posts as “indicating an imminent risk of serious harm to others” and recommended that Canadian law enforcement be informed, the lawsuit alleges.

Instead, it is alleged the request to contact the authorities was “rebuffed” and the only action taken was to ban Rootselaar’s account.

Her family posted video updates: ‘Still fighting, still with us.’

WSJ: OpenAI flagged sh**ter’s ChatGPT for gun violence scenarios in June 2025; staff debated but didn’t alert authorities. Account banned.

Prayers for Maya & the community.

Although the company banned Rootselaar’s account for violent content, he was able to open another one. Meanwhile, OpenAI said it did not notify the police because it saw nothing that met “its threshold of a credible or imminent plan for serious physical harm to others.”

The lawsuit says otherwise, alleging that the company “had specific knowledge of the shooter’s long-range planning of a mass casualty event” but “took no steps to act upon this knowledge.”

It’s a horrible story, and our hearts go out to Maya, her family, and all who were left heartbroken. Even if the lawsuit succeeds, it will not bring back the children or undo the damage, but hopefully it will put AI companies on notice that they need to be on high alert for extremism and psychosis among their users.


Editor’s Note: Do you enjoy RedState’s conservative reporting that takes on the radical left and woke media? Support our work so that we can continue to bring you the truth.
