In recent weeks, a new trend on TikTok has been causing quite a stir, prompting frantic 911 calls from people who believe their homes have been invaded. The trend leverages artificial intelligence to fabricate images or videos that seem to depict a “homeless man” intruding into homes, rummaging through refrigerators, or even lounging in beds.
Those engaging in this prank send these highly realistic AI-generated videos to unsuspecting loved ones, who, upon viewing them, are convinced of their authenticity. This has led to an uptick in emergency calls as concerned individuals report these seemingly real home intrusions.
According to The New York Times, police departments in at least four states have been on the receiving end of these false alarms, discovering upon investigation that the “intruders” are merely digital creations. The West Bloomfield Police Department, located near Detroit, Michigan, has noted several such incidents, emphasizing the strain this puts on emergency services.
The Yonkers Police Department in New York has also voiced concerns, using a Facebook post to highlight the dangers of this prank. “When officers respond at high speeds with lights and sirens, thinking there’s a real threat, only to find out it’s a prank, it diverts critical resources,” they explained. “Moreover, it’s a significant safety hazard, not just for responding officers but also for residents if officers rush into a house expecting to confront an intruder who isn’t actually there.”
Authorities are urging individuals to think twice before participating in this trend, as it poses a serious risk and undermines the efforts of emergency responders who are dedicated to real-life crises.
“It’s frustratingly easy to do,” said Greg Gogolin, a professor and the director of cybersecurity and data science at Ferris State University. He created a program in a couple of hours to show how AI technology can manipulate images.
“This is a natural language processing, machine learning program called face swapping,” Gogolin said.
The program made the images look realistic by taking features from one person’s face and combining them with other images.
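To illustrate the idea (this is not Gogolin’s program), here is a minimal sketch of a classical face swap built from open-source tools. It assumes OpenCV and dlib are installed, that dlib’s publicly available 68-point landmark model file is on disk, and that source.jpg and target.jpg are hypothetical single-face photos: the face from the source image is warped onto the face in the target and blended in.

```python
# Minimal classical face-swap sketch: dlib landmarks + OpenCV seamless cloning.
# Assumes shape_predictor_68_face_landmarks.dat (from dlib's model releases)
# and two hypothetical single-face photos, source.jpg and target.jpg.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(img):
    """Return the 68 facial landmark points for the first detected face."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    assert len(faces) > 0, "no face found"
    shape = predictor(gray, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.int32)

src = cv2.imread("source.jpg")   # the face to copy
dst = cv2.imread("target.jpg")   # the scene to paste it into

src_pts, dst_pts = landmarks(src), landmarks(dst)

# Warp the source image so its facial landmarks line up with the target's.
matrix, _ = cv2.estimateAffinePartial2D(src_pts.astype(np.float32),
                                        dst_pts.astype(np.float32))
warped = cv2.warpAffine(src, matrix, (dst.shape[1], dst.shape[0]))

# Mask off the target's face region (convex hull of its landmarks).
mask = np.zeros(dst.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, cv2.convexHull(dst_pts), 255)

# Poisson (seamless) cloning blends the warped face into the target,
# matching lighting and color along the boundary.
center = (int(dst_pts[:, 0].mean()), int(dst_pts[:, 1].mean()))
output = cv2.seamlessClone(warped, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", output)
```

Modern deep-learning face swappers produce far more convincing results than this, but even a crude pipeline like the one above shows how little code the basic operation requires.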
Once a technology like this is developed, it often gets used in ways the original creators never intended.
“They share that out or sell it. … It’s dispersed and that’s where the real danger is because people without any technical background can then utilize that the way they wish,” Gogolin said.
In some cases, there are visual cues that can indicate an image is AI-generated.
“You might generate something and an arm will be off, the elbows are in the wrong place. It used to be you would often see people with like three arms. A long arm, a long leg, the dynamics were not correct. A lot of that has been corrected or at least drastically improved with the newer versions,” Gogolin said.
Gogolin said investigators and law enforcement also need more advanced training. “There are very few degreed investigators that have a cybersecurity background, let alone a computer science background, particularly at the local level, even at the state level.”