Access to three leading deepfake websites has been restricted for Australians after the platforms were found to have been used to generate AI-manipulated images of schoolchildren.
UK-based company Itai Tech restricted Australian access to its websites following a warning from the eSafety Commission, which threatened legal action and fines of up to $49.5 million over the company's non-compliance with Australian online safety standards.
Itai Tech was informed that its platforms had been implicated in serious cases in which Australian schoolchildren created deepfake pornographic images of their classmates.
These websites enable users to upload photos of real individuals, including minors, and transform them to appear undressed or in suggestive scenarios, such as being dressed as schoolgirls, in lingerie, or in BDSM attire.
Itai Tech’s services are popular worldwide, with the eSafety Commission estimating around 100,000 monthly visits from Australia alone.
“We know ‘nudify’ services have been used to devastating effect in Australian schools, and with this major provider blocking their use by Australians we believe it will have a tangible impact on the number of Australian school children falling victim to AI-generated child sexual exploitation,” Commissioner Inman Grant said.
“We took enforcement action in September because this provider failed to put in safeguards to prevent its services being used to create child sexual exploitation material and were even marketing features like undressing ‘any girl,’ and with options for ‘schoolgirl’ image generation and features such as ‘sex mode’.”
Itai Tech also blocked UK users from its website after it was fined £50,000 ($101,000) earlier this month for not having age checks.
Global AI model hosting platform Hugging Face has also changed its terms of service after some models were misused to create deepfake sexual exploitation material of real children and survivors of sexual abuse.
Following a warning from the eSafety Commission, the platform has instructed all account holders to take steps to minimise the risks associated with the models, specifically to prevent the generation of child sexual exploitation or pro-terror material.
eSafety said it was targeting consumer AI tools, as well as the underlying models that power them and the platforms that host them.