To this end, the government has been funding an age assurance technology trial, and today some preliminary findings have been released to show whether it can be done.
In those preliminary findings, the team behind the trial states plainly that “age assurance can be done in Australia and can be private, robust and effective”.
This is a win for the government, which would be hoping for a clear path forward for social media apps to verify the ages of users looking to create accounts.
“The preliminary findings indicate that there are no significant technological barriers preventing the deployment of effective age assurance systems in Australia,” project director Tony Allen said.
“These solutions are technically feasible, can be integrated flexibly into existing services and can support the safety and rights of children online.”
The vast majority of the companies that made submissions to the trial operate facial verification systems or something similar.
Some use hand-movement gestures, while others combine multiple options, from facial verification to ID verification or parental/guardian authorisation.
Critically, this trial doesn’t set out to directly inform the government’s social media ban for kids under 16, so the ban isn’t widely documented in the report. The reality, though, is that the ban is almost the entire purpose of this trial, and its failure to address the core principles of that ban is a problem for the age assurance trial, and for the government, going forward.
For example, one submission to the age assurance technology trial reports that “our internal results suggest that we have +99 per cent accuracy to estimate 13-16 year olds as below 21 years of age, and +99 per cent accuracy to estimate 0-12 year olds as below 16 years of age.”
In other words, the system can say with near-certainty that a 12-year-old is under 16, but can only say with the same confidence that a 15-year-old is under 21.
Another company has exhaustive documentation online about its systems, and states clearly that its “mean absolute error”, the average gap between estimated and actual age, is 1.3 years for 13-to-17-year-olds.
That means, in simple terms, that a 16-year-old will typically be estimated as young as 15 or as old as 17, and individual errors can be larger still.
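To see what a 1.3-year mean absolute error means at a hard 16th-birthday cutoff, here is a rough simulation sketch in Python. It assumes the estimator’s error is normally distributed with an MAE of 1.3 years; the error model, the sigma conversion and the function names are illustrative assumptions, not anything published by the trial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: estimation error is normally distributed.
# A normal distribution with this sigma has a mean absolute error of
# roughly 1.3 years (MAE = sigma * sqrt(2 / pi)).
sigma = 1.3 * np.sqrt(np.pi / 2)  # about 1.63 years

def estimate_age(true_age: float, n: int = 100_000) -> np.ndarray:
    """Simulate n independent estimates of one user's age."""
    return true_age + rng.normal(0.0, sigma, n)

# Compare a child of 15 years 10 months with a child of exactly 16.
for true_age in (15 + 10 / 12, 16.0):
    passed = np.mean(estimate_age(true_age) >= 16.0)
    print(f"true age {true_age:.2f}: estimated as 16 or older {passed:.0%} of the time")
```

Under this assumed model, both the 15-year-10-month-old and the 16-year-old clear the threshold roughly half the time, which is the delineation problem in a nutshell.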
Perhaps most critically, policymakers and those testing this technology should put themselves in the shoes of both sides of the coming battle – kids who want access to social media but are barred from it by law, and social media companies that are required to restrict access to those under 16.
If a child is 15 years and eight months old, how can TikTok work that out?
There’s no evidence that any of the verification methods being tested can reliably distinguish a child of that age from a 16-year-old.
And when that child turns 16, literally on their birthday, they are going to want access.
The law requires that users cannot be forced to use a government ID, so social media apps must offer another method of verification.
The trial, and the submissions to it, don’t seem to offer any certain method for that – certainly not one that will help a 16-year-old on their birthday.
There are a couple of providers as part of the trial that look to parental verification or consent systems.
These would require the parent to pass the age test or use ID to then enable the child’s account.
Interestingly, the report suggests there is limited evidence that these systems can cope with children’s evolving capacities, or that they enhance children’s rights to participate.
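For what it’s worth, the flow those providers describe is simple to reason about in code. Here is a minimal, hypothetical sketch of a parental-consent gate in Python; the class and method names are invented for illustration and don’t correspond to any provider’s actual system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: names are illustrative, not any provider's API.

@dataclass
class ChildAccount:
    handle: str
    enabled: bool = False

@dataclass
class ParentalConsentGate:
    """A parent first passes their own age check (estimation or ID),
    then explicitly authorises the child's account."""
    verified_parents: set = field(default_factory=set)

    def verify_parent(self, parent_id: str, passed_age_check: bool) -> None:
        # The parent proves they are an adult via the provider's own check.
        if passed_age_check:
            self.verified_parents.add(parent_id)

    def authorise(self, parent_id: str, account: ChildAccount) -> bool:
        # Only a verified parent can enable the child's account.
        if parent_id in self.verified_parents:
            account.enabled = True
        return account.enabled

gate = ParentalConsentGate()
gate.verify_parent("parent-1", passed_age_check=True)
child = ChildAccount("kid_2009")
print(gate.authorise("parent-1", child))  # True: account enabled by the parent
```

Note that a gate like this verifies the adult, not the child – it says nothing about how old the child actually is, which is the report’s concern about evolving capacities.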
What’s next for the social media ban?
These trials will continue, with some third-party provider authorisations or endorsements likely to come – allowing social media companies to then integrate those systems into their account-creation flows.
Time is running out though.
The ban comes into effect in around six months, and we’re still a long way from any certainty about the process or from finalised guidelines.
For example, is YouTube banned? Its Shorts platform is identical to TikTok and Instagram Reels and is as high-risk for kids as any other part of the internet.
Additionally, what is the government doing to protect kids from Twitch streamers with inappropriate content, or kids streaming on Twitch being targeted by inappropriate comments and users?
Has the government even heard of Discord, and does it know the risks it poses to kids – risks arguably far greater than those of some of the social media platforms named so far?
The core principle we need to solve before the ban
How can an app tell the difference between a child who is 15 years and 10 months old, and a child who is 16 years old?
Until someone answers that question, the problem rests solely on the shoulders of parents, and social media apps can’t be expected to somehow make that delineation on a user-by-user basis.