By Lisa Given*
The South Australian government is moving ahead with plans to ban children under 14 from social media. Under the proposal, teens aged 14 and 15 would also need parental consent to have social media accounts.
South Australian Premier Peter Malinauskas has flagged that the consequences for social media companies that don't comply with the new rules would be "severe and harsh".
He discussed the proposal with other state premiers and Prime Minister Anthony Albanese at a cabinet meeting last week, and is encouraging national adoption of the proposed strategy.
Malinauskas has attributed the decision to “mounting evidence” of the “adverse impact” of social media on young people. This comes despite a lack of consensus among experts, with some researchers explaining there is “not a strong evidence base” of the harms social media pose to young people.
Similar laws exist elsewhere
South Australia’s move follows similar laws introduced elsewhere. In the United States, both Florida and Texas have passed similar legislation.
Like South Australia’s proposal, Florida banned children under 14 from social media, requiring parental consent for 14- and 15-year-olds.
In Texas, all teens under 18 now need parental consent to create social media accounts. The move is not without controversy: one commentator described it as a "misguided attempt to make the internet 'safe'" via a law that "infringes on the rights of all Texans".
In Spain, the minimum age for setting up a social media account increased earlier this year from 14 to 16. Technology companies were also required to install age verification and parental controls on social media and video-sharing platforms.
When South Australia first proposed its ban in May, comments from the community were swift and polarised. At the time, I examined the limitations and potential problems with the technical solutions being proposed for such a ban, including privacy concerns for managing account holders’ data.
So how will this proposed ban work?
The legislation will impose a “duty of care” on social media companies, requiring them to ban children under 14 from social media platforms.
This means Instagram, TikTok, Facebook, Snapchat and other platforms would need to take "all reasonable steps" to prevent access by any South Australian child under the age of 14. They'd also have to ensure teens aged 14–15 could only access platforms with parental consent.
Bans and limited access would be overseen by a state regulator, which would monitor compliance and impose sanctions such as:
- warnings, infringement notices and fines
- court proceedings that impose corrective orders or civil penalties.
Legal action could also be taken against providers by either a regulator or parents on behalf of a child who has suffered significant mental or physical harm.
The proposed ban would also provide "exemptions" for beneficial or low-risk social media services (such as educational platforms), though these services are yet to be identified.
What are the challenges of getting this to work?
While exemptions may relieve concerns for those opposed to an outright ban, it’s unclear how specific exemptions would be agreed upon, or how “low-risk” content would be defined.
Another significant challenge is the process by which children’s ages and parental consent mechanisms would be identified and tracked.
Age assurance and verification processes are not foolproof. They rely on strategies like self-reporting (which is easily circumvented), age verification by an adult (which raises privacy concerns for young people), or steps like uploading government ID (which raises data security concerns).
What is also unclear is how social media companies will respond to this latest move to force them to control platform access.
In other jurisdictions with similar bans – such as Florida and Spain – these companies have been notably silent. It may well be that to test the long-term viability of these bans, individuals and governments will need to take social media companies to court to prove the platforms have harmed children.
*Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
6 Comments
I think it would be more successful to prepare and legislate a 'design guide' detailing what is and isn't an acceptable standard for all social media platforms, and then ban those unable to prove compliance with that design guide. It does stifle innovation, but it is that innovation that is, in many cases, causing the harmful effects.
Golly. Just watched the news item on this. They want to use facial recognition? Seriously? Big brother will track you forever. Perhaps Orwell had it wrong and it is 'big tech' we should be worried about and not our governments? (We need to be extremely wary of both btw.)
Methinks the reaction to this issue is akin to the scares over 'heavy metal music', i.e. moral panic.
They already have to do the same for banks, operating vehicles, power accounts etc. Doing it for another form of tech (that has caused great harm) is not new.
Will it be 100% foolproof? No, it never can be. Likewise, people drive without licences all the time, and so do children. It will be another tool in the box. The whole irony, though, is that they are making this "think of the children" hand-wringing law while completely allowing even worse hate speech that openly causes direct harm and incites physical violence.