Age assurance (reliably estimating or securely verifying someone’s age) has become an important – and controversial – topic recently. With rising concerns over the safety and well-being of children online, it has been advocated as a possible tool to restrict minors’ access to certain content. The underlying argument is that we also, more or less successfully, limit children’s access to certain content (porn, violence) and products (alcohol, cigarettes) in the physical world. Yet online age assurance raises thorny questions. And privacy is not really one of them.
The Electronic Frontier Foundation opposes any form of age gating online, out of privacy and censorship concerns. Earlier this year I personally signed a joint statement of security and privacy scientists and researchers on age assurance, calling for a moratorium on deployment plans until scientific consensus settles on the benefits and harms that age-assurance technologies can bring, and on the technical feasibility of such a deployment. I have worked on this topic for over a decade now, and have written (together with my colleagues) about the risks of even privacy-friendly forms of identity management before. That work is especially relevant today, as privacy-friendly forms of identity management, and age verification in particular, are entering the market. And even though I firmly believe there is certain content online that should not be accessible to minors, I also believe it is important to step back and carefully consider the options and their ramifications.
Roughly speaking there are four problems. First of all, bad approaches to age verification or age estimation, ones that heavily intrude on our private lives, are already in use today. These put age assurance in a bad light. Truly privacy-friendly forms of age verification do exist, though, and are in fact used in practice as well.
Second, even privacy-friendly technologies for age verification come with restrictions that limit their usefulness in practice. I’ll explain those in some detail below.
Third, current proposals for using age verification are overly broad, aiming to place age restrictions on social media in general and on ‘controversial’ content. In fact, all technologies for age assurance, even the most privacy-friendly ones, carry the risk of function creep: once the infrastructure to reliably enforce age restrictions is in place, it can be imposed on ever larger sets of websites and services.
Finally, age assurance raises important questions about control and digital autonomy, as the global infrastructure needed to implement it is already dominated by large platforms.
Many different methods for age estimation or age verification exist. For age estimation, tools like AI-based behavioural monitoring or biometric scanning are used. This can be done locally on your own device or remotely by the service you use. Either way, age estimation is by its very nature quite privacy invasive, as these tools are often identifying and able to deduce other personal traits beyond age as well. They would enable user tracking across age-restricted services (and worse), so they should not be used.
Age verification systems need to rely on an authoritative source for the age. In the most straightforward implementations, a government-issued identity document must be presented (either scanned or digitally issued) to prove your age. Clearly this mechanism is identifying, again allowing user tracking. More privacy-friendly systems are wallet based. Such a wallet is an app on a smartphone that allows users to download a government ID once as a so-called Attribute-Based Credential, and to selectively show only parts of it (like proving you are over 18) to a service provider online. Such proofs are guaranteed to be non-identifying and in fact unlinkable, making user tracking fundamentally impossible. In fact, such an identity wallet has properties quite similar to using a government-issued ID to prove your age in the real world. Clearly the act of obtaining such an ID is strict and quite identifying. Yet you can use that ID a) without the government that issued it knowing this, and b) without the person behind the bar remembering your name (unless they have a photographic memory). Examples of such privacy-friendly identity management apps are Yivi (developed by colleagues) and the upcoming European Digital Identity Wallet – provided the latter is implemented in the right way.
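The selective-disclosure idea behind such attribute-based credentials can be illustrated with a toy sketch. To be clear: this is illustrative only. It uses salted hashes and a symmetric HMAC as a stand-in ‘signature’, so unlike the real cryptography in wallets like Yivi it is neither unlinkable nor publicly verifiable; all names in the code are made up.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # toy stand-in for the issuer's signing key


def digest(name, value, salt):
    """Salted commitment to a single attribute."""
    return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()


def issue(attributes):
    """Issuer: commit to each attribute with a fresh salt, sign the digest list."""
    salts = {k: secrets.token_bytes(16) for k in attributes}
    digests = sorted(digest(k, v, salts[k]) for k, v in attributes.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                         hashlib.sha256).hexdigest()
    return {"attributes": attributes, "salts": salts,
            "digests": digests, "signature": signature}


def present(credential, reveal):
    """Holder: disclose only the requested attributes (with their salts)."""
    return {
        "disclosed": {k: (credential["attributes"][k], credential["salts"][k])
                      for k in reveal},
        "digests": credential["digests"],
        "signature": credential["signature"],
    }


def verify(presentation):
    """Verifier: check the signature, then each disclosed attribute's commitment."""
    expected = hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return None
    for name, (value, salt) in presentation["disclosed"].items():
        if digest(name, value, salt) not in presentation["digests"]:
            return None
    return {k: v for k, (v, _) in presentation["disclosed"].items()}


cred = issue({"name": "Alice", "birthdate": "1990-01-01", "age_over_18": True})
shown = verify(present(cred, ["age_over_18"]))
# The verifier learns only that age_over_18 holds -- not the name or birthdate.
```

The point of the sketch is the interface, not the crypto: the issuer vouches for all attributes at once, while the holder decides per presentation which ones to reveal. Real attribute-based credentials achieve the same interface with zero-knowledge proofs, which additionally make different presentations unlinkable to each other.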
This shows that (contrary to age estimation) implementing age assurance through age verification does not necessarily violate privacy in a strict sense, and does not enable (state) surveillance by itself. There are still challenges though.
To be effective, age verification should make it difficult enough to fake your age. In the real world, the picture on your government ID card prevents it from being shared with most other people. Online, a similar facial check by the service would be quite invasive. An alternative way to prevent sharing is to bind the wallet and the (age) credentials it contains tightly to one particular device and its owner. Device binding can be done by securely storing the credential in hardware. Holder binding could again be based on biometric verification, but at least that would happen only locally, on your own device.
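A minimal sketch of what device binding buys you, assuming a challenge-response flow: the signing key lives ‘inside’ the device, and every presentation must sign a fresh nonce chosen by the verifier, so a copied credential without the bound device is useless. (Toy only: real secure elements use asymmetric attestation keys; this stdlib-only version has to hand the verifier a symmetric key at registration, which is exactly why real systems use public-key cryptography instead.)

```python
import hashlib
import hmac
import secrets


class ToyDeviceKey:
    """Stand-in for a key kept in a phone's secure element: callers can ask
    it to sign, but real hardware would never export the raw key bytes."""

    def __init__(self):
        self._key = secrets.token_bytes(32)

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def registration_blob(self) -> bytes:
        # Real hardware would export a *public* key here; a symmetric toy
        # has no choice but to hand over the key itself.
        return self._key


def verify_presentation(registered_key: bytes, nonce: bytes,
                        response: bytes) -> bool:
    """Verifier side: the response must sign *this* fresh nonce, proving the
    credential is used from the bound device (no copying, no replay)."""
    expected = hmac.new(registered_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


device = ToyDeviceKey()
registered = device.registration_blob()   # stored by the issuer at enrolment

nonce = secrets.token_bytes(16)           # fresh challenge from the verifier
ok = verify_presentation(registered, nonce, device.sign(nonce))
stale = verify_presentation(registered, nonce, device.sign(b"old nonce"))
```

Here `ok` succeeds and `stale` fails: replaying a signature over an old nonce does not satisfy a fresh challenge, which is what ties the presentation to the physical device at this moment.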
Suppose we agree on using a system for age verification that is truly privacy-friendly, as sketched above. What would that mean in practice?
First and foremost, requiring age checks for minors implies requiring age checks for everyone. Whereas in the physical world a bartender does not need to ask me for my ID because I am clearly over 18, this heuristic does not work online. So everybody who wants to access age-restricted content or services online will need to prove their age.
As age verification systems are based on government IDs, anybody without an official government ID, or with one that is not accepted by the particular age verification system used, is excluded. As the previous paragraph makes clear, this has a much larger impact in the digital domain.
Moreover, even if the age verification system used is, technically speaking, very privacy-friendly, ordinary people may not believe this is the case, and may still suspect that the government is spying on their browsing habits. Especially if age verification is integrated in a more general system for identity management (like the proposed European Digital Identity Wallet) that will also be used to log in to government websites, proving your full identity. In the physical world we have a proper understanding and a more or less complete view of what happens when we show our ID to the bartender (and again, if we look old enough, we don’t even have to); this is completely missing in the digital world. Distrust in the age verification system used may lead to chilling effects, where people avoid age-restricted content. Or it may push people to alternative sites offering similar content without age restrictions.
An important factor affecting practical deployment is the level of assurance that must be achieved. As we saw above, a strict form of device and holder binding implies that wallet-based systems can only be implemented on fairly modern smartphones with proper trusted hardware support. As a consequence, people without such smartphones will be excluded. This includes minors, for whom there already is a significant push to raise the age at which they get their first smartphone. This would make certain age restrictions – like ‘over 13’ – unenforceable in practice. Making device and holder binding less strict may allow simpler devices to be used for age verification, but in essence any wallet-based system requires a smartphone to run.
The level of assurance also affects when one’s age needs to be verified. For account-based online services, the least invasive approach would be to verify the age only when an account is created – and perhaps re-verify it every once in a while. But this makes abuse easy: a minor could simply ask an older friend or relative to prove their age when creating an account. (The privacy-friendly nature of the age verification process prevents the detection of such credential sharing.) Such abuse can only be prevented if the age is verified every single time the account is used to access the service, because an older friend or relative may not be around all the time. But this adds friction to the sign-in process, unless age verification is very tightly integrated into the sign-in process itself, for example because it is integrated into the operating system (as Apple is now rolling out in the UK) or the password manager used. This raises other concerns, however, that I will discuss further below.
From a security standpoint, I would only consider age verification effective if it is actually checked every time, but a pragmatic approach might be to settle for age verification at account creation only, and accept that this allows for age credential sharing. Even such an approach is clearly more effective than a simple ‘I am over 18’ button on a porn site, which according to the European Commission does not comply with DSA requirements! Still, a proper assessment of the actual effectiveness of a particular means of age assurance should be performed before imposing it, given the overall implications of deploying an age assurance infrastructure. This assessment should also take into account the fact that age restrictions for certain content and services depend on jurisdiction, which people may circumvent by using a VPN.
Platforms for age verification, even if they are privacy-friendly, increase the risk of function creep, because it becomes very easy for any service to ask for age information and get a trustworthy answer: no more lying about your age, even in forms on websites that ask for your age ‘just because’. This is especially the case if verifying age is done at the device or operating system layer, without any user interaction.
This is a risk because, even without clear legal requirements, website owners or service providers may feel morally compelled to impose overly broad age restrictions on their content or services.
There are technical defences against these forms of over-authentication. They require any service provider that wants to enforce a certain age limit to obtain a government-issued digital certificate proving it is entitled to enforce it. And they require the wallet to first verify this access certificate with every age verification request, before actually proving the age of the wallet holder. Unfortunately, neither Yivi nor the European Digital Identity Wallet implements the verification of such access certificates. Nor does any other system, as far as I know.
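A sketch of how such an access-certificate check could sit in front of the disclosure step. This is a hypothetical design, not how Yivi or the European wallet work today; an HMAC again stands in for the CA’s public-key signature, and all service and attribute names are invented.

```python
import hashlib
import hmac
import json
import secrets

CA_KEY = secrets.token_bytes(32)  # toy stand-in for the government CA's key


def issue_access_certificate(service: str, entitled: list) -> dict:
    """Government CA: entitle a service to request specific attributes only."""
    payload = json.dumps({"service": service, "entitled": sorted(entitled)})
    sig = hmac.new(CA_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}


def wallet_respond(request: dict, certificate: dict, holder_is_over_18: bool):
    """Wallet side: verify the access certificate *before* disclosing anything."""
    expected = hmac.new(CA_KEY, certificate["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, certificate["signature"]):
        raise PermissionError("access certificate not signed by the CA")
    entitled = set(json.loads(certificate["payload"])["entitled"])
    if not set(request["attributes"]) <= entitled:
        raise PermissionError("service asks for more than it is entitled to")
    # Only now does the wallet answer, and only with the entitled attribute.
    return {"age_over_18": holder_is_over_18}


cert = issue_access_certificate("video-site.example", ["age_over_18"])
response = wallet_respond({"attributes": ["age_over_18"]}, cert,
                          holder_is_over_18=True)
# A request for {"attributes": ["birthdate"]} would raise PermissionError.
```

The design choice worth noting: the gatekeeping happens in the wallet, not at the service. A site without a certificate, or one over-asking beyond its entitlement, gets nothing at all, which is precisely the brake on over-authentication the paragraph above calls for.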
Such a technical measure only truly works if governments really constrain themselves and 1) only issue such access certificates for services for which a legal requirement for age assurance exists, and 2) only impose such legal requirements on services for which there is broad consensus that this is strictly necessary. The current push for imposing age restrictions on access to social media in general is not a good sign in that respect. For some, even messaging systems like WhatsApp and Signal count as social media. And even with a narrower definition of social media, there is a difference between non-algorithmic systems like Mastodon or Bluesky on the one hand, and highly toxic and addictive systems like Snapchat and TikTok on the other. A ‘quick fix’ relying on age verification diverts attention from the much more fundamental and much more important question of how such systems should be designed to be less toxic, less addictive and less unsafe. (And how to force current service providers to apply such designs.)
Perhaps an overarching principle should be enshrined in the law: it is unlawful to impose age restrictions, unless specifically required by law.
Responsibly designed social media can also improve the well-being of children. Whereas the discussion seems to focus on a binary access/no-access decision based on age, we should broaden the scope to include a more fine-grained approach where the age of a user is used to tune the user experience – for example, to offer chat groups or online communities accessible to children only. On the other hand, such safety concerns should not lead to over-protection and censorship of less mainstream content.
Integrating age verification at the device level, in the operating system, creates other issues (as explained more elaborately in this blog). In the case of mobile devices, this outsources the age verification process to two dominant American tech companies: Google and Apple. These companies then get to decide how to verify age, and also which apps and services get access to the verified age.
Even if age verification is implemented as an independent wallet app on a smartphone, this raises digital autonomy concerns, as the dominant smartphone operating systems are Google’s Android and Apple’s iOS. There are Google-free Android alternatives (/e/OS, Graphene, …) that run most Android apps without problems – but apparently not the German implementation of the European Digital Identity Wallet. Such small ‘details’ do matter. (And for inclusiveness reasons, solutions for age verification that do not require a smartphone should be offered as well.)
Calls for restricting access to certain online content and services based on age are getting louder, and some countries are in fact already implementing such bans. While this is certainly a legitimate approach for certain very specific content, there are significant risks to the overly broad application of age gating. Censorship and function creep loom. And shutting children out of all social media throws the baby out with the bathwater.
The speed with which this is happening leads to the use of very invasive technologies for age assurance, based on biometrics or behavioural profiling to estimate age. These carry significant surveillance and privacy risks. Wallet-based age verification systems in theory offer a privacy-friendly approach, provided they are implemented well and offer technical countermeasures to prevent over-authentication. None of the currently available systems do, unfortunately. And even if they did, age verification systems would also require solid legal guardrails to reduce the risk of function creep.
The crux is that there are privacy-friendly ways to do age assurance online – so privacy concerns are largely a red herring in this discussion. Effectiveness, chilling effects, over-authentication, function creep, censorship, digital autonomy, and even usability are the relevant topics we should be talking about.