Lawmakers across the political spectrum at both the federal and state levels are increasingly proposing legislation aimed at creating additional age restriction and “assurance” measures. These measures are branded as a means to mitigate the perceived negative impacts that digital services may have on younger users. Many of these proposals would require organizations to provide “age estimation” or “age verification” for users under a certain age. While on the surface this may appear to be a simple act of due diligence on the part of the businesses that provide such services, estimating or verifying a user’s age is a far more complicated undertaking, rife with concerns about feasibility, privacy, and unintended consequences for the very users these proposals are intended to benefit and protect.
The Current State of Play – “Age Attestation”
Currently, digital services use a variety of tools to comply with standards established by the Children’s Online Privacy Protection Act (COPPA). COPPA requires digital services to provide specific protections for users under 13 years of age – an age threshold established as a bright line after a lengthy negotiation process that weighed both the rights of younger users and businesses’ compliance burden. When COPPA went into effect, some businesses shut down services rather than navigate the regulatory complexity of collecting data from users under 13 – it became easier to simply not serve this population. Unlike COPPA, recent legislative proposals lack specificity and flexibility for businesses: whereas under COPPA a business could choose whether to shut down child-directed services or develop specific guardrails for them, these newer bills would require businesses to employ age verification or estimation measures to avoid liability, regardless of the nature of their service.
So how do those businesses determine whether a user is at least 13 years old and allowed to access their services? Many rely on age “self-attestation”: when a user attempts to create an account, they must agree to terms of service that generally include a provision requiring the user to attest to being at least 13 years old. This is similar to a website offering alcoholic beverages requiring the user to check a box asserting they are at least 21 or to input their date of birth. The onus is on the user to be forthcoming and honest, and the business is not held liable for any misrepresentation. The business does not collect any additional information to confirm that the data the user entered is correct or that their age assertion is accurate. If the business believes a user has been dishonest about their age and falls below the age requirement, that is a violation of the terms of service, allowing the business to deny service or access to that user. However, it is unlikely that such self-attestation mechanisms would remain an acceptable means of determining a user’s age under recently enacted laws and pending proposals.
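To make the mechanics concrete, the sketch below shows what a self-attestation gate typically looks like under the hood: the service computes an age from a self-reported date of birth and simply trusts the answer. This is an illustrative assumption of the general pattern, not any particular service’s implementation; the function names are hypothetical, and only the 13-year threshold comes from COPPA.

```python
from datetime import date

COPPA_MIN_AGE = 13  # COPPA's bright-line threshold

def age_from_dob(dob: date, today: date | None = None) -> int:
    """Compute age in whole years from a self-reported date of birth."""
    today = today or date.today()
    # Subtract a year if this year's birthday has not yet occurred.
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return today.year - dob.year - (0 if had_birthday else 1)

def may_create_account(self_reported_dob: date) -> bool:
    """Self-attestation gate: the service trusts the user's own claim.

    No corroborating data is collected; a dishonest answer is a
    terms-of-service violation, not something the service detects.
    """
    return age_from_dob(self_reported_dob) >= COPPA_MIN_AGE

# A user claiming a 2005 birthdate passes, whether or not the claim is true.
print(may_create_account(date(2005, 6, 1)))  # True
```

The gate’s weakness is also its privacy virtue: because nothing beyond the claimed birthdate is collected, there is nothing sensitive to store or breach.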
“Age Estimation”
In 2022, California Governor Gavin Newsom (D) signed AB 2273, the “Age-Appropriate Design Code” (AADC), into law. The law is modeled on the United Kingdom’s Age Appropriate Design Code, and similar bills have since been introduced (though not enacted) in several other states, including Maryland, Minnesota, Nevada, and New Jersey. Among other provisions, the California law would require “a business that provides an online service, product, or feature likely to be accessed by children” to “estimate the age of child users with a reasonable level of certainty.” But what counts as an age estimate made with a “reasonable level of certainty”? And what can and should businesses rely on for that estimation in light of the liability the law establishes?
Some third-party vendors offer age estimation or age assurance services that rely on facial analysis or other technological means. There are considerable concerns about whether and how these services collect and retain personally identifiable information, and whether they employ other facial recognition tools.
The California AADC would authorize the state attorney general to seek an injunction or civil penalty against any business that violates its provisions, with liability of up to $2,500 per affected child for each negligent violation or $7,500 per affected child for each intentional violation. Because penalties accrue per affected child, exposure scales quickly: a single negligent violation affecting one million child users could mean $2.5 billion in potential liability. To achieve compliance and retain enough evidence to avoid those penalties, businesses would likely treat age estimation as age verification in practice. And this wouldn’t just apply to the younger users the AADC intends to protect – it would require the collection of additional sensitive data about all internet users.
It is worth noting the difference in enforceability between California’s AADC and the U.K. AADC. In the U.K., a business can comply with U.K. law without following the U.K. AADC: the code was prepared by the U.K. Information Commissioner’s Office to meet its obligation under the U.K. Data Protection Act (“DPA”) to issue codes of practice, but the DPA explicitly states that a “failure by a person to act in accordance with a provision of a code issued under section 125(4) does not of itself make that person liable to legal proceedings in a court or tribunal.” U.K. legislators also avoided imposing “age verification” or similar higher thresholds on organizations, recognizing the tension between higher accuracy and further data collection.
“Age Verification”
While the California AADC and similar state proposals requiring “age estimation” would likely amount to “age verification” once coupled with liability provisions, some states, such as Arkansas and Utah, have passed laws that explicitly mandate age verification for online users.
Age verification requirements raise questions about conflicts with data minimization principles and other consumer privacy protections, including statutes aimed at protecting biometric privacy. To conduct age verification effectively, businesses would have to collect additional data about users – including collecting and storing geolocation data to confirm that a user resides in the state whose law applies – resulting in additional volumes of data specifically about children. And because a service must first distinguish adult users from “minor” users, that additional collection would effectively extend to all users. Further, parents or guardians of younger users would likely need to provide sensitive financial information (e.g., a credit card) and personally identifiable information (e.g., a government identification card) when consenting to and completing age verification on behalf of younger users.

France’s data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), analyzed several existing online age verification solutions and found that none of them could satisfactorily meet three key standards: 1) providing sufficiently reliable verification; 2) allowing for complete coverage of the population; and 3) respecting the protection of individuals’ data, privacy, and security.
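To see the data-minimization conflict concretely, consider the kind of record a verification flow of this sort would have to assemble and retain. The sketch below is a hypothetical illustration – the field names are assumptions for exposition, not a schema from any statute or vendor.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgeVerificationRecord:
    """Hypothetical record a business might retain to prove compliance.

    Every field is sensitive data the business would not otherwise
    collect, and retaining it runs against data-minimization principles.
    """
    user_id: str
    government_id_scan: bytes          # e.g., a driver's license image
    date_of_birth: str                 # extracted from the ID document
    geolocation: tuple[float, float]   # to confirm in-state residency
    guardian_card_last4: str           # parental-consent payment check
    verified_at: datetime              # retained as evidence of compliance

# The liability incentive points away from deletion: this record is the
# business's proof that verification occurred, so it tends to be stored.
```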
Age verification requirements could also create disproportionate challenges for people in lower-income communities who may be unbanked, for those who lack a valid government identification, and for children living in homes where parents or guardians are not technologically literate.
Businesses may be forced to collect personal information they don’t want to collect and consumers don’t want to give, and that data collection creates additional privacy and security risks for everyone. This is compounded when verification measures are, again, tied to liability: to prove they complied with verification requirements, businesses would be incentivized to store user data rather than delete it once verification is complete.
Conclusion
While mandating age assurance in exchange for access to services may on its surface appear to be a simple tool for implementing additional safety measures for younger users, it is much more complicated, and lawmakers should understand what else may be at stake. When the Communications Decency Act was passed, it attempted to sort the online population into kids and adults for different regulatory treatment. That requirement was struck down as unconstitutional, in part because of its infeasibility. More than 25 years later, age authentication remains a vexing technical and social challenge.
Creating safeguards for younger users is a commendable goal, but these mechanisms risk being counterproductive and may create serious privacy problems of their own.