
The Hidden Costs and Data Vulnerabilities of the GUARD Act

As Congress considers the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, policymakers face a critical juncture in technology regulation. While safeguarding younger internet users remains an essential priority for the technology sector, the proposed legislation introduces a fundamentally flawed regulatory architecture that risks stifling American leadership in artificial intelligence while undermining user privacy. Most importantly, the GUARD Act's provisions would produce the opposite of their intended outcome: making children less safe online.

The GUARD Act would immediately cut off access to most AI chatbots, companions, and other tools for all users who have not verified their ages. Because the bill’s definitions are overbroad and vague, they could apply to almost any dynamic text generator, forcing age verification on everyday tools like search engine summaries, homework aids, customer service bots, and even internet service providers or blogs with comment sections. If a future court deems a company’s verification measures insufficient, the bill imposes steep civil and criminal penalties. These requirements raise serious privacy, access, and safety compliance concerns, which have led numerous courts across the states to strike down similar bills.

Starting with privacy, the bill’s verification mandates run counter to several basic security tenets. To comply with this law, covered entities would have to constantly collect and store sensitive personal information, such as facial scans or government IDs. This contradicts the common-sense practice of collecting as little personal data as possible, especially for kids online, who deserve even stronger privacy protections. Forcing companies to build giant databases of user identities would create gold mines for hackers and identity thieves. It would also lead to dangerous outcomes for younger internet users: research indicates that when pushed, younger users migrate toward non-compliant, darker corners of the internet where they are exposed to far more problematic content.

Beyond its massive security vulnerabilities, age verification raises serious free speech concerns. Requiring covered entities to independently verify the identity of every single user, adult or child, would destroy the right to online anonymity. Courts have long recognized that forcing people to show a government ID to access information chills free, safe, and private internet browsing. In sum, if adults are forced to upload a government ID or submit to a biometric face scan to use most common AI search tools, they will likely just not use them at all or self-censor to avoid looking up sensitive topics like medical conditions or political issues.

The bill also relies on legally unclear definitions and compliance mechanisms, which would force companies to spend considerable time asking courts, rather than trust and safety experts, for guidance on how to protect kids online. For example, the bill offers no way for companies to determine which age verification methods, if any, are “reasonable.” This is further complicated by the high error rates of current age verification technologies. Likewise, the bill fails to concretely define key terms such as “AI companion” or “AI chatbot,” making it unclear which products or services are even implicated and when. Even when a business definitively falls within the law’s scope, it is difficult to objectively classify the tone of AI responses to users, or to determine when those responses “induce” a user toward a given course of action, adding more confusion.

Taken together, until costly litigation establishes more concrete guidelines, the bill’s steep civil and criminal penalties will incentivize companies to limit product functionality to avoid liability rather than invest in better safety measures. Such provisions risk arbitrarily and inconsistently penalizing companies and severely limiting their products’ scope, undermining the tech industry’s future.

While keeping children safe online is a vital shared priority, the GUARD Act relies on a deeply flawed approach that undermines privacy, civil liberties, and safety. Moreover, the legislation ignores that a majority of teens already use chatbots in productive and safe ways, viewing them positively, for example, as learning aids. Policymakers should instead embrace workable solutions: giving families stronger controls over their own devices, funding digital safety education, and supporting privacy-first safety tools are much smarter paths forward. These measures would protect younger users without hindering American innovation.
