
FTC Hearings #12: The FTC’s Approach to Consumer Privacy Day 1

The FTC held its 12th Hearing on Competition and Consumer Protection on April 9 and 10, 2019. The overall theme of the two-day hearing centered on the FTC’s approach to consumer privacy. This is the 12th part of a series covering the FTC Hearings; coverage of the second day of this hearing can be found here, and the collection of posts in the series can be found here.

As at the last hearing, FTC Chairman Simons presented opening remarks. Simons emphasized both the positives and negatives of data collection, stating that “we live in an age of technological benefits powered by data.” He made clear that we need to re-evaluate our approach to privacy in a shifting world.

Panel 1 – Goals of Privacy Protection

The first panel of the two-day hearing began with moderator James Cooper, Deputy Director for Economic Analysis at the FTC, outlining three key questions:

  1. What do consumers want?
  2. Is there some reason firms aren’t responding?
  3. Is there something the government can do to improve things?

Cooper went on to discuss the “privacy paradox” (the paradox being that surveys suggest privacy is highly important while revealed preferences indicate the opposite), whether the government should take action, and what harms any privacy regulation should be directed at.

The first panelist to speak was Neil Chilson, Senior Research Fellow for Technology and Innovation at the Charles Koch Institute. Chilson stated that to tackle such questions we first need to define what we mean by privacy. He proposed thinking about privacy in an abstract manner: as the effect of a constraint on someone else’s use of data or information about you. Chilson stated his belief that we tend to use harm as the measure by which we assess breaches of privacy, and that government intervention is more justified the closer one gets to objective harm. He also argued that we do ourselves a disservice when government attempts to solve problems farther from that objective core of harm, as applying the harm standard there can make those problems worse.

Paul Ohm, Professor of Law at Georgetown University, disagreed with Chilson’s abstract definition of privacy, arguing that it is somewhat outdated and that we need to update the kinds of harms the FTC is seen as capable of addressing. Further, Ohm stated he doesn’t believe in the privacy paradox but rather in the “privacy paradox paradox”: it’s not surprising that people give up their privacy when they can’t understand what exactly they are giving up and getting in return. Chilson agreed with Ohm that there is not really a privacy paradox, just a failure to fully understand consumers’ choices.

Alastair Mactaggart, Chairman of Californians for Consumer Privacy, made clear that humanity is trying to address a side effect of simply living our lives in the world. Mactaggart additionally stated that the notion that consumers don’t have to use the technology from which much of our data collection worries originate is misleading at best. Ohm went further, arguing for more rigorous evaluation of claims from the other side that greater privacy restrictions would bring our world to a halt, and Mactaggart agreed with Ohm’s argument that stronger privacy protections would not necessarily harm innovation.

Panelists were also asked what the form of government intervention would or should look like: would it require more ex post or ex ante regulation? Ohm believes we need more of both; however, he also stated his desire to avoid overly broad national regulation. Chilson outlined his support for ex post regulation, which he argued has many virtues over ex ante regulation: it need not attempt to address potential future harms, only harms in the present, allowing it to be specific and targeted rather than abstract. Mactaggart discussed his efforts on the California Consumer Privacy Act (CCPA) and its opt-out regulatory style as one positive example of government regulation.

Panel 2 – The Data Risk Spectrum: From De-Identified Data to Sensitive Individually Identifiable Data

The second panel was kicked off by Jules Polonetsky, CEO of the Future of Privacy Forum, who gave a presentation on de-identification. Polonetsky discussed the dangers of linkage attacks, direct versus indirect identifiers, and the concepts of differential privacy and secure multiparty computation. Michelle Richardson, Director of the Data and Privacy Project at the Center for Democracy and Technology (CDT), spoke about CDT’s policy recommendations for data regulation on issues such as de-identification and data protection. Richardson posed three key criteria for defining sensitive data: Is the information intimate, used for high-stakes decisions, and immutable? She proposed heightened protections for such sensitive data, giving examples such as precise geolocation, biometrics, health information, and children’s information.
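To give a flavor of one of the techniques Polonetsky mentioned, here is a minimal sketch of differential privacy applied to a counting query (the dataset, predicate, and epsilon value are hypothetical, invented for illustration and not drawn from the hearing): the true count is perturbed with Laplace noise scaled to the query’s sensitivity, so no single individual’s presence or absence can be confidently inferred from the answer.

```python
import random

def dp_count(records, predicate, epsilon):
    """Return an epsilon-differentially private count.

    A counting query has sensitivity 1 (adding or removing one
    record changes the count by at most 1), so Laplace noise with
    scale 1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical dataset: ages of survey respondents.
ages = [23, 35, 41, 29, 52, 34, 47]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for protection, which is the balancing act several panelists alluded to.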

Panelists were first asked to speak specifically on the nature of de-identification and who controls de-identified data. Aoife Sexton, Chief Privacy Officer of Trūata, focused on issues of data control, giving the example of the European General Data Protection Regulation (GDPR) and the challenge it poses to data analytics. Deven McGraw, General Counsel and Chief Regulatory Officer at Ciitizen, talked about how users want some control even over their de-identified data. McGraw suggested treating de-identified data similarly to identified data, as de-identified data still raises risks of re-identification, misuse, and abuse. McGraw stated the key is to be transparent and give people choices even with de-identified data.

Shane Wiley, Chief Privacy Officer at Cuebiq, agreed with McGraw that there are risks that de-identified or anonymized data can be linked back to publicly available data. He also used his time to discuss what makes information sensitive, describing a system of data categorization with three categories: known non-sensitive, known sensitive, and unknown. Wiley further raised the point that what counts as sensitive data is often subjective. There was general agreement from the panel that baseline protections for certain types of data are a good starting point.
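The linkage risk the panelists describe can be sketched in a few lines (all records and names below are invented for illustration): even after direct identifiers are stripped, joining a “de-identified” dataset to a public one on shared quasi-identifiers such as ZIP code and birth year can re-attach identities.

```python
# Hypothetical "de-identified" records: names removed, but the
# quasi-identifiers (zip, birth_year) remain.
deidentified = [
    {"zip": "20005", "birth_year": 1984, "diagnosis": "asthma"},
    {"zip": "20005", "birth_year": 1990, "diagnosis": "diabetes"},
]

# Hypothetical public records (e.g. a voter file) that include names.
public = [
    {"name": "A. Smith", "zip": "20005", "birth_year": 1984},
    {"name": "B. Jones", "zip": "20010", "birth_year": 1990},
]

def link(deid, pub):
    """Re-identify records by joining on the shared quasi-identifiers."""
    matches = []
    for d in deid:
        for p in pub:
            if (d["zip"], d["birth_year"]) == (p["zip"], p["birth_year"]):
                matches.append({"name": p["name"], **d})
    return matches

linked = link(deidentified, public)
```

Here the first record links uniquely to a named individual, re-attaching a sensitive diagnosis; this is the basic mechanism behind the linkage attacks Polonetsky warned about.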

Between the second and third panels, FTC Commissioner Noah Phillips made some remarks. Phillips emphasized that we are in the midst of a national and even international discussion of consumer privacy and that we first need to distinguish between the operations of a privacy regime and the underlying harms we ought to address. He further stated that rulemaking and regulations must come from Congress, additionally warning of the dangers of new regulations negatively affecting competition and entrenching certain companies. Phillips closed stating that laws alone are not enough for a proper shift in this space; we need to encourage consumers to take their privacy seriously and encourage businesses to alter their operations to respect privacy to a greater degree.

Panel 3 – Consumer Demand and Expectations for Privacy

The third panel of the day started with the question of whether there are consumer expectations and demands relevant to creating a privacy policy. Laura Pirri, Senior Legal Director and Data Protection Officer at Fitbit, answered in the affirmative. Pirri argued that companies are very motivated to understand consumers’ desires because it is simply good business practice, describing Fitbit’s early data portability efforts as one example of companies taking such action. Heather West, Senior Policy Manager at Mozilla, agreed with Pirri that consumer expectations and demands are important and do hold weight in company actions. Lorrie Faith Cranor, Professor of Computer Science, Engineering, and Public Policy at Carnegie Mellon University, raised the issue of whether we can know what consumer expectations and demands really are. Cranor pointed out that some companies know and see consumer expectations and demands while others have more trouble. Furthermore, consumer expectations and demands may be unrealistic or unnecessary depending upon the service.

The panel then shifted to the questions of what these common consumer demands and expectations are and how we assess or measure them. Ariel Fox Johnson, Senior Counsel of Policy and Privacy at Common Sense, felt consumer demands differ from person to person and group to group. Johnson gave the example of the differing needs of children and teens. She outlined parents’ concern with making sure children and teens are protected, while recognizing that parents, children, and teens have slightly different privacy concerns and so their wants are different. Fleshing out the example, Johnson stated that while parents really want their young children protected, they also want to know what’s going on in their teens’ lives, which often means less privacy for their teens, who more often than not want even more privacy from their parents. Johnson gave this as a clear example of consumer demands and expectations for privacy varying and coming into conflict with each other. Jason Kint, CEO of Digital Content Next, stated the goal is not just meeting consumer demands and expectations, recognizing they may vary, but maximizing trust in relationships between users and companies.

When asked if the privacy paradox exists, panelists agreed it probably doesn’t, and that what is actually seen are only surface-deep contradictions. Panelists also discussed balancing attributes of products or services against values like privacy. Avi Goldfarb, Professor of Marketing and Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto, stated it is important to remember that privacy is one attribute among many, and while it is clearly beneficial there are tradeoffs just as with other attributes. Consumer demand for certain tradeoffs, especially in the case of privacy, can create friction but also opportunity for companies to perform up to consumer standards and compete. Laura Pirri agreed there are tradeoffs with functionality, but also pointed out that beyond products and services, tradeoffs can be seen in data regulations – privacy considerations versus social-good considerations being one example. Panelists agreed it’s about striking the right balance between privacy and values such as innovation, research, and competition. There was also some panel agreement that purpose specification for data is a good start toward the principles they want to see. Panelists agreed that not all consumers have the same expectations and that the goal isn’t just to meet consumers’ basic expectations but to give them real choices, transparency, and control.

Panels 4 and 5 – Current Approaches to Privacy

The first day of Hearing 12 concluded with a double panel examining current approaches to regulating privacy, primarily focused on the European General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and existing US federal privacy laws and enforcement. Professor Margot Kaminski, University of Colorado Law, began the panel with an overview of these laws and set out a taxonomy of the key features and differences between modern privacy regimes:

  1. Consumer protection vs. data protection regimes
  2. Omnibus vs. sectoral regimes
  3. Notice and choice vs. ‘something else’ regimes
  4. Individual rights vs. compliance regimes
  5. Hard law vs. soft law regimes

Panelists then discussed the goals of privacy legislation and the virtues and shortcomings of the GDPR and CCPA in context of the consumer rights, corporate obligations, and enforcement mechanisms that should characterize a new US federal baseline privacy regime.


Panelists broadly supported the consumer rights established in the CCPA, but found substantial areas of concern about the impending law. Shaundra Watson, BSA | The Software Alliance, argued that a federal law can and should be stronger than the CCPA, doing more to protect consumer rights by reaching third party uses of data. David LeDuc, Network Advertising Initiative (NAI), agreed, stating that the CCPA is an unusual and inherently flawed privacy framework because it focuses primarily on regulating “the sale” of data, noting that first-party data uses can be harmful while third-party uses can be beneficial.


Panelists recognized that it would not be practicable or wise to attempt to import the GDPR wholesale into American law, but also saw several positives in the European approach. Markus Heyder of the Centre for Information Policy Leadership stressed the benefits of the GDPR’s focus on “organizational accountability” and spoke in support of the use of certifications and codes of conduct. Laura Moy, Georgetown Law Center on Privacy and Technology, approved of the GDPR’s provisions on purpose limitation, data minimization, and fining authority. Fred Cate, Professor of Law at Indiana University, offered a more critical assessment, arguing that the GDPR sets out numerous controls but fails to establish core goals, making it difficult for regulators to set standards.

Perspectives on US Legislation

Panelists were united in support for a new US privacy regime at the federal level. Shaundra Watson explained that the US sectoral approach developed in conjunction with emerging threats to privacy; however, the lines between different data categories and industry types have blurred, leaving the current framework no longer fit for purpose. David LeDuc cited the efforts of the recently established Privacy for America coalition (of which NAI is a member) that supports outright bans on certain harmful and unexpected data practices and the creation of a new Data Protection Bureau at the FTC.

While panelists agreed on high-level principles, differences emerged over fining authority, a private right of action, and federal preemption of state laws. Laura Moy supported a private right of action, noting that historically federal agencies could not be trusted to defend disadvantaged populations, and argued that for consumers a strong patchwork of state legislation would be better than a weak federal standard. Professors Kaminski and Cate disputed whether the prospect of large fines creates incentives for public/private partnership in regulation.

In the second panel on current approaches to privacy, partners at major law firms were presented with a series of five fact patterns and asked to evaluate how those scenarios would be regulated under both the GDPR and the CCPA. Lothar Determann, Baker McKenzie, began by stating the “data genie is out of the bottle” and that it is time for a privacy approach that focuses on harms, arguing that the GDPR had fully missed the mark on this issue. Alan Raul, Sidley Austin, offered two possible sources for identifying privacy harms that should be regulated: the recent UK “Online Harms White Paper” and the Spokeo case, which established precedent that an intangible injury can be actionable when grounded in a statutory right or common law. Jay Edelson, Edelson PC, argued that de-identification is largely a myth, citing the re-identification carried out by academics on data from the Netflix Prize data-mining contest. He also offered a defense of consent-based privacy regimes, positing that they provide an important restraint against companies changing their data practices at will.
