
FTC Hearings #7: Artificial Intelligence, Algorithms, and Predictive Analytics

Last week, the FTC held the seventh session of its hearings on Competition and Consumer Protection in the 21st Century at Howard University School of Law, with the spotlight on algorithms, artificial intelligence (“AI”), and predictive analytics. The overarching goals of this two-day hearing were to understand the fundamental aspects of AI and algorithms, to examine how AI could impact consumer protection, and to discuss antitrust and regulatory issues surrounding AI.


Panelists took turns describing the many real applications of AI. Dana Rao, Executive VP and General Counsel at Adobe, demonstrated that AI can segment out a section of a photo and seamlessly replace it with another image that has the desired lighting or background. However, he noted that for AI to work this way, it would need many data points and artists to train it:

“How do we make AI beneficial? You need access to data, and access to a variety of data. That kind of access to data [is what will] eliminate bias.”

Dr. Michael Abramoff, founder and CEO of IDx Technologies, discussed an AI-assisted tool that detects more than just a mild level of diabetic retinopathy, an eye disease found in adults with diabetes. Angela Granger, from Experian, talked about the uses of AI in credit scoring. Teresa Zayas Caban, Chief Scientist at the Office of the National Coordinator for Health Information Technology, pointed out that AI now has the potential in healthcare to detect metastatic cancers and observe lesions that doctors cannot identify with the naked eye. And lastly, Melissa McSherry, Senior VP and Global Head of Data Products at Visa, spoke about how AI is rapidly improving at distinguishing fraudulent financial transactions from legitimate ones.

While real-life applications of AI are impressive and will inevitably impact society in the years to come, Jennifer Wortman Vaughan, Senior Researcher at Microsoft Research, focused her talk on fairness and intelligibility in machine learning systems, noting that biases that negatively impact AI’s potential can arise through data sets. She believes that to reduce bias:

“We must choose our metrics carefully with these trade-offs in mind. Principles cannot be treated as afterthoughts and must be considered at every stage of the machine learning pipelines. Technology can be part of the solution – we should admit our mistakes and learn from them.”


While panelists acknowledged valid consumer protection concerns, they generally counseled against overly cautious regulation, instead favoring giving AI room to experiment and grow before turning to burdensome rules. Ryan Calo, Associate Professor at the University of Washington School of Law, cited California’s new law requiring bots to disclose that they are not human as an example of regulators enacting laws prematurely:

“Communicating with bots is a new form of communication and one that needs some breathing room, and one potential harm is that these emerging tech will freak us out and we will overreact. I’m not saying we should top down everything…”

Panelists also noted that, in terms of consumer protection, human behaviors can often be inherently contradictory. As Fred H. Cate, Professor at Indiana University Maurer School of Law, put it:

“People’s concerns are highly subjective and contextual. Are we talking about my data or your data? Types of concerns that individuals have are really different from the types of concerns that society has. Individuals don’t always make rational choices, but they know what they’re getting into and they do it anyway.”

Marianela Lopez-Galdos, Director of Competition & Regulatory Policy at the Computer and Communications Industry Association (CCIA) and a Project DisCo contributor, pointed out that algorithms are created by humans and are therefore subject to human biases and errors. She argued that these biases should be weighed against the value AI provides:

“Human beings and decisions are not perfect either, we can’t hope to have all decisions made by machines be perfect. Deploy AI systems knowing that they are imperfect because they bring added value to humanity and balancing those tradeoffs is going to be key for the future. That is not necessarily at this moment the right approach to take advantage of the full potential that machine learning has. We’re only in the nascent moment and if we start putting barriers to it we’re not allowing engineers to test the limits of it.”


The last panel focused on legal and regulatory questions surrounding the AI debate. Joshua New, Senior Policy Analyst at ITIF’s Center for Data Innovation, pointed to his latest work, How Policymakers Can Foster Algorithmic Accountability, to argue that regulators should only enact regulation where it is required. He pushed for more action and policy debates in the United States, arguing that while Europe’s GDPR is probably detrimental to AI, European regulators are at least being proactive.

Irene Liu, General Counsel at Checkr, agreed with New that unwarranted regulation is harmful to AI and that there are already incentives for AI developers to fix biases or mistakes in their algorithms. She also pointed to companies’ existing self-regulation efforts, such as the notion of “privacy by design,” and cautioned against over-regulation, noting that technology companies already face a plethora of regulations through different laws and agencies:

“The FTC Act and Section 5 is broad enough that you could apply any technology to it. It’s important for regulators that there are a number of regulations that really puts that checks and balances, [and] regulators should think about that holistically instead of adding another law that they’re regulating.”

Rather than more regulations, Liu proposed that the FTC provide guidelines so that companies can better understand how to conform to the appropriate standards and rules.

The eighth set of FTC hearings, on common ownership, will take place on December 6 at the NYU School of Law.
