Debate Over Online Content Embodies “Moderator’s Dilemma”
As online content moderation receives greater media attention, Internet services that rely upon the ‘Good Samaritan’ protections of 47 U.S.C. § 230 are increasingly the target of legislative proposals.
These proposals tend to reflect one of two points of view. Some, like the controversial FOSTA/SESTA, enacted last year with troubling results, reflect the view that online services should moderate content more aggressively. Others, like Rep. Gosar’s H.R. 4027, reflect the opposing view that online services should moderate content less aggressively, removing only unlawful content. These conflicting views may exist within current Administration policy as well: even as the White House convened a meeting in August to encourage online services to respond more aggressively to violent extremist content, unnamed officials told journalists of plans to subject companies’ content moderation decisions to greater scrutiny.
Caught between encouraging more aggressive and less restrictive content moderation, policymakers arguably face their own version of what Prof. Eric Goldman has described as “the moderator’s dilemma.” The moderator’s dilemma describes online services’ unappealing options in the absence of Section 230, i.e., an environment where services face broad liability for user-posted content. These sites may be social media services, cloud storage providers, or simply newspaper websites with comment sections, but all would face the same dilemma: how aggressively to moderate content.
What would occur in the absence of Section 230? On the one hand, services could attempt to manage liability risks by moderating content more aggressively. But doing so would risk incurring liability based upon knowledge of potentially problematic third-party content acquired in the course of moderation. It would also risk liability to users whose content is removed under the more aggressive policies. On the other hand, services could acknowledge that even the most aggressive moderation strategies won’t suppress all liability-inducing content, and simply refuse to try. By acting more like a simple conduit for content, services would face less risk of liability arising out of moderation efforts.
In all likelihood, different services would pursue both strategies. Some would implement draconian moderation policies, while others would adopt a “head in the sand” approach. Users of sites that continued moderating with a far heavier hand would find only sanitized content, excluding outlier viewpoints and elevating those speakers willing to indemnify online services against liability. In short, these sites’ content would look more centrist and orthodox, and their speakers would likely reflect wealthier, establishment viewpoints.
By contrast, sites that give up on moderation altogether may come to resemble “anything goes” forums like 8chan. Certainly, these sites would face litigation in the absence of Section 230 protections, and a few might even be bankrupted by it. But given the ease of starting a website and the ability of operators to relocate offshore, suppressing inappropriate content through litigation alone is not possible. Across an Internet that moderated only unlawful content, abuse, intolerance, and extremism would be commonplace.
Both of these outcomes are undesirable. Draconian moderation would endanger the open nature of the Internet and further marginalize under-represented voices. Websites with no moderation at all would be plagued with off-topic content, trolling, and abuse, and would likely become inhospitable, particularly to younger users. Intolerant and extremist views would proliferate.
Section 230 cuts through this dilemma by limiting services’ liability for user misbehavior while ensuring that sites cannot be sued for taking down content that might violate their policies or norms. This enables online services to implement site-specific policies for managing inappropriate content without risking liability. Proposals to amend Section 230 that fail to preserve this flexibility are likely to increase the availability of problematic content, marginalize non-establishment voices, or both.