(New York, N.Y.) – Last week, Facebook announced the formation of a 20-member independent oversight body tasked with making decisions on contentious content issues, such as online extremism, hate speech, harassment, and user privacy and security. Facebook’s Oversight Board comes nearly two years after a damaging New York Times exposé detailed how Facebook COO Sheryl Sandberg and other top company executives worked to “delay, deny and deflect” bad news following public relations crises. The process that led to the Oversight Board’s creation, together with the media announcements that followed, raises concerns that this is yet another public relations-driven measure in lieu of substantive change.
Even though Facebook CEO Mark Zuckerberg first outlined the concept in November 2018, the Oversight Board is still only in its initial phase. Users will be able to appeal to the board only in cases where Facebook has removed their content. The board vaguely claims that “over the next few months” it will add the ability to review appeals from users who want Facebook to remove content. Additionally, despite emphasizing a commitment to creating a system that is “accessible to the people,” the board’s co-chairs have noted that they “will not be able to offer a ruling on every one of the many thousands of cases that we expect to be shared with us each year,” further watering down the board’s effectiveness.
“Rather than work to enforce its own content policies, Facebook has chosen to make another cosmetic policy announcement in an attempt to appease critics. Facebook clearly outlines removal policies for extremist content in its Community Standards, but it inconsistently enforces those terms,” said Counter Extremism Project (CEP) Executive Director David Ibsen. “Moreover, when the company inevitably fails to remove dangerous content, it often shields itself under the umbrella of free speech concerns. Facebook’s Oversight Board is already making similar caveats and claiming that it is ‘committed to freedom of expression within the framework of international norms of human rights.’ Curiously, neither Facebook nor the Oversight Board cites other fundamental rights (e.g., the right to privacy or the right to security), perhaps because those rights may require the multi-billion-dollar company to alter its behavior dramatically.”
Clearly, Facebook’s latest measure is a reaction to widespread scrutiny from the public and policymakers. Lawmakers in Germany, for instance, are discussing ways to amend the country’s NetzDG online content moderation law to improve its effectiveness. In the U.S., there remains bipartisan support to amend Section 230 of the Communications Decency Act (CDA) to remove blanket liability protection for content. Rather than focus its energy on repairing its public image, Facebook should support specific legislative and regulatory proposals and end its significant lobbying against government regulation, a step that CEP has previously recommended.
CEP has also previously called on Facebook to support amending Section 230 of the CDA. Specifically, Section 230 must be amended to remove companies’ blanket protections from liability for content posted by third parties on their platforms when that content is incontrovertibly known to be extremist in nature or otherwise harmful. CEP has further called on Facebook to voluntarily release transparency reports on its efforts to monitor and remove extremist or otherwise harmful content, and to support amending the securities laws administered by the Securities and Exchange Commission (SEC).
The CEP resource Tracking Facebook’s Policy Changes outlines instances in which Facebook has made reactive policy changes and failed to uphold its commitment to eliminating hateful, harmful, and deceitful content from its platform.