Hello John,
The world has changed over the past year, and so have digital social spaces. We here at ADL’s Center for Technology and Society have been just as amazed, shocked, and concerned as you. Let’s bring you up to speed on what we’ve been working on.
Social Media Isn’t Getting Better
Our Online Hate and Harassment survey shows that despite hiring thousands of content moderators and training artificial intelligence software to take down online hate speech, technology companies have not made their social media platforms safer or more respectful. Forty-one percent of Americans who responded to this year’s survey said they had experienced online harassment, comparable to the 44% reported in
ADL’s 2020 Online Hate and Harassment report. Severe online harassment—sexual harassment, stalking, physical threats, swatting, doxing and sustained harassment—also remained relatively constant compared to the prior year, experienced by 27% of respondents, not a significant change from the 28% reported in the previous survey.
Social media platforms have created a problem so preposterously large that the solutions they currently propose are inadequate. The algorithms they’ve refined to drive their business models determine what people see on many social media platforms and play a significant role in amplifying disinformation and hate speech. Platforms build content-recommendation models to feed users posts, news, and groups that maximize engagement—and what tends to keep users logged on longer is divisive, hateful content.
Facebook Oversight Board Upholds Trump’s Suspension
While the Board made the right decision, it only puts a fig leaf on the larger problem of hate on the platform. Read ADL’s full statement. Daniel Kelley, CTS’ associate director, also weighed in on the Board’s decision on May 5.
Rep. Eshoo (D-CA): ADL Knows What It’s Talking About
Our report, “Exposure to Alternative & Extremist Content on YouTube,” found that despite YouTube’s claims that it changed its algorithm, the platform still frequently recommended videos from alternative or extremist channels to people who watched a video from those channels.
Approximately one in ten (9.2%) of the study’s participants watched at least one video from an extremist channel, and more than two in ten (22.1%) watched at least one video from an alternative channel, a gateway to extremist content. The study was led by Dartmouth professor Brendan Nyhan during his time as a CTS Belfer Fellow.
The report garnered the attention of Congress: the House Committee on Energy and Commerce sent a letter to Google CEO Sundar Pichai citing the report and voicing its concerns about YouTube’s amplification of extremist and alt-right content.
But the committee’s engagement with the report didn’t end there. Rep. Anna Eshoo of Silicon Valley referenced it during the March 25 congressional hearing on social media’s role in spreading disinformation, featuring Pichai, Facebook CEO Mark Zuckerberg, and Twitter CEO Jack Dorsey (check out his
bitcoin clock, by the way—we thought it was a doomsday countdown machine). In her trademark no-nonsense style, Rep. Eshoo grilled Pichai on YouTube’s role in helping radicalize users (Google owns the platform).
Here’s the transcript of a chunk of their exchange. You can also view her comments and Pichai’s response at more length here.
Eshoo: Last month, the Anti-Defamation League found that YouTube amplifies extremism. Scores of journalists and researchers agree. And here's what they say happens: a user watching an extremist video is often recommended more such videos, slowly radicalizing the user....So my question to you Mr. Pichai is, are you willing to overhaul YouTube's core recommendation engine to correct this issue? Yes or no?
Pichai: Congresswoman, we have overhauled the recommendation system. I know you've engaged on these issues before, pretty substantially…
Eshoo: Mr. Pichai, yes or no? We still have a huge problem. Are you saying the Anti-Defamation League doesn't know what they're talking about?
Well, shucks.
Facebook’s Bad Grade
Ensuring transparent policies against hate and harassment is critical. CTS regularly speaks to tech platforms and publicly holds them to account when necessary. But stating a clear policy isn’t enough. CTS scored a clear win with our friends at Facebook when they (at long last) changed their policy on Holocaust denialism after nine long years of sustained advocacy. But then we wanted to look more closely at how well Facebook and other platforms enforced their policies, so we built our
Online Holocaust Denial Report Card and awarded letter grades based on their performance.
How did they do? Let’s just say the biggest ones didn’t make the honor roll.
Facebook got a “D.” Last October, the company stated it would ban all Holocaust denial content, a decision that took nearly a decade. In our investigation, we found violative content on the platform three months after the announcement.
Twitter, YouTube, TikTok, and Roblox all got a “C” for their efforts. Reddit, Discord, and Steam earned a “D.”
Twitch was the only platform to earn a grade above middling or lousy. It got a “B.” Twitch and Twitter were the only platforms that took immediate action when Holocaust denial content was reported by anonymous users.
Two CTS Projects Selected as Finalists for Fast Company’s World Changing Ideas Awards!
The magazine honors businesses and organizations driving positive change in the world. More than 3,000 entries were submitted, and two of our projects were named finalists.
- Stop Hate for Profit, our campaign last summer with a coalition of civil rights and advocacy groups urging advertisers to pause their ad buys on Facebook until the platform addresses the spread of hate speech and disinformation, has been selected as a finalist in the Social Justice category.
- Disruption and Harm in Online Games Framework, our comprehensive framework designed by the Center for Technology and Society and the Fair Play Alliance to identify and root out disruptive conduct in the gaming world, has been selected as a finalist in the Media & Entertainment category.
Kudos to Twitch
Our report shows that Twitch’s experienced community moderators did a good job of making sure four high-profile livestreaming events co-hosted by prominent politicians, including Rep. Alexandria Ocasio-Cortez, were positive, inclusive spaces—proving once again that social media doesn’t have to devolve into a cesspool of hate and disinformation. For a solid write-up of the report,
Engadget has a great story.
CTS Recommends
“How Facebook Got Addicted to Spreading Misinformation,” MIT Technology Review. Artificial intelligence reporter Karen Hao takes a deep dive into the troubling consequences of Facebook’s relentless pursuit of growth.
“Revealed: the Facebook Loophole That Lets World Leaders Deceive and Harass Their Citizens,” The Guardian. An investigation by tech reporter Julia Carrie Wong looks at how Facebook enabled leaders across 25 non-Western countries to use the platform for political manipulation.
“Inside the All-Hands Meeting That Led to a Third of Basecamp Employees Quitting,” The Verge. ADL might have played a teeny, tiny role in the exodus.
"The first part of the meeting was devoted to discussing the events that had unfolded in the company’s internal Basecamp chat last month, in which an employee had cited the Anti-Defamation League’s “pyramid of hate” to argue that documents like the “funny” names list laid a foundation that contributes to racist violence and even genocide."
The Lawfare Podcast: The Challenges of Audio Content Moderation. Lawfare talks to Sean Li, former head of trust and safety at Discord and a friend of CTS, about what makes audio content moderation different from text-based moderation.
“Google Turmoil Exposes Cracks Long in Making for Top AI Watchdog,” Bloomberg. Current and former Google employees and AI researchers speak about the company’s handling of claims of harassment, racism, and sexism.
“A Man Bragged About Storming the Capitol on a Dating App. Then His Match Reported Him to the FBI,” CBS News. Taking the law into your own hands has a whole new meaning now.
We’re Hiring!
CTS is hiring three senior software engineers to develop cutting-edge tools to measure online hate, and a director of policy and research
to lead our work managing scholarly research and writing content on tech policy and legislation. If you or someone you know wants to be part of a nimble, mission-driven, high-performing team that loves slow high-fives and running snarky commentary on Zoom calls, apply!
Seen on the Internet
CTS loves a good meme and this one is for the ages.
Take Action
You too can fight hate—and you can do so from the comfort of home. If you have encountered hateful content on a platform and want to report it, consult our Cyber-Safety Action Guide to see each platform’s hate speech policy and submit a complaint.