Hello John,
After the dramatic Euro 2020 final, three Black players from the English national soccer team were the targets of racial abuse on Facebook, Instagram and Twitter. It was sadly predictable, and yet another example of social media platforms being ill-prepared for events bound to draw high levels of hate speech and harassment, especially toward vulnerable people and communities.
Incidents like these underscore the importance of Stop Hate For Profit, the campaign launched a year ago by ADL and other organizations to hold tech companies accountable for the proliferation of hate speech on their platforms. Read on for an update on Stop Hate For Profit and to stay abreast of the Center for Technology and Society’s insights.
Stop Hate For Profit: The Year in Review
Big tech companies have allowed abuse, racism and misinformation to flourish on their social media platforms, as shown in our annual survey of online hate and harassment. Users are plunged into a rabbit hole of incendiary, low-quality content amplified by platforms to keep people outraged and hooked, and this is not by accident. Many people face real harm that prevents them from working, going to school or connecting with friends.
Last July, the Stop Hate For Profit coalition formed to demand that Facebook address the widespread hate, racism and misinformation on its platforms (Facebook also owns Instagram and WhatsApp). Stop Hate For Profit drew the support of thousands of businesses that paused spending on Facebook and Instagram advertisements for a month. Pressure generated by the campaign compelled other tech companies not targeted by Stop Hate For Profit to take action.
CTS has been tracking the progress of social media platforms to determine whether they have made much-needed structural changes, especially after a contentious election cycle, the January 6 insurrection and the general insanity we’ve endured over the past 15 months.
Are the platforms doing enough? Read and share our report.
The Perils of Audio Content Moderation: A New Frontier of Awful
Audio-focused platforms like Clubhouse, Discord, Twitch and Twitter Spaces are exploding. Clubhouse grew from 600,000 users to over 10 million in less than six months. Other companies, such as Spotify, have also rolled out their own audio-focused platforms. But success brings growing pains. Antisemitic incidents plagued Clubhouse, which was unprepared to moderate content from an avalanche of new users (read on for a rabbi’s take on this).
As millions of people migrate to audio-focused apps, how do those platforms moderate content? CTS’ three-part series explores this question. Part one focuses on the implications of recording audio for Trust and Safety purposes. Part two discusses best practices moderators can follow when reviewing audio if platforms decide to record users. Stay tuned for part three, which will cover publicly announced approaches to and models of audio content moderation. More on that in our next newsletter!
We reached out to Clubhouse following the antisemitic incidents, and to Clubhouse’s credit, it has taken action by finally allowing users to report rooms where abusive speech takes place. And based on CTS recommendations, Clubhouse also added the ability for users to report issues based on identity!
We love it when platforms treat user safety and trust as paramount. That’s how social media should work.
CTS Recommends
“Clubhouse of Antisemites,” Tablet. Speaking of Clubhouse... the aforementioned rabbi confronted antisemites on the app, only to get kicked off it.
“YouTube’s Recommender AI Still a Horror Show, Finds Major Crowdsourced Study,” TechCrunch. Our partner Mozilla released the findings of an innovative study that culled data from volunteers in 91 countries who downloaded a browser extension that let them report YouTube videos they “regretted” watching. The extension tracked which videos were recommended to users. Despite YouTube stating that it changed its algorithms to no longer recommend extremist or hateful content, Mozilla’s research found that the Google-owned platform still amplifies misinformation, violent content, hate speech and scams.
Mozilla’s disturbing results lend further credence to CTS’ report published earlier this year, “Exposure to Alternative & Extremist Content on YouTube.”
“Everyone Should Decide How Their Digital Data Are Used—Not Just Tech Companies,” Nature. Smartphones and websites track what we watch, what we buy, and where we go. Three scholars call for data to be owned by public institutions.
“How Twitter Hired Tech's Biggest Critics to Build Ethical AI,” Protocol. A look into the group of boss ladies (and friends of CTS) who make up Twitter’s Machine Learning, Ethics, Transparency and Accountability (META) team. META is building a system that could prevent abuse before it happens rather than act on violative content after victims have already seen it.
“Inside Facebook’s Data Wars,” The New York Times. How Facebook’s obsession with maintaining its public image undermines its transparency.
“An Ugly Truth: Inside Facebook’s Battle for Domination,” by Sheera Frenkel and Cecilia Kang. Our beach reading this summer.
We’re Hiring!
CTS is hiring two more senior software engineers to develop cutting-edge tools to measure hate online. If you or someone you know wants to be part of a nimble, mission-driven and high-performing team that loves memes and comedy specials that hold up a terrifying mirror to society, apply!
Seen on the Internet
Like half of America, we here at CTS have been obsessed with the buzzy and appropriately titled “Inside,” the comedy special Bo Burnham filmed during lockdown. The definitive document of the pandemic (we say so anyway), “Inside” is hilariously and darkly relevant to our work: listen to the song “Welcome to the Internet.” No other song encompasses the curiosity and absurdity of CTS’ work in such a hypnotic, catchy way.
Take Action
You too can fight hate—and you can do so from the comfort of home. If you have encountered hateful content on a platform and want to report it, consult our Cyber-Safety Action Guide to see each platform’s hate speech policy and submit a complaint.