
CfA's August 18, 2023 Newsletter

With your support, Campaign for Accountability is working to expose corruption and hold the powerful accountable.

This Week's Updates: 

New Studies from TTP and ADL Shed Light on Algorithm-Driven Antisemitism  
Yesterday, CfA’s Tech Transparency Project (TTP) and the Anti-Defamation League (ADL) released a pair of studies revealing how mainstream social media platforms amplify antisemitism through content recommendation algorithms, search features, and auto-generated pages. As platforms like Facebook, Instagram, Twitter, and YouTube have shifted from chronological feeds to individualized recommendations, a user’s activity and preferences have come to shape the type of content they are served. In the first study, TTP and ADL used test accounts with an established interest in conspiracy theories and found that Instagram, Facebook, and Twitter began recommending extremely antisemitic content; Instagram, in particular, pushed even more egregious posts to an account registered as a teen. 
 
For the second study, researchers tested the auto-complete search functions of several social media platforms, as well as Google, to explore the pervasiveness of hate groups. Facebook, Instagram, and YouTube were all found to be hosting extremist content, which their search-autocomplete features made easier to find. In their drive to engage users, platforms like YouTube and Facebook even auto-generated pages or channels for hate groups – an issue that TTP has previously covered. Meanwhile, Google created “knowledge panels” in searches for hate groups, some of which linked directly to their official pages or promotional events featuring “recruiters.” Of all the platforms featured in the study, YouTube did the best job of ignoring a user’s interest in antisemitic content and avoided recommending hateful videos. In light of YouTube’s success, it is especially disturbing that Meta allowed Instagram to recommend the most extreme antisemitic content to a teenage test user. 
A Possible Crypto Walk-Back in Arkansas 
Two weeks ago, CfA’s newsletter highlighted “right to mine” crypto bills, which have been signed into law in Arkansas and Montana and limit the ability of local governments to regulate crypto mining operations. Notably, both bills were based on model legislation created by the Satoshi Action Fund, a lobbying group led by a pair of former Trump Administration officials. In Arkansas, the law has been criticized by residents and lawmakers who feel its passage was rushed. Some towns have managed to quickly pass ordinances limiting the decibel level or placement of crypto mining operations before the law takes effect. Now, Arkansas Sen. Bryan King (R) is attempting to repeal the legislation, though his bill cannot be considered until 2025 without a special session of the General Assembly. 
 
“We’ve discovered that the crypto facility proposed in Arkansas would use enough power for 7,000 or 8,000 homes, while only employing two or three people,” King told Government Technology. “A lot of times whenever these facilities are placed and energy usage goes up, it’s basic economics that citizens’ energy rates go up as well.”
Boogaloo Movement Returns to Facebook (Again) 
On Thursday, Vice News published a story highlighting the Boogaloo movement’s presence on Facebook, which has been growing steadily despite Meta’s promise to address it. Drawing on data and insights provided by TTP, Vice’s Tess Owen explained how violent, anti-government movements like the Boogaloo Bois have been able to avoid detection by using coded terms and a network of backup pages. As Meta’s “year of efficiency” eats into its trust and safety teams, platforms like Facebook are gradually returning to a more unmoderated state, where extremist organizations can fly under the radar by changing their lingo. In September 2022, TTP noted that Boogaloo activity was beginning to surge on Facebook, as groups and meme pages gathered thousands of members. While Meta CEO Mark Zuckerberg has previously extolled AI as a content moderation solution, extremist organizations are simply able to move faster, inventing new signifiers for their ideology that AI systems can’t pick up on. For the Boogaloo Bois, then, Facebook’s disinvestment in human trust and safety workers has been a gift. 
What We're Reading
Dems again press Supreme Court for ethics council after ‘insufficient’ initial response
Wildfire evacuees frustrated by Facebook news ban in Canada
George Santos has violated a federal ethics law — again — by failing to file his annual financial disclosure on time

We thank you for your continued support. Without people like you, our work would not be possible.

Here is how you can stay involved and help us accomplish our mission:
  1. Follow CfA on Twitter.
  2. Follow the Tech Transparency Project on Twitter.
  3. Tell your friends and colleagues about CfA. 
  4. Send us a tip.
  5. Make a tax-deductible donation.
Be on the lookout for more updates about our work in the upcoming weeks. Thanks again for signing up to be a part of CfA!  
 
Sincerely, 

Michelle Kuppersmith
Executive Director, Campaign for Accountability
Copyright © 2023 Campaign for Accountability, All rights reserved.
You signed up for this list at campaignforaccountability.org

Our mailing address is:
Campaign for Accountability
611 Pennsylvania Ave SE
#337
Washington, District Of Columbia 20003

