With your support, Campaign for Accountability is working to expose corruption and hold the powerful accountable.
This Week's Updates:
TTP and ADL Report: Profiting from Hate
On Wednesday, CfA’s Tech Transparency Project (TTP) released a joint report with the Anti-Defamation League (ADL) examining whether four major social media platforms (YouTube, X (formerly Twitter), Facebook, and Instagram) are potentially profiting from ad placements alongside searches for hate groups and extremists. Using a list of hate groups from ADL’s Glossary of Extremism, researchers conducted searches on each platform and recorded any advertisements that appeared in the search results. Of the platforms examined, YouTube proved to be the most problematic: not only were ads for brands like McDonald’s, Microsoft, and Disney served alongside music videos for white supremacist bands, but YouTube itself appears to have generated those videos as part of an automated process to create more content for its platform. X also placed ads alongside searches for hate groups, though for a smaller percentage of searches than YouTube. Meta-owned Facebook and Instagram, by contrast, served ads in only a handful of searches, demonstrating that it is possible for tech platforms to address this issue.
Yelp Braces for Crisis Pregnancy Center Fight in Texas
Last week, Texas Attorney General Ken Paxton notified Yelp that the state would attempt to punish it for attaching disclaimers to crisis pregnancy centers (CPCs), which do not provide abortions and often attempt to mimic legitimate clinics. The company began flagging CPCs after the fall of Roe v. Wade, a move its complaint characterizes as part of a broader effort to “provide additional information to help mitigate the potential for deception.” Paxton’s lawsuit, filed yesterday, accuses Yelp of violating Texas’ Deceptive Trade Practices Act with an earlier disclaimer, which alerted users that CPCs “typically provide limited medical services and may not have licensed medical professionals onsite.” Paxton objected to the wording of that statement, which Yelp then changed to read: “This is a Crisis Pregnancy Center. Crisis Pregnancy Centers do not offer abortions or referrals to abortion providers.” Despite Yelp’s updated disclaimer, which Paxton has acknowledged is “accurate,” Texas is still seeking to hold Yelp liable for the original wording.
Yet, it is completely accurate to say that the medical services CPCs provide are “limited,” and that CPCs have a long track record of disguising themselves as real abortion clinics in order to deceive patients. When state governments support this deceptive behavior, it becomes even harder for women to seek comprehensive care. In 2020, CfA urged Pennsylvania officials to pull state funding from an organization called Real Alternatives, which channeled money to CPCs that were forbidden to even discuss contraception with patients. Pennsylvania Gov. Josh Shapiro (D) eventually terminated the contract with Real Alternatives this year, cutting it off from taxpayer dollars. Texas, on the other hand, allocated $100 million to CPCs for 2022 and 2023.
Tech Talking Points Emerge at Senate Hearing on AI and Elections
This week, the Senate Rules Committee held a hearing titled AI and the Future of Our Elections, which gave lawmakers the chance to discuss potential regulations for AI in political speech. In her opening remarks, Sen. Amy Klobuchar (D-MN) suggested a three-pronged approach that would combine FEC enforcement, mandated disclaimers, and outright bans on certain AI tools in election-related communications. Along with Sen. Josh Hawley (R-MO), Klobuchar has introduced a bill amending the Federal Election Campaign Act to prohibit materially deceptive AI-generated media relating to federal candidates, with exceptions for parody and satire. For some witnesses at the hearing, though, that bill went too far: Ari Cohn, Free Speech Counsel for the tech-funded organization TechFreedom, argued that AI-generated media could serve to “characterize a candidate’s position” or “highlight differences between two candidates’ beliefs.” Cohn then suggested that Congress fight misleading AI by bolstering “digital literacy” – the same refrain adopted by social media companies when pressed about their role in spreading misinformation.
Industry-aligned witnesses also claimed that AI was already so widely used in creating political ads that adding disclaimers would be meaningless. Neil Chilson, a researcher with the right-leaning Center for Growth and Opportunity at Utah State University, argued that nearly any advertisement could be labeled “AI generated” because media capture and editing software currently makes extensive use of AI features. An iPhone photo of a candidate, for instance, might technically be considered AI-generated because of the automatic “enhancements” applied by the device’s software. Sen. Klobuchar acknowledged these concerns and said she would address the issue of over-broad disclosures rather than abandon the regulatory effort entirely.