With your support, Campaign for Accountability is working to expose corruption and hold the powerful accountable.
This Week's Updates:
Meta Approves Harmful AI-Generated Ads Targeting Teens
Twenty-four hours before Mark Zuckerberg appeared at the Senate’s hearing on online child safety, CfA’s Tech Transparency Project (TTP) released a report revealing that Meta is still failing to stop harmful, inappropriate advertisements from being targeted at teens. TTP first investigated this problem in 2021, and while Meta eventually announced some teen ad restrictions, its screening process still falls dangerously short.
Using Meta’s own Imagine AI tool, TTP researchers generated a series of images related to unhealthy weight loss, drug and alcohol use, gambling, and violent extremism. Researchers then added text to the images and submitted them as teen-targeted advertisements through Meta Ads Manager. All of the ads were quickly approved for distribution on Facebook and Instagram, though any human reviewer would have immediately rejected them for violating multiple policies that Meta claims serve to protect teens. Unfortunately, the company’s automated screening systems appear incapable of enforcing those guidelines.
For one of the ads, TTP told Meta’s Imagine AI to create an image of a “sad and thin” girl standing next to a scale, with her ribs showing. Researchers added text directing teens to visit “pro-ana” Instagram accounts, which promote anorexia and disordered eating. Meta approved the ad despite this obviously harmful language. Some lawmakers, like Sen. Mark Warner (D-VA), have expressed concern about the impact of generative AI on individuals with eating disorders; the Senator shared TTP’s research on X and described the image generated and approved by Meta as a “horrifying use of this technology.” These harmful ads might have reached thousands of young people on Facebook and Instagram, but TTP did what Meta apparently couldn’t and canceled them before they could run.
SEC Decision Could Expand Crypto Mining in Texas
In early January, the Securities and Exchange Commission (SEC) issued a much-anticipated decision allowing spot Bitcoin exchange-traded funds to be listed on stock exchanges, opening the door for more traditional investors. Experts who spoke with the Houston Chronicle say the agency’s ruling will likely trigger an expansion of the Bitcoin mining industry, which already gobbles up 2% to 3% of all power consumed in Texas. While Bitcoin miners and their lobbying groups insist that this activity strengthens the grid, it is likely raising costs for consumers, who foot the bill when Texas’s grid operator, ERCOT, pays Bitcoin miners to curtail their energy use. TTP released a report on this controversial arrangement in July 2022, after Winter Storm Uri caused deadly power outages and delivered a windfall of $126 million to the crypto industry. The noise produced by crypto mining facilities can also be disruptive: residents of Granbury, Texas say the constant roar of cooling fans has been rattling their windows, giving people migraines, and scaring off wildlife.
Momentum Grows for Regulating Deepfakes
On Tuesday, a bipartisan group of lawmakers introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which would allow victims of nude or sexual deepfakes to sue any individuals who created, possessed, distributed, or received the images. The technology used to create deepfakes has existed for years, but advances in generative AI have lowered barriers to access, and those unable to generate deepfakes themselves can simply buy them. In November, 404 Media reported that the popular AI model-sharing platform Civitai had become a marketplace for sexual deepfakes of real people, in addition to simulated child sexual exploitation material. Given these conditions, it was only a matter of time before a high-profile individual was targeted.
This past weekend, X blocked all searches for “Taylor Swift” because the platform’s diminished content moderation team was unable to stem a flood of sexually explicit deepfakes depicting her. Victims of similar harassment campaigns have been sounding the alarm for years, and Swift’s experience may prove a tipping point for real legislative protections. It’s worth noting that the DEFIANCE Act places liability only on the individuals who create or trade sexual deepfakes of real people, not on the companies whose products, trained on large volumes of scraped, copyrighted material, can generate nonconsensual pornography.