With your support, Campaign for Accountability is working to expose corruption and hold the powerful accountable.
This Week's Updates:
Michigan Passes Political Deepfake Law
Last week, Michigan Governor Gretchen Whitmer signed a package of bills that criminalizes the use of deceptive political deepfakes and requires disclosures when artificial intelligence is used to generate a campaign advertisement – a measure some tech industry groups have characterized as unnecessary or even unconstitutional. Some advertising platforms, including Google and Meta, have already announced that they will require disclosures for AI-generated political advertisements. Given both companies’ track records on advertising policy enforcement, though, it may be short-sighted for lawmakers to depend on them to develop their own norms. CfA made the same point in a recent letter to the Federal Election Commission, which called on the agency to create rules holding campaigns accountable for “fraudulent misrepresentation” achieved with AI. With the U.S. political advertising market now projected to reach $16 billion, it’s particularly important for regulators to step in and place guardrails on this new technology.
The lack of federal deepfake legislation combines dangerously with a recently proposed rule from the Federal Election Commission, which has indicated that it will not require social media influencers to disclose when a campaign has paid them to make posts supporting a candidate. Under the agency’s suggested rules, Americans might open Instagram and see that a popular influencer has posted what appears to be a video of a candidate making outlandish claims or behaving strangely. Behind the scenes, the video could have been generated by an opposing political campaign, which then paid the influencer to post it. It would be left to Meta to identify and flag the video as a deepfake, and to enforce its own disclosure guidelines.
TTP Report Cited in New Mexico’s Latest Meta Lawsuit
This week, a complaint filed by the New Mexico Attorney General’s office demonstrated how Meta is allowing adults on its platforms to harass and sexually exploit children, despite its promises to address the issue. Previous reports in The Wall Street Journal have made clear that Meta’s safeguards are failing to stop pedophiles and are even encouraging their engagement with child sexual abuse material, but New Mexico investigators took this work a step further by creating Facebook and Instagram accounts that appeared to be operated by vulnerable children. When registering, they entered ages below Meta’s cutoff and were initially denied the ability to create an account. After multiple unsuccessful attempts, they changed their birthdates and created accounts with the same identifying information – just as any child could do. Meta did not flag this behavior as suspicious, and allowed them to sign up as “adults.”
One of the test accounts belonged to a fictional 13-year-old called “Issa Bee.” Investigators also made an account for Issa’s mother, “Cereceres,” who indicated in public posts that she was willing to traffic her daughter. By the time the Attorney General’s office concluded its experiment, the accounts had been inundated with sexually explicit messages and comments from adults. When “Issa” reported this behavior to Instagram or Facebook, the platforms failed to act, or automatically determined that the content hadn’t violated their policies. The state’s complaint went on to cite a report from CfA’s Tech Transparency Project (TTP) on Meta’s failure to detect child sexual abuse on its platforms, which found that, in the majority of cases, federal law enforcement had to rely on tip-offs from the public or on leads from other investigations. Based on these findings, New Mexico’s Attorney General concluded that Meta had allowed Facebook and Instagram to become “a marketplace for predators,” which lured children in with addictive algorithms and then failed to keep them safe. The full complaint, which contains detailed information about the test accounts, can be found here.
TTP’s Academic Influence Report Tracks Meta Spending
On Wednesday, TTP released a new database to track donations made to academic institutions by both Meta and the Chan Zuckerberg Initiative (CZI), which is controlled by Meta CEO Mark Zuckerberg and his wife, Priscilla Chan. TTP also released a report outlining the context for several notable donations, and found that Meta’s giving was generally tied to its own products, while CZI’s spanned an array of subjects including K-12 education, artificial intelligence, and biomedical research. This project was developed in collaboration with the Real Facebook Oversight Board, and was published just two days after disinformation researcher Joan Donovan accused Harvard University of pulling support for her work in deference to a $500 million donation from CZI. In her whistleblower disclosure, Donovan alleged that senior leaders from the Harvard Kennedy School began restricting her research and told her that, as a staff member, she lacked academic freedom. In January 2023, Donovan was barred from holding public events or participating in activities that would “raise her profile.”
If Donovan's assertions about Harvard's deference toward Meta are borne out, it wouldn't be the first time a powerful corporate interest has successfully exerted influence over academic research. The Tobacco Industry Research Committee (TIRC), for instance, was founded in the 1950s as a grantmaking organization to fund research into the relationship between cigarette smoking and cancer. In reality, tobacco companies used the TIRC to boost scholarship challenging the emerging scientific consensus on smoking’s risks. Decades later, Harvard medical historian Allan M. Brandt would describe the organization as “one of the most intensive efforts by an industry to derail independent science in modern history.” While the truth eventually came out, the TIRC bought the tobacco industry a few more years of plausible deniability, and it can be seen as an early model of academic capture.