Welcome to The Corner. In this issue, we explore the need for regulation mandating the disclosure of increasingly prevalent AI-generated content by major tech platforms.
In April and May, Meta and TikTok announced their intention to start automatically labeling AI-generated content on their platforms. The changes are set to roll out in the coming weeks, though the exact timeline for full adoption remains unclear. The announcements come amid rising concerns that the platforms’ embrace of the artificial intelligence boom will make it even harder for users to tell the difference between real news, authentic speech, and disinformation. Prior to these new guidelines, disclosure policies for AI-generated content were nonexistent or tailored to narrow circumstances of misinformation. Going forward, the two corporations will label content created using leading artificial intelligence tools. Neither corporation, however, has explained how prominent the labeling will be or whether the two plan to adopt a consistent format.

The move by leading platforms to automatically label content is a welcome stopgap. Congressional leaders have been slow-walking AI regulation efforts in the United States, despite the urgency of the looming election season. But these policies are no replacement for clear legal standards on marking and disclosing when content is created or manipulated with AI tools. Relying on the platforms to craft their own voluntary disclosure policies may lead to a patchwork of labeling systems that vary in prominence and consistency from one platform to the next. That threatens to shift the burden of identifying the origins of AI-generated content onto average users. This is especially true given that several of the leading platform monopolies have invested significant resources in artificial intelligence ventures. Any practice that might limit the public’s acceptance of AI-generated content poses a risk to their bottom lines.

Since 2020, Meta has labeled video content altered to make it appear as if someone said something they did not, regardless of whether those alterations were made manually or with artificial intelligence tools. That policy was adopted following the initial wave of concerns over “deepfakes” meant to distort public discourse. The platform has also labeled photorealistic images created using the corporation’s own AI tools. In February, it announced its intention to expand that policy to images created with other corporations’ tools. Shortly after the announcement of that modest labeling expansion, a decision from Meta’s Oversight Board, also issued in February, pushed the platform giant to adopt an AI-labeling policy that applies to audio and video as well. A few weeks later, TikTok—which is fighting a battle to retain access to markets in democratic nations like the United States due to its spotty history on information integrity practices—followed suit with a policy to automatically label all uploaded videos that use AI and to require users to disclose the use of AI tools as part of the upload process.

The technology underpinning TikTok’s and Meta’s new labeling policies is itself the product of a voluntary partnership among leading artificial intelligence companies that allows metadata to be embedded within media created using their tools. But the voluntary nature of this technology limits its practical impact. Many leading developers, such as OpenAI and Adobe, have opted in, but workarounds remain for those seeking to disseminate content created or altered by AI discreetly.
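For readers curious how this embedding works in practice, the sketch below shows the flavor of check involved. It is a minimal, illustrative Python heuristic, assuming the partnership follows the C2PA “Content Credentials” convention of storing provenance manifests in JPEG APP11 segments as JUMBF boxes labeled “c2pa”; a real verifier would parse those boxes and validate cryptographic signatures with an official SDK rather than scan raw bytes.

```python
# Heuristic check for embedded C2PA-style provenance metadata in a JPEG.
# A minimal sketch, not a validator: the marker details are assumptions
# drawn from the public C2PA specification, and real verification requires
# parsing the JUMBF box structure and checking cryptographic signatures.

def has_provenance_marker(path: str) -> bool:
    """Return True if the file appears to carry a C2PA-style manifest."""
    with open(path, "rb") as f:
        data = f.read()
    # C2PA manifests travel in JPEG APP11 segments (marker bytes 0xFF 0xEB)
    # as JUMBF boxes whose manifest store is labeled "c2pa".
    return b"\xff\xeb" in data and b"c2pa" in data


if __name__ == "__main__":
    import sys
    print(has_provenance_marker(sys.argv[1]))
```

The fragility is the point: because the provenance signal is simply data inside the file, re-encoding the media or running it through a tool that strips metadata erases it entirely.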
Those seeking to evade disclosure can use tools from corporations that have not adopted the voluntary standards, or create new tools that remove the embedded data, with no legal penalty for doing so. Meta’s own moves in recent months demonstrate the limits of voluntary compliance: after initially claiming in February that current technology did not allow it to begin automatically labeling video and audio content, the corporation changed course a few weeks later following the aforementioned intervention from the Oversight Board.

Other tech giants appear more reluctant to embrace AI disclosures. Search giant Google, for instance, has been mostly silent on the matter, and the corporation has yet to announce any meaningful labeling strategy for content made outside of its own proprietary tools, even as its image and web search results become increasingly littered with low-quality AI-generated content.

But American users do not have to settle for a patchwork labeling regime. Regulators can instead set standards: prominent disclosure requirements; industry-wide adoption of invisible watermarks and other embedded-data practices that make the use of AI tools in content creation identifiable; and strict penalties for actors who remove that identifying data or build tools designed to circumvent embedding requirements. With the introduction of the AI Labeling Act last year, Hawaii Senator Brian Schatz has taken the lead in crafting such a system. His bill would instruct the Federal Trade Commission to create and enforce labeling policies as part of its mandate to regulate unfair and deceptive business practices — a power the agency is already using to investigate many of the companies creating these tools. With enough evidence and enforcement actions, the FTC may eventually be empowered to pursue labeling rules through its existing rulemaking authorities, as it has in many other instances where emerging technologies have required a fresh look at preexisting laws.

Content and product labels are standard fare in consumer protection. Practically every video advertisement comes with a slew of mandatory disclosures, and agencies from the Department of Agriculture to the Federal Communications Commission have been delegated authority to regulate disclosure standards on virtually every product or service imaginable. Empowering America’s consumer protection agencies to apply a similar framework to AI content would continue the nation’s strong tradition of enabling users to engage with new types of content in a safe and informed manner.
The Open Markets Institute and The Guardian US will host a conference next month on America’s information crisis and discuss solutions to address it. Speakers will include high-profile policymakers like Senators Elizabeth Warren and Amy Klobuchar, European Competition Commissioner Margrethe Vestager, and Jonathan Kanter, assistant attorney general for antitrust at the Department of Justice. The discussion will focus on ways to bolster the supply of trustworthy journalism, address the problem of tech platforms manipulating and censoring what individuals read, and stop corporations from taking the work of journalists and publishers without compensation.

Big Tech’s business model incentivizes disinformation — in some cases, it even boosts calls for violence. Dominant online platforms pose many additional dangers to journalism, free speech, and democracy, including blocking U.S. citizens from sharing news with one another, starving publishers of advertising and readers, and simply appropriating news for their own purposes without compensation. And now AI can be deployed to amplify every one of these threats.

The timing of our event, to be held at the National Press Club in Washington, D.C. on June 27, could not be more critical, with vitally important elections this year in the U.S., UK, Europe, and more than 50 countries around the world. RSVP for the event here.
The Open Markets Institute’s legal director Sandeep Vaheesan provided technical assistance to New York policymakers in drafting a bill to convert Central Hudson Gas & Electric into the democratically governed Hudson Valley Power Authority. Assemblymember Sarahana Shrestha and Senator Michelle Hinchey announced last week they would introduce the bill, which would replace absentee ownership with effective local control and prioritize affordability, reliability, and sustainability over profits. At present, Central Hudson is owned by a Canadian holding company and faces public dissatisfaction over high rates and poor customer service. Vaheesan is an expert on publicly owned power cooperatives, which he has written about in his forthcoming book Democracy in Power: A History of Electrification in the United States, to be published by the University of Chicago Press in December.
OMI Co-Hosts Two More June Events on Private Equity in Childcare and Food Monopolization

The Open Markets Institute, National Women’s Law Center, and Community Change will convene policymakers and advocates for an all-day event, “Children Before Profits: Addressing the Risks of Private Equity in the Child Care Industry,” on June 24 to discuss how to protect the child care industry from falling prey to private equity funds. With private equity funds rolling up small providers and building corporate chains in industries from nursing homes to family doctors’ practices, alarm bells are sounding across the care economy about the harm being wrought on families, communities, and care workers. The event will take place at the Eaton Hotel in Washington, D.C. RSVP for the event here.

The Open Markets Institute and the filmmakers behind Food, Inc. 2 will host a virtual roundtable discussion, “Too Big to Feed: The Threats of Corporate Consolidation on the Food Supply,” on June 12. Speakers on the panel include Eric Schlosser, producer of Food, Inc. 2 and author of Fast Food Nation; Max Miller, attorney advisor to Federal Trade Commissioner Alvaro Bedoya; and Claire Kelloway, food program manager for the Open Markets Institute. The discussion will focus on the monopolization of our food system and what policymakers can do to break up the corporate giants that dominate it. More details on registration will follow.

📝 WHAT WE'VE BEEN UP TO:
🔊 ANTI-MONOPOLY RISING:
We appreciate your readership. Please consider making a contribution to support the continued publication of this newsletter.

📈 VITAL STAT: 50%

The share of the local television market that would be captured if Paramount is successfully bought by private equity group Apollo Global Management and Sony, which have offered an all-cash bid of $26 billion for the Hollywood studio. The deal would run afoul of the Federal Communications Commission’s rule capping any one broadcaster’s local television stations at 39% of U.S. households. Paramount’s 28 local television stations already account for 39% of U.S. households, while Apollo reaches 11% of households through its stake in cable company Cox Media Group. (New York Post)

📚 WHAT WE'RE READING:

The Wolves of K Street: Wall Street Journal investigative reporter Brody Mullins and Washingtonian magazine senior writer Luke Mullins pull back the curtain on the powerhouse lobbying firms that dominate Washington. By telling the stories of how three lobbying empires rose to prominence, the authors demonstrate the sweeping influence large corporations have on American policymaking.

🔎 TIPS? COMMENTS? SUGGESTIONS? We would love to hear from you—just reply to this e-mail and drop us a line. Give us your feedback, alert us to competition policy news, or let us know your favorite story from this issue.