We explore the everyone-for-themselves, DIY era of regulating tech and moderating content.
March 19, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here ([link removed] ) .
The DIY era of regulating tech
Have we entered the everyone-for-themselves era of regulating tech and moderating content?
How do you make tech safe when it’s moving faster than government regulation? Even the public struggles to keep up with the pace of technological development.
Today, the answer in the US might be DIY.
In the absence of enforceable federal laws governing artificial intelligence and social media content moderation in the US, states are taking it upon themselves, individuals are building their own personalized content moderation systems, school districts are rolling out their own policies, and tech companies are introducing new features.
This week we explore how regulation and content moderation work in the liminal space where the harms of technology are well-documented, but the speed of that technology is outpacing our ability to rein it in.
// Growing concern
From algorithmic discrimination, to AI-generated deepfakes and disinformation, to social media content that harms children’s mental health, there is a growing consciousness worldwide about the problems caused by today’s technology.
- Last week, a report issued by the US State Department ([link removed] ) concluded that the most advanced AI systems could “pose an extinction-level threat to the human species.”
- Last month, Project Liberty Foundation released research ([link removed] ) finding that the majority of adults globally believe that social media companies bear “a great deal” of responsibility for making the internet safe.
- Last year, a poll done by Project Liberty Alliance member Issue One found that 87% of the US electorate ([link removed] ) want government action to combat the harms being caused by social media platforms.
// Tech is fast, passing laws is slow
Lawmakers are beginning to take action.
- Last week, the European Union adopted ([link removed] ) the world’s first set of laws to broadly regulate AI.
- In the US, there is no federal law regulating AI or social media content moderation, but there have been Congressional hearings on social media harms ([link removed] ) , the US House passed a bill last week that would ban TikTok ([link removed] ) , and the Supreme Court has been forced to weigh in on prickly cases ([link removed] ) balancing online safety with free speech rights.
While the US lags behind Europe on comprehensive tech regulation (the EU has led the world in passing laws ([link removed] ) to regulate big tech for years), Europe’s speed in passing laws has not translated into ease of enforcing them ([link removed] ) .
// The era of DIY regulation
In the absence of comprehensive laws and sound enforcement, there’s a patchwork of solutions emerging at every level.
- States: Filling the void left by inaction at the US federal level, US states are taking action. Nearly 200 bills aimed at regulating AI were introduced ([link removed] ) in state legislatures in 2023 (only 12 of which became law), and this year states across the US will debate more than 400 AI-related bills. To limit the harms caused by social media, US states have taken a variety of approaches, resulting in an inconsistent patchwork of directives, according to a report by Brookings ([link removed] ) last year.
- Companies: Pressured by lawmakers ([link removed] ) and whistleblowers ([link removed] ) to regulate their own platforms, tech companies are launching internal initiatives ([link removed] ) centered on trust & safety, conducting their own audits ([link removed] ) , and issuing voluntary commitments ([link removed] ) . Bluesky, the X alternative, launched Ozone last week ([link removed] ) , a tool that lets users create and run their own independent moderation services.
- Schools: Amid greater awareness of the harms of social media to students ([link removed] ) , school districts are taking matters into their own hands. Schools across the US are banning phones ([link removed] ) in classrooms. New York City Public Schools, the largest public school system in the US, has issued social media guidelines for students ([link removed] ) and for staff ([link removed] ) .
- Individuals: Individuals are leaving big tech platforms for shared server co-ops ([link removed] ) and “the fediverse ([link removed] ) ,” the decentralized network of social media alternatives. Teens are giving advice to fellow teens ([link removed] ) , creating processes to fact-check information online ([link removed] ) , and launching organizations like Log Off ([link removed] ) , a youth-led organization committed to helping young people build healthy relationships with social media and online platforms.
- Research & Civil Society: Last year, the Stanford Internet Observatory ([link removed] ) launched a pilot program to support researchers studying issues of online trust and safety. Dozens of organizations in Project Liberty’s Alliance are doing everything from educating parents ([link removed] ) to fighting disinformation ([link removed] ) to protecting elections ([link removed] ) to archiving the internet ([link removed] ) .
// From DIY to self-governance?
From legislatures to dinner tables across the US and around the world, activity and momentum are beginning to compound.
Can the decentralized efforts to regulate tech and safely moderate its content translate into new governance models?
New books and research outline a path to a more democratized and self-governed internet.
- University of Colorado professor Nathan Schneider released Governable Spaces: Democratic Design for Online Life ([link removed] ) , a roadmap for how to build democratic governance online.
- Northwestern Law professor Paul Gowder released his book, The Networked Leviathan: For Democratic Platforms ([link removed] ) , which argues that tech platforms like Facebook and X should be governed as democracies ([link removed] ) .
- Project Liberty Founder Frank McCourt released OUR BIGGEST FIGHT ([link removed] ) , his book on how we can transition to a web anchored in data privacy and data ownership.
Is it inevitable that the DIY era of regulating tech will translate into new laws, new norms, and new beliefs about the role of technology in our lives? Time will tell, but we’re optimistic.
Project Liberty in the news
Last week, Project Liberty Founder Frank McCourt released his first book: OUR BIGGEST FIGHT ([link removed] ) . In support of the book, he spoke with a variety of media outlets:
- America cannot get ‘blinded’ by TikTok and needs to look at the ‘bigger picture’: Frank McCourt provided analysis of the US’s latest efforts to ban TikTok. Mornings with Maria on Fox Business ([link removed] ) .
- 'Our personhood is now owned by someone else': How to reclaim dignity in the digital age, according to Frank McCourt from Project Liberty. MSNBC ([link removed] ) .
- Podcast: Frank McCourt joined Jennifer Strong from MIT Technology Review's podcast SHIFT to discuss his new book in front of a live audience. SHIFT ([link removed] ) .
Other notable headlines
// 📱 An article in The Atlantic ([link removed] ) argued that we need to end the phone-based childhood because the environment in which kids grow up today is hostile to human development.
// 🏛 This week, the US Supreme Court is hearing a case on how the government communicates with social media companies, according to an article in The Verge ([link removed] ) .
// 🤔 Social media’s unregulated evolution over the past decade holds lessons that apply directly to AI companies and technologies, according to an article in MIT Technology Review ([link removed] ) .
// 🚰 As the amount of available content grows with the use of AI, social media’s role as curator will become even more important. An article in The Atlantic ([link removed] ) proposed three solutions.
// 🦺 An article in Tech Policy Press ([link removed] ) identified strategies to reduce the harms from synthetic media.
// 🚫 An article in The Wall Street Journal ([link removed] ) highlighted how researchers are warning against data poisoning. By tampering with the data used to train AI models, hackers can spread misinformation and steal data.
// 🚸 An article in The Washington Post ([link removed] ) highlighted research from Pew, which found that almost half of teenagers think their parents get distracted by their phones.
// 🚚 An article in The Financial Times ([link removed] ) featured a story about how an Uber Eats delivery driver was sick of the algorithms that controlled his day, so he decided to fight back.
// 📹 An article in The New York Times ([link removed] ) explored why, in the face of a potential ban, the sale of TikTok would not be easy.
Partner news & opportunities
// Mothers Against Media Addiction rally
March 22nd at 10:30am ET
Mothers Against Media Addiction (MAMA) ([link removed] ) is hosting a rally and press conference in New York City in support of putting kids before big tech and pushing legislative efforts forward. Sign up here ([link removed] ) .
// Virtual event on deepfakes and synthetic media
March 27th at 1:00pm ET
All Tech is Human ([link removed] ) is hosting a virtual discussion with leaders on how deepfakes and synthetic media will impact society. Register here ([link removed] ) .
// Virtual event: AI and 2024 Global Elections
March 28th at 1:30pm ET
The Institute of Global Politics ([link removed] ) at Columbia University’s School of International and Public Affairs and Aspen Digital ([link removed] ) are hosting an afternoon of discussions examining how AI has already played a role in elections this year and what it means for the elections still ahead in 2024. Register here ([link removed] ) .
What did you think of today's newsletter?
We'd love to hear what you thought of today's newsletter. Reply to this email with:
- Feedback for how we can make this newsletter better
- Ideas for future editions
- A recommendation of someone we should interview
/ Project Liberty Foundation is advancing responsible development of the internet, designed and governed for the common good. /
Thank you for reading.
Facebook ([link removed] )
LinkedIn ([link removed] )
X (formerly Twitter) ([link removed] )
Instagram ([link removed] )
501 W 30th Street, Suite 40A,
New York, New York, 10001
Unsubscribe ([link removed] ) Manage Preferences ([link removed] )
© 2023 Project Liberty