Last week, the Federal Communications Commission banned robocalls that use voices generated by artificial intelligence (AI). The action followed a series of calls placed in New Hampshire that used a replica of President Joe Biden's voice to dissuade people from voting in the state's primary. The incident in New Hampshire is one of the most egregious uses of AI-generated deepfakes to influence an election. Unfortunately, this isn't new. Deepfake technology has made headlines since at least 2019, when the tools were used to target then-Speaker of the House Nancy Pelosi, Rep. Alexandria Ocasio-Cortez of New York, and former President Donald Trump. AI has become so pervasive, it's even found its way into comedy sketches. But it's not all in good fun. Experts are concerned about how AI will affect elections, creating a situation where false information is believed to be true and true information is believed to be false. The possibilities scream for government regulation, but the challenge is to regulate in a way that doesn't stunt a sector with tremendous potential for human advancement. With the presidential election looming, the clock is ticking.

—Melissa Amour, Managing Editor

AI's Threat to Democracy

Just before a recent election in Slovakia, a fake audio recording in which a candidate boasted about rigging the election went viral. The incident raised alarms in the US, where it's not difficult to envision a similar scenario playing out in an election year. AI is already being used to craft highly targeted and believable phishing campaigns, and it can also be deployed in cyberattacks that compromise election security. None of these threats are new; we saw similar activity in 2016 and 2020. But advancements in AI since those elections, and the widespread availability of the technology, can significantly aid bad actors in 2024.
The threats to democracy posed by AI are ubiquitous, and the government must implement safeguards against uses of AI that could intentionally deceive the public. The challenge is whether it can restrain the technology effectively without stifling it completely. It's a tall order. Let's take a look at where things stand.

What You Should Know

Proposed AI legislation thus far largely addresses generative AI and protecting the public from manipulated content as it relates to personal privacy. To that end, a bipartisan group in Congress introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act last month.
But hopes are not high that Congress will pass any AI-related legislation before the election, so states and cities are taking individual regulatory action. As of September, state legislatures had introduced 191 AI-related bills in 2023, a 440% increase over 2022.
The White House’s executive action on AI has been the most comprehensive of all, and the administration has released a blueprint for an AI Bill of Rights as well.
But it's a fine line. Lawmakers must craft regulation that doesn't strangle innovation in their zeal to protect democracy, jobs, and humanity itself. In short, we don't want to lose our competitive edge through over-regulation, as Europe has, and allow competitors like China to take the lead.

How We Got Here

Our hopes and fears for the future of AI are fundamentally shaped by our recent experience with the rise of social media. It is an obvious analogy: similarly new, destabilizing, enduring, and to some extent inevitable. Many lawmakers on both sides of the aisle believe insufficient regulation of social media led to unintended consequences detrimental to democracy and to the general well-being of individuals.
On the other hand, the US is the undisputed leader in social media. The largest social media companies are headquartered in the US, creating jobs and generating economic growth. As a result, the US, as the sector's chief regulator, holds somewhat more sway over the industry globally than it otherwise would.
Tech innovation is at the forefront of modern American economic success.
Government leaders are determined to generate safer outcomes with AI through greater engagement with tech companies, rather than relying on them to police themselves, which has produced, at best, a spotty record. But as our experience with social media shows, regulation can, and should, only go so far, lest it diminish economic opportunity and US influence internationally.

What People Are Saying

One thing is certain: AI will change democracy, just as previous technologies, like social media, the internet, and television, did before it. The challenge is how to regulate it so that those changes are largely advantageous, without stifling the development of a groundbreaking emergent technology. Some people are worried that proposed regulations could go too far.
Meanwhile others’ fears about AI’s disruptive or even apocalyptic potential are driving calls for stringent measures.
In the end, a balanced approach that minimizes harm but encourages innovation seems ideal. After all, AI could lead to overwhelming advances in human knowledge and ingenuity.
Hey Topline readers, you know the drill. We want to hear your reactions to today's stories, and we'll include some of your replies in this space in our next issue of The Topline. Click here to share your take, and don't forget to include your name and state. We're looking forward to hearing from you!