
The Role of Congress in Regulating Artificial Intelligence

 

by JAY OBERNOLTE

Over the next several years, Congress will grapple with critical questions concerning the regulation of new and emerging technologies. While technology has brought monumental changes to our country, it has done so in a largely unregulated environment. Now, as America reflects on the technological revolution that brought us the iPhone and the internet and looks ahead to a second revolution in artificial intelligence, it is an opportune time to address the changes and challenges of the past 20 years and to set our nation and our society on a path toward a successful transition into the next era of humanity’s coexistence with technology.

 

I am proud to be a part of America’s thriving technology community myself. I earned a Bachelor’s degree in computer engineering at the California Institute of Technology (Caltech) and a Master’s in artificial intelligence at UCLA before using my programming skills to start a video game development company, which I’ve managed for the last 30 years. Today, I am one of only four computer scientists in the U.S. House of Representatives.

 


 

When I was doing research in AI 30 years ago, the discipline was at the beginning of a renaissance. The field of artificial intelligence had its genesis in 1950 with Alan Turing’s paper “Computing Machinery and Intelligence,” followed five years later by Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist program. By the 1980s, the growing accessibility of computers, increased computing power, and an expanding algorithmic toolkit had reignited AI development. Neural networks and machine learning algorithms were developed that allow computers to learn through experience, much as human brains do. Expert systems, introduced during this period, were the “low-tech” precursors to AI chatbots such as ChatGPT.

 

Today, many of the principles of AI that were employed during my graduate school days remain key building blocks of ongoing research and development. However, the significant increases in available computing power since then have revolutionized the capabilities of machine learning algorithms and the tasks they can accomplish.

 

Part of the brilliance of America’s technology industry over these many years is that it has been allowed to flourish in a largely unregulated environment. This has given our nation the flexibility to remain agile and on the cutting edge of modern innovation, without the interference of burdensome regulations that at many stages could have shut the industry down for good. It has catalyzed our leadership in the field over continental Europe, Great Britain, and most of Asia.

 

Today, however, as we stand on what seems to be a precipice where AI has reached a stage of development that could radically alter human society, calls for regulation in the United States and across the globe have reached a fever pitch from both government and academia. As Congress begins to tackle the issue of artificial intelligence, it is vital that we maintain the agility of our technology and strike a careful balance between protecting consumers and protecting innovation.

 

It is important to remember that AI is fundamentally still just software, even at its advanced stage of development. Despite how remarkably human they may seem, these systems can still only do what we tell them to do, and they shouldn’t be viewed any differently than the apps we have on our phones and the programs we run on our computers. ChatGPT is an amazing example of what AI can do. It can be an effective educational tool, a skilled coding assistant, and an efficient resource for completing many tasks. However, the technology is still far from perfected, and we are learning more every day about how well it really works and how it might be misused.

 


 

While AI poses serious risks, it is not the scary monster of science fiction movies. Rather than an army of robots with red laser eyes rising up to take over the world, the real risks of AI include hazards such as deep fakes, unlimited government surveillance, and manipulation of public opinion by malign foreign actors. Also, like any technological revolution, AI will undoubtedly cause major societal and economic shifts in the way we work and the way we spend our free time on a scale we haven’t experienced since the industrial revolution.

 

We are also likely to experience significant social upheaval as a result of AI. When the industrial revolution created automated textile mills, high-quality cloth became widely available to the middle class for the first time in human history. It changed the way the average household operated, but it also put out of work the workers previously employed to make fabric by hand. The Luddites in England burned those textile mills in protest of the new technology, but viewed through the lens of history, it is indisputable that workers, families, and communities across the world greatly benefited from the new technology despite the disruption it engendered.

 

The advent of artificial intelligence tools that can be used in many workplace settings will likely catalyze a similar shift in our workforce and in our society. This time, however, entry- and mid-level white-collar jobs will be the likely targets of technological displacement. People will write wills using online AI software instead of visiting a lawyer’s office, and healthcare outcomes will be improved by automated AI screenings that can detect harmful tumors earlier and with greater accuracy than the human eye. Although the internet may have revolutionized our access to information, AI could someday magnify that access tenfold.

 

These are all possibilities that any governing body seeking to regulate AI must take into careful consideration. In the European Union, legislators have moved too aggressively, abdicating their decision-making on AI to a bureaucracy and passing data privacy regulations that stifle innovation by focusing on mechanisms instead of outcomes. Some countries in Europe have gone even further, arbitrarily halting the development of artificial intelligence within their borders and allowing the rest of the world to progress while they lag behind. This tactic would be particularly harmful to the United States, where China is already seeking to use artificial intelligence to manipulate public opinion, attack our cyber defenses, and profile American citizens.


 

The United States must instill the values of freedom and entrepreneurship in the use of artificial intelligence as this technology becomes commonplace globally, instead of allowing other countries to propagate their alternative values of limited personal autonomy over data, an Orwellian surveillance state, government control of technology, and the anti-democratization of knowledge.

 

Over the next several months, the House Energy and Commerce Committee will continue its development of major new federal data privacy legislation that will take an important first step towards mitigating many of the near-term risk factors posed by artificial intelligence. These policies will enact critically important protections that, if implemented correctly, will shield the personal digital data that could fuel malicious AI by putting reasonable guardrails around its use by industry and, equally importantly, by government.

 

Congress will play a critical role in addressing the vast and sweeping changes artificial intelligence will bring to the world in which we live. We must strike an appropriate balance between safeguarding against the dangers of AI and enabling its ethical development and deployment. If we accomplish this, the potential benefits of AI promise nothing less than an explosion of human productivity and the realization of long-held human goals such as universal education and the eradication of poverty. Guarding against the risks while still enabling human society to reap the benefits of AI will be the work of our generation.

 

Jay Obernolte represents the 23rd District of California in the U.S. House of Representatives. An entrepreneur and computer scientist who has spent the past 30 years running his own video game development company, he is the only member of Congress with a graduate degree in artificial intelligence.

