Memo to Washington: AI Needs Your Full Attention ... Now!

 

by PETER STONE

“Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by — but typically operate quite differently from — the ways people use their nervous systems and bodies to sense, learn, reason, and take action.”

 

So begins the 2016 report of the One Hundred Year Study on Artificial Intelligence that I led.  It is particularly important to understand from this definition that AI is not any single thing, but rather a collection of many different technologies.  Specifically, GPT-4, the most recent and most powerful generative AI model released by OpenAI, is one of many existing AI-based systems, each with different capabilities, strengths, and weaknesses.

 

The report continues:

 

“Unlike in the movies, there is no race of superhuman robots on the horizon….  And while the potential to abuse AI technologies must be acknowledged and addressed, their greater potential is, among other things, to make driving safer, help children learn, and extend and enhance people’s lives.”

 

Though much has changed in the seven years since this report was released, I still stand by these words.  If you’ve spent any time interacting with ChatGPT (and if you haven’t, you must!), I suspect that you’ve been very impressed with its capabilities.  It and other similar systems can generate text and images that are amazingly realistic.  But even so, they do not come close to fully replicating human intelligence, let alone surpassing it.  As such, there is little risk that they will soon get out of control and pose an imminent “existential” threat to humankind — at least not anywhere near the degree that nuclear weapons and pandemics already do.

 


Nonetheless, I, along with many of my colleagues, recently signed an open letter that called for a public and verifiable pause by all developers of “AI systems more powerful than GPT-4.”

 

While I did not draft the letter myself, and do not believe that such a global, verifiable pause is remotely realistic, I signed in order to call attention to the potential for bad actors (i.e., human beings) to abuse AI technologies and to urge that efforts to understand and control the “imminent” threats be accelerated.

 

In my opinion, the development and deployment of this one specific type of AI technology — large generative models such as GPT-4 — is outpacing our ability to understand their strengths and limitations.  A flurry of innovation is still uncovering how these models can be used (and misused) and what their likely social, economic, and political impacts will be.  We are all readjusting to a world in which realistic-sounding text, and realistic-looking images and videos, may have been created by a machine.  This upends long-held assumptions about our world, calling into question deeply ingrained notions such as “seeing is believing.”

 

As a result, we need to speed up and increase investments in research into understanding current models and how they can be constrained without losing their abilities.  To be clear, I do not at all advocate slowing down technological progress on AI.  The opportunity costs are too great.  But I implore everyone in a position to do so to urgently speed up societal responses, or “guardrails.”

 


One way to appreciate the urgency is to reflect back on the rollout of other disruptive technologies.  The Model T Ford was introduced in 1908, and it took more than 50 years to get to the point of 100 million automobiles in the world.  During those decades, we, as a society, gradually built up the infrastructure to support them and make them (relatively) safe, including road networks, parking structures, insurance, seat belts, air bags, traffic signals, and all sorts of other regulations and traffic laws.  Today, most people would agree that the benefits of automobiles outweigh their (not insignificant) risks and harms.  The same goes for things like electricity (which has caused many fires), airplanes (which have been used as lethal weapons), and many other technologies that have gradually become ubiquitous and shaped modern society.

 

ChatGPT reached 100 million users in a matter of weeks, rather than years or decades.  If the pace of release of new, more powerful LLMs (Large Language Models) is going to continue as it has (or even accelerate), then we urgently need to speed up efforts to understand their implications (both good and bad) and craft appropriate, measured responses.

 

An essential role for universities in this effort is to rapidly increase the size of the AI-literate workforce, not only to satisfy the demand of private industry, but more importantly to help infuse governments and policy bodies with people trained in the details of AI.  I help lead multiple efforts at The University of Texas at Austin towards this end, including the Computer Science department’s new online Master’s in AI and a university-wide interdisciplinary grand challenge project on defining, evaluating, and building “Good Systems.”

 

But these efforts are only one piece of the puzzle.  I thus call on everyone in a position to influence our society’s response to this challenging moment in technological progress to engage deeply and help make sure we get it right.  Policymakers need to take all possible actions to minimize the chances that the harms will outweigh the benefits, both for any group of individuals and for society as a whole; and of course, we must always keep our eyes open for any developments that could lead to loss of control.

 

But most importantly, we need to fully support progress in development and understanding of AI technologies that have the potential to improve our nation and the world in so many ways!

 

Dr. Peter Stone holds the Truchard Foundation Chair in Computer Science at the University of Texas at Austin. He is Associate Chair of the Computer Science Department, as well as Director of Texas Robotics.

 

--###--

 
