From xxxxxx <[email protected]>
Subject Science Sunday: There’s Only One Way To Control AI -- Nationalization
Date August 21, 2023 6:40 AM
  Links have been removed from this email. Learn more in the FAQ.

SCIENCE SUNDAY: THERE’S ONLY ONE WAY TO CONTROL AI --
NATIONALIZATION  


 

Charles Jennings
August 20, 2023
Politico



_ AI’s infinite potential — and infinite risk — requires
federal ownership. _

POLITICO illustration / Photos by iStock.

 

Nine years ago, in a commercial AI lab affiliated with Caltech, I
witnessed something extraordinary.

My colleague Andrej Szenasy was wrapping up a long day’s work
training NeuralEye, an AI initially developed for the Mars Rover
program, and I was a few cubicles away, plowing through NeuralEye’s
test data. “Hey, check this out!” he shouted.

Our lab’s mission was to train NeuralEye to see as humans do, with
the ability to _recognize_ things, not just record them as a camera
does. NeuralEye was built originally to discern different soil types
on Mars, but we were teaching it to identify Earth’s inhabitants:
animals, plants and individual humans. We believed AI could greatly
improve face recognition, so that it could be used in cybersecurity,
replacing passwords.

The first step in teaching NeuralEye to identify people was to get it
to match various photos of a single person’s face. Typically, one
photo would reside in NeuralEye’s training dataset of 14,000 faces;
another — a different photo of the same person — would serve as
the “prompt.” When NeuralEye successfully matched these two photos
out of the thousands in its dataset, it got the digital equivalent of
a doggie treat. In AI, this method is known as reinforcement learning,
and with NeuralEye, it was working.
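
To make the mechanics a little less abstract: at bottom, this kind of face
matching reduces each photo to a list of numbers (an "embedding") and ranks
every face in the dataset by how close its numbers sit to the prompt's. The
short Python sketch below illustrates only that idea; it is not NeuralEye's
code, and the toy encoder, the sample gallery names and the random stand-in
photos are assumptions made purely for illustration.

    import numpy as np

    def embed(photo: np.ndarray) -> np.ndarray:
        # Stand-in for a learned face encoder. A real system would run the
        # photo through a trained neural network; this toy version just
        # flattens the pixels and scales them to unit length.
        v = photo.astype(float).ravel()
        return v / (np.linalg.norm(v) + 1e-9)

    def rank_matches(prompt_photo, gallery):
        # Score every stored face by cosine similarity to the prompt photo
        # and return the names ranked from best match to worst.
        p = embed(prompt_photo)
        scores = [(name, float(p @ embed(img))) for name, img in gallery.items()]
        return sorted(scores, key=lambda pair: pair[1], reverse=True)

    # Toy usage, with random arrays standing in for real photographs:
    rng = np.random.default_rng(0)
    gallery = {name: rng.random((64, 64)) for name in ("Szenasy", "Smith", "Jones")}
    prompt = rng.random((64, 64))
    print(rank_matches(prompt, gallery))

    # Training, conceptually: when the top-ranked name is the right person,
    # the system receives a positive reward signal -- the "doggie treat" --
    # and the encoder's parameters are nudged so correct matches score higher.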

That night in the lab, for fun, Szenasy had prompted NeuralEye with a
photo of his son, Zachie. Szenasy’s face was in NeuralEye’s
dataset; Zachie’s wasn’t. Zachie, who has Down syndrome, was a
sweet 8-year-old. Round face, thick glasses, mop of black hair. Dad
was tall and thin, no glasses, blond with a receding hairline. If
there was a physical resemblance between them, I couldn’t see it.

Szenasy sat me in front of his computer and again prompted NeuralEye
with a photo of Zachie’s face. NeuralEye spun through its cache of
stored faces looking for Zachie — and up popped a photo of Szenasy.
Without any specific instruction, NeuralEye had somehow picked up a
faint family resemblance. Out of those 14,000 faces, it selected
Szenasy’s face as the third closest match with Zachie’s.

The next morning I phoned the AI engineer who’d written
NeuralEye’s algorithm while at the Jet Propulsion Lab, home of the
Mars Rover program. I asked him how NeuralEye could have seen a
connection between Zachie and his father. He waxed philosophical for a
few minutes, and then, when pressed, admitted he had no clue.

That’s the thing about AI: Not even the engineers who build this
stuff know exactly how it works.

This Zachie episode took place in 2014, a time in AI that now seems
prehistoric. Training datasets then had records in the thousands, not
hundreds of millions, and large language models like GPT were just a
gleam in Sam Altman’s eye. Today, AIs are writing novels, passing
the bar exam, piloting warfighter drones. According to a recent
University of Texas study widely reported on cable news, an AI in
Austin is effectively reading minds: After extensive brain scanning and 16
hours of one-on-one training with a subject, it can decode patterns of
brain activity and suggest what that person is thinking with surprising
accuracy. But in those halcyon AI days nearly a decade ago, we in our
small lab were amazed that NeuralEye could do something as basic as
spot a link between Szenasy and his son.

While the best AI scientists obviously know a great deal about AI,
certain aspects of today’s _thinking machines_ are beyond
anyone’s understanding. Scientists cleverly invented the term
“black box” to describe the core of an AI’s brain, to avoid
having to explain what’s going on inside it. There’s an element of
uncertainty — even _unknowability_ — in AI’s most powerful
applications. This uncertainty grows as AIs get faster, smarter and
more interconnected.

The AI threat is not Hollywood-style killer robots; it’s AIs so
fast, smart and efficient that their behavior becomes dangerously
unpredictable. As I used to tell potential tech investors, “The one
thing we know for certain about AIs is that they will surprise us.”

When an AI pulls a rabbit out of its hat unexpectedly, as NeuralEye
did on a small scale with Zachie, it raises the specter of _runaway
AI_ — the notion that AI will move beyond human control. Runaway
AIs could cause sudden changes in power generation, food and water
supply, world financial markets, public health and geopolitics. There
is no end to the damage AIs could do if they were to leap ahead of us
and start making their own arbitrary decisions — perhaps with nudges
from bad actors trying to use AI against us.

Yet AI risk is only half the story. My years of work in AI have
convinced me a huge AI dividend awaits if we can somehow muster the
political will to align AI with humanity’s best interests.

With so much at stake, it’s time we in the United States got serious
about AI policy. We need garden-variety federal regulation, sure, but
also new models of AI leadership and governance. And we need to
consider an idea that would have been unthinkable a year ago.

We need to nationalize key parts of AI.

AS ANYONE who’s worked with them knows, AIs make stupid mistakes.
They lack common sense, come up with weird “hallucinations”
(false, random claims) and are prone to “overlearning” — seeing
everything as a nail because they were trained as a hammer.

But AIs also see patterns we don’t. They draw inferences from
Himalayan mountains of data while our brains crawl around in
molehills. Generative AI (notably ChatGPT)
is all the rage, but if you want to better understand how AI evolves
— and appreciate the rise of AI beneath all the current hype —
check out the past decade in AI vision.

Since 2014, image-recognition rates have climbed faster than AI stock
prices. When a computer identifies your face, it’s AI. When
self-driving cars navigate roadways, they “see” with AI. AIs now
read x-rays with greater precision than a radiologist and spot cancer
growths no human doctor can detect: In one clinical trial, AI helped
detect 20 percent more cases of breast cancer
than
flesh-and-blood radiologists. If you had told me in 2014 that AI
vision would be doing such things within a decade, I’d have
suggested you stop watching so much Spielberg.

AI of all kinds is now
advancing on a trajectory similar to AI vision. From agriculture to
education, medicine to transportation, entertainment to finance — AI
is penetrating every nook and cranny of American life. We live in the
era of mass AI _electrification_, except this time the electricity
itself keeps evolving.

There is much about AI we don’t know, but AI experts do agree on one
thing: The pace of AI’s disruption of society will never be this
slow again. Unfortunately, one branch of AI is lagging: the field
known as AI safety.

 

AI safety addresses a wide variety of potential AI risks: accidents,
questions of ethics, cybersecurity, military security, misinformation,
election disruption and more. Despite the efforts of a growing number
of prominent researchers and considerable investment by AI companies,
AI safety proceeds far more slowly than AI itself. It’s a rowboat
chasing a jet ski.

If there is one thing that everyone should know about AIs, it’s
this: They move fast. OpenAI’s ChatGPT signed up more than 100 million
users within two months of launch. AIs run on a stack of hardware and
software resources whose processing speed is constantly
accelerating. The datasets used to train AIs are growing and
improving. As a result, AIs today process information in volumes and
at rates no human brain can comprehend. And they are about to get
a potent steroidal injection called quantum computing, which will fuel a
major new round of AI acceleration.

The rise of AI cannot be attributed solely to better computing
resources. AIs are competent in unsupervised learning — no humans
needed. Their learning curve is like a weird M.C. Escher staircase
that continually goes up. They solve problems in ways that boggle
human experts. They don’t yet have the unique adaptability of the
human mind, nor our signature cultural and social skills. But to think
that AIs will not quickly evolve specialized forms of intelligence far
superior to our own strikes me as incredibly naive.

Still, resistance is not futile. Not yet.

IN 2018, I wrote a book on the new “lightspeed learners,”
as
I called them: the world’s smartest, fastest-evolving AIs. My
thesis: _AI is going to be huge — and the U.S. needs a new national
AI plan to harness it._ The final chapters of the book presented an
urgent set of AI policy recommendations for America. The book sold
well to libraries and universities, and Rowman & Littlefield, the
publisher, just issued a new 2023 paperback edition. But in terms of
impacting American AI policy, it was a pebble in the ocean.

I then logged three years as a senior fellow at the Atlantic Council,
a venerable D.C. think tank, where I consulted on U.S. AI policy. My
takeaway: To call our current AI policy a can of worms would be an
insult to annelids.

Webster’s should issue a new definition of futility: _attempting to
explain AI to politicians._ I’ve tried. Members of Congress would
conflate AI with social media — and those were the tech-savvy ones.
More than one politician asked me why we couldn’t just unplug
wayward AIs, and a red-state congressperson suggested AI was a fad. He
also insisted that despite testimony given to the House Transportation
Committee, “those pointy-headed SOBs will never back an 18-wheeler
into a loading dock. No way.” To be fair, this was six years ago.

But transparency in AI is overrated. Enact whatever laws you like,
throw tons of money at AI transparency regulation — and we still
won’t have any idea how a specific AI works. I had unfettered access
to every element of the NeuralEye system — algorithm, application
code, training data, test data, 70-page patents, expert analyses —
but to this day I have no concept of how NeuralEye matched Zachie with
his father. I’m sure there’s a logical explanation somewhere in
the cosmos, but the calculus is simply too big for my puny human
brain. I’m all for corporations disclosing data collection and
AI-use practices, but technical AI transparency is a mirage.

The issue of congressionally mandated AI safeguards is more nuanced.
Ideally, each AI would come with guardrails to protect humans against
its potential excesses. But given that no one understands precisely
how AI works, that AIs often surprise us and that AI grows and evolves
at lightspeed — what guardrails could a bickering Congress construct
to protect us? How could its laws and regulations change fast enough
to keep up with AI?

The U.S. is the world’s AI leader, by a lightyear or two. Most of
our AI is controlled by Big Tech: Microsoft/OpenAI, Alphabet/Google,
Meta, Amazon, Nvidia, Tesla. Each is a hypercompetitive business with
tremendous resources, including the highest concentration of AI talent
on the planet. These companies have grown rich and powerful by
building tech largely free of U.S. regulatory constraints, in a
marketplace we American citizens constructed for them. They have all
benefited greatly from seven decades of world-class AI research funded
by American taxpayers. Big Tech itself has skin in the AI game.

But so do we.

AI is not the kind of tech that can be invented in a Harvard dorm or a
startup garage in Silicon Valley. OpenAI spends _half a billion_
dollars on Nvidia infrastructure for each new AI model it
launches. It has taken years of scientific study, lab research and
application development — not to mention a massive investment of
government dollars — to construct the AI foundation Big Tech now
controls. Big Tech has leveraged this foundation to achieve company
valuations in the trillions. Keep this in mind as I offer a modest
proposal:

We need a new governing body for AI in America — one that could
wield the powers of the state to steer the technology toward a human
mitzvah, rather than a human disaster. Call it the “Humane AI
Commission.”

Luckily, history offers a model.

IN 1947, President Harry Truman yanked control of nuclear weapons away
from the military and handed it to five American civilians — the
newly formed Atomic Energy Commission (AEC). The AEC operated inside
government, but well removed from politics. As Christopher
Nolan’s _Oppenheimer_ made clear, the AEC was not without its
flaws. But it kept the world free of nuclear war during the most
dangerous decades of the Cold War. Nolan himself has likened AI to
the nuclear threat in recent interviews,
while cautioning that AI might be even harder to control.

The AEC model is not a perfect fit for AI — it was too slow and
static, for one thing — but it is instructive. The AEC took ownership of
all nuclear reactors, putting it in a position of ultimate control.
The federal government’s role in nationalizing nuclear weapons was
that of owner, not operator — it outsourced most of the work. The
military possessed finished bombs, Westinghouse built and operated
nuclear energy plants, but the AEC controlled the core and had all the
leverage. The AEC also owned and operated the best nuclear research
labs on the planet, including Los Alamos, Oak Ridge and Livermore.
Historically, and legally, the Atomic Energy Commission provides a
useful precedent for when America creates technology that could
potentially end life as we know it — a category into which AI
clearly falls.

The case to nationalize the “nuclear reactors” of AI — the
world’s most advanced AI models — hinges on this question: Who do
we want to control AI’s nuclear codes? Big Tech CEOs answering to a
few billionaire shareholders, or the government of the United States,
answering to its citizens?

Let me be the first to acknowledge that a federal program wresting
control of AI’s “nuclear reactors” from Microsoft, Google, _et
al_., would be a monumental — and painful — undertaking. But all
our other options are worse.

Let’s start with the AI pause option, a position advanced recently
by hundreds of first-class AI experts in a signed open letter.
Their idea is to halt major AI development temporarily so we can all
take a deep breath. Get our arms around AI, so to speak. The letter
was good theater, little else. If the U.S. were to freeze AI
development (assuming that’s even possible), China would be the main
beneficiary. The Chinese Communist Party has already used AI to spy on
Uyghurs and dissenters, and the People’s Liberation Army is all-in on AI.
China is also perhaps the world leader in AI education, starting with its
youngest students, and an AI called CityBrain runs all traffic and
emergency response systems in Hangzhou, a city of 8 million. A
U.S.-China treaty on AI could be a major step toward a world with safer,
more humane AI. But as someone who lived in China for a few years, I fear
the only thing worse than a world controlled by runaway AI would be a
global AI infrastructure run by President Xi Jinping. (Xi has stated
publicly, more than once, that world AI dominance is one of his
personal goals for China).

Next, we have the let-the-free-market-decide option. To be clear, this
is what propelled America into its position as the world’s AI
leader. What our Big Tech companies have done with AI is astounding to
other nations. But as experts warn of potential societal threats like
runaway AI, allowing Big Tech to operate AI unfettered would be like
Truman entrusting nuclear bombs to Westinghouse.

A third option is regulation of AI by current agencies of the U.S.
government. As a West Coast techie who has worked extensively in D.C.,
I can only say: Good luck with that. There _are_ practical
federal regulatory actions that should be taken immediately: stronger
AI export controls; new AI development reporting requirements for
corporations; deepfake watermarking rules. But run-of-the-mill federal
regulation is no match for runaway AI, nor the bad actors who will try
to use AI against us. Many types of AI regulation — including the
complex FDA-style approval models often advocated by Big Tech —
would make it much harder for small companies to put AI to work.
Except for very specific “rifle shots,” as they call narrow
regulatory bills in Congress, federal AI regulation won’t work.

What remains is the Truman option — a bold stroke of executive
leadership. Here’s one scenario:

Within the first 100 days after the inauguration that follows the 2024
election, the president
announces a new, national AI emergency plan. The president explains
that the goal of this plan is global AI leadership for generations to
come. Benevolent, peaceful leadership. Leadership that guides AI’s
rise as a boon to humanity. Leadership that defends the U.S. against
bad actors using AI, and that installs human controls in the DNA of
the most powerful AIs. Yes, that will require the federal government
to take control of certain critical domestic AI resources, just as FDR
temporarily nationalized parts of General Motors, Kaiser Shipyards and
other manufacturing giants to fuel America’s victory in WWII.

The new Humane AI Commission would be run by a diverse team of AI
experts, and strive to be as apolitical as possible. Fortunately, AI
policy in America has not yet been hyper-politicized. Republicans want
a strong U.S. AI policy vis-à-vis China, and Democrats want racially
unbiased AIs that fight climate change and create new jobs. Both
agendas can be served, without contradiction, by an aggressive,
capable new national AI plan — with the HAIC at the center.

Our best hope is not to suppress AI, but to harness it in ways that
align with humanity’s interests. The only entity on earth with both
the resources and values necessary to harness AI effectively and
humanely is the government of the United States. Managing AI on a
global scale could well be America’s greatest scientific and
diplomatic challenge, ever. The Manhattan Project, cubed.

An undertaking this important should be subject to the democratic
process, flawed though it may be. An HAIC would place the future of AI
— and with it, the future of humanity — into the hands of the
public. But whatever happens, every concerned citizen should learn
more about AI, because we American voters are about to have some
crucial decisions to make. As Bette Davis said in _All About Eve_,
working from a subtle, Oscar-winning script I believe not even a
future AI could write: “Fasten your seat belts. It’s going to be a
bumpy night.”

_Charles Jennings is the former CEO of an AI company partnered with
Caltech/JPL. His 2019 book, Artificial Intelligence: Rise of the
Lightspeed Learners (Rowman & Littlefield), was reissued this year in
a paperback edition._

_POLITICO is the global authority on the intersection of politics,
policy, and power. It is the most robust news operation and
information service in the world specializing in politics and policy,
which informs the most influential audience in the world with insight,
edge, and authority. Founded in 2007, POLITICO has grown to a team of
700 working across North America, more than half of whom are editorial
staff._


CHARLES HENRY TURNER’S INSIGHTS INTO ANIMAL BEHAVIOR WERE A CENTURY
AHEAD OF THEIR TIME
Researchers are rediscovering the forgotten legacy of a pioneering
Black scientist who conducted trailblazing research on the cognitive
traits of bees, spiders and more
By Alla Katsnelson 
KNOWABLE MAGAZINE
08.02.2023

* artificial intelligence
* nationalization
* democracy
* government
* Private Sector
* Congress
* Computers


 

 

 

INTERPRET THE WORLD AND CHANGE IT

 

 

Submit via web

Submit via email
Frequently asked questions

Manage subscription

Visit xxxxxx.org

Twitter

Facebook

 





To unsubscribe, click the following link:
[link removed]

Message Analysis

  • Sender: Portside
  • Political Party: n/a
  • Country: United States
  • State/Locality: n/a
  • Office: n/a
  • Email Providers:
    • L-Soft LISTSERV