SUNDAY SCIENCE: A.I. PIONEERS CALL FOR PROTECTIONS AGAINST
‘CATASTROPHIC RISKS’
Meaghan Tobin
September 16, 2024
New York Times
_Scientists from the United States, China and other nations called for an international authority to oversee artificial intelligence._
From left: Stuart Russell, Andrew Yao, Yoshua Bengio and Ya-Qin Zhang are among the influential A.I. scientists who called for international collaboration to prevent serious risks posed by the technology. Massimo Pistore
Scientists who helped pioneer artificial intelligence
are
warning that countries must create a global system of oversight to
check the potentially grave risks posed by the fast-developing
technology.
The release of ChatGPT and a string of similar services that can
create text and images on command has shown how A.I. is advancing in
powerful ways. The race to commercialize the technology has quickly
brought it from the fringes of science to smartphones, cars and
classrooms, and governments from Washington to Beijing have been
forced to figure out how to regulate and harness it.
In a statement on Monday, a group of influential A.I. scientists
raised concerns that the technology they helped build could cause
serious harm. They warned that A.I. technology could, within a matter
of years, overtake the capabilities of its makers and that “loss of
human control or malicious use of these A.I. systems could lead to
catastrophic outcomes for all of humanity.”
If A.I. systems anywhere in the world were to develop these abilities
today, there is no plan for how to rein them in, said Gillian
Hadfield, a legal scholar and professor of computer science and
government at Johns Hopkins University.
“If we had some sort of catastrophe six months from now, if we do
detect there are models that are starting to autonomously
self-improve, who are you going to call?” Dr. Hadfield said.
Geoffrey Hinton, a pioneering scientist who spent a decade at Google,
joined the signatories remotely. Chloe Ellingson for The New York
Times
From Sept. 5 to 8, Dr. Hadfield joined scientists from around the world in
Venice to talk about such a plan. It was the third meeting of the
International Dialogues on A.I. Safety, organized by the Safe AI
Forum, a project of a nonprofit research group in the United States
called Far.AI.
Governments need to know what is going on at the research labs and
companies working on A.I. systems in their countries, the group said
in its statement. And they need a way to communicate about potential
risks that does not require companies or researchers to share
proprietary information with competitors.
The group proposed that countries set up A.I. safety authorities to
register the A.I. systems within their borders. Those authorities
would then work together to agree on a set of red lines and warning
signs, such as if an A.I. system could copy itself or intentionally
deceive its creators. This would all be coordinated by an
international body.
Scientists from the United States, China, Britain, Singapore, Canada
and elsewhere signed the statement.
Among the signatories was Yoshua Bengio, whose work is so often cited
that he is called one of the godfathers of the field. There was Andrew
Yao, whose course at Tsinghua University in Beijing has minted the
founders of many of China’s top tech companies. Geoffrey Hinton,
a pioneering scientist who spent a decade at Google, participated
remotely. All three are winners of the Turing Award
[[link removed]],
the equivalent of the Nobel Prize for computing.
The group also included scientists from several of China’s leading
A.I. research institutions, some of which are state-funded and advise
the government. A few former government officials joined, including Fu
Ying, who had been a Chinese foreign ministry official and diplomat,
and Mary Robinson, the former president of Ireland. Earlier this year,
the group met in Beijing, where they briefed senior Chinese government
officials on their discussion.
Their latest gathering in Venice took place at a building owned by the
billionaire philanthropist Nicolas Berggruen. The president of the
Berggruen Institute think tank, Dawn Nakagawa, participated in the
meeting and signed the statement released on Monday.
The meetings are a rare venue for engagement between Chinese and
Western scientists at a time when the United States and China are
locked in a tense competition for technological primacy.
In recent months, Chinese companies have unveiled technology that
rivals the leading American A.I. systems.
Last October, President Biden signed an executive order that required
companies to report to the federal government about the risks that
their A.I. systems could pose. Doug Mills/The New York Times
Government officials in both China and the United States have made
artificial intelligence a priority in the past year. In July,
a Chinese Communist Party conclave that takes place every five years called for a system to regulate A.I. safety. Last week, an influential technical standards group in China published an A.I. safety framework.
Last October, President Biden signed an executive order
that
required companies to report to the federal government about the risks
that their A.I. systems could pose, like their ability to create
weapons of mass destruction or potential to be used by terrorists.
President Biden and China’s leader, Xi Jinping, agreed when they met
last year that officials from both countries should hold talks on A.I.
safety. The first took place in Geneva
in
May.
In a broader government initiative, representatives from 28
countries signed a declaration
in
Britain last November, agreeing to cooperate on evaluating the risks
of artificial intelligence. They met again in Seoul in May. But these
gatherings have stopped short of setting specific policy goals.
Distrust between the United States and China adds to the difficulty of
achieving alignment.
“Both countries are hugely suspicious of each other’s
intentions,” said Matt Sheehan, a fellow at the Carnegie Endowment
for International Peace, who was not part of the dialogue.
“They’re worried that if they pump the brakes because of safety
concerns, that will allow the other to zoom ahead,” Mr. Sheehan
said. “That suspicion is just going to be baked in.”
The scientists who met in Venice this month said their conversations
were important because scientific exchange is shrinking amid the
competition between the two geopolitical superpowers.
In an interview, Dr. Bengio, one of the founding members of the group,
cited talks between American and Soviet scientists
at
the height of the Cold War that helped bring about coordination to
avert nuclear catastrophe. In both cases, the scientists involved felt
an obligation to help close the Pandora’s box opened by their
research.
Technology is changing so quickly that it is difficult for individual
companies and governments to decide how to approach it, and
collaboration is crucial, said Fu Hongyu, the director of A.I.
governance at Alibaba’s research institute, AliResearch, who did not
participate in the dialogue.
“It’s not like regulating a mature technology,” Mr. Fu said.
“Nobody knows what the future of A.I. looks like.”
RELATED:
Federal AI Legislation: An Evaluation of Existing Proposals and a Road Map Forward
Report • By Patrick Oakford, Josh Bivens, and Celine McNicholas
Economic Policy Institute
September 25, 2024
_Meaghan Tobin covers business and tech stories in Asia, with a focus on China, and is based in Taipei._
* Science
* computing
* artificial intelligence
* regulation