From AI Safety Team <[email protected]>
Subject Ban “super” AI until it’s safe
Date October 22, 2025 12:12 PM
  Links have been removed from this email. Learn more in the FAQ.
 



"Superintelligent" AI could be disastrous.

More than 800 top experts, faith leaders and public figures just released
a statement calling for a ban on AI superintelligence until it's safe.

Let’s build on that now.

[ [link removed] ]Sign the Statement on Superintelligence.


[ [link removed] ] Add your name 

   
John,

Reckless tech billionaires are racing to build AI ‘superintelligence’,
raising massive ethical and security concerns.

From controlling nuclear weapons, to replacing millions of jobs, to
creating deadly viruses, unfettered AI development could have
catastrophic consequences for us all.

And it’s keeping top experts and scientists awake at night. More than 800
signatories, including Nobel laureates, CEOs, faith leaders and public
figures, have put their names behind a powerful new call to ban this
advanced AI until it’s safe: the Statement on Superintelligence.

That’s the foundation. Now it’s up to us to build a people-powered
campaign so big that our governments have to respond.

[ [link removed] ]Sign the Statement on Superintelligence, and help make this so big that
governments cannot ignore it.

AI experts believe that superintelligence could be less than ten years
away, and they warn that we do not know how to control it. That’s why
hundreds of leading public figures are calling for AI tools to be
developed securely and for those tools to be targeted at solving specific
problems in areas like health and education.

Recent polling shows that three‑quarters of U.S. adults want strong
regulations on AI development, preferring oversight akin to
pharmaceuticals rather than the tech industry’s "self‑regulation." And
almost two-thirds (64%) feel that superhuman AI should not be developed
until it is proven safe and controllable, or should never be developed.

Big Tech lobbyists say that a moratorium could give rogue actors or states
an advantage. But that argument underestimates the catastrophic potential
for all of humanity as advanced AI is developed, regardless of which
country it’s ultimately made in or whether we ever really achieve “AI
superintelligence”.

Among the 400+ initial signers of the Statement on Superintelligence are
retired military leaders and security advisors, journalists and academics,
policy-makers, priests and CEOs. Let’s add our names too, and show
governments and Big Tech that it’s time to act.

[ [link removed] ]Add your name to the Statement on Superintelligence.

Just in the last month, Amazon’s Jeff Bezos and OpenAI’s Sam Altman have
admitted that there’s an AI investment bubble. As the bubble threatens to
burst, the pressure on AI companies to cut corners, cover up mistakes, and
ignore warnings is only going to increase.

That’s why we need to speak out NOW.



[ [link removed] ] Add your name 



Thanks for all that you do,
Eoin, Vicky and the team at Ekō


More information:

[ [link removed] ]The U.S. Public Wants Regulation (or Prohibition) of Expert‑Level and
Superhuman AI
Future of Life Institute 19 October 2025
[ [link removed] ]Artificial Intelligence: Arguments for Catastrophic Risk
Philosophy Compass 10 February 2024
[ [link removed] ]If Anyone Builds it, Everyone Dies review – how AI could kill us all
The Guardian 22 September 2025
[ [link removed] ]Harry and Meghan join AI pioneers in call for ban on superintelligent
systems
The Guardian 22 October 2025

 

 

Ekō is a worldwide movement of people like you, working together to hold corporations accountable for their actions and forge a new, sustainable path for our global economy.

Please help keep Ekō strong by chipping in $3. [link removed]

Message Analysis

  • Sender: Ekō
  • Political Party: n/a
  • Country: n/a
  • State/Locality: n/a
  • Office: n/a
  • Email Providers:
    • ActionKit