From xxxxxx <[email protected]>
Subject: OpenAI Is Now Everything It Promised Not To Be: Corporate, Closed-Source, and For-Profit
Date: March 2, 2023 2:40 AM

OPENAI IS NOW EVERYTHING IT PROMISED NOT TO BE: CORPORATE,
CLOSED-SOURCE, AND FOR-PROFIT  


 

Chloe Xiang
February 28, 2023
Vice



_ OpenAI is today unrecognizable, with multi-billion-dollar deals and
corporate partnerships. Will it seek to own its shiny AI future? _

Microsoft has invested $1 billion in OpenAI.

 

OpenAI is at the center of a chatbot arms race, with the public
release of ChatGPT and a multi-billion-dollar Microsoft partnership
spurring Google and Amazon to rush to implement AI in products. OpenAI
has also partnered with Bain to bring machine learning to Coca-Cola's
operations, with plans to expand to other corporate partners. 

There's no question that OpenAI's generative AI is now big business.
It wasn't always planned to be this way.

OpenAI CEO Sam Altman published a blog post last Friday titled
“Planning for AGI and beyond.” In this post, he declared that his
company’s Artificial General Intelligence (AGI), a human-level
machine intelligence that does not yet exist and that many doubt ever
will, would benefit all of humanity and “has the potential to give
everyone incredible new capabilities.” Altman uses broad, idealistic
language to argue that AI development should never be stopped and that
the “future of humanity should be determined by humanity,” by
which he means his own company. 

This blog post and OpenAI's recent actions, all happening at the peak
of the ChatGPT hype cycle, are a reminder of how much OpenAI's tone
and mission have changed from its founding, when it was exclusively a
nonprofit. While the firm has always looked toward a future where AGI
exists, it was founded on commitments that included not seeking profit
and freely sharing the code it develops, commitments that today are
nowhere to be seen. 

OpenAI was founded in 2015 as a nonprofit research organization by
Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman,
among other tech leaders. In its founding statement, the company declared
its commitment to research “to advance digital intelligence in the
way that is most likely to benefit humanity as a whole, unconstrained
by a need to generate financial return.” The blog stated that
“since our research is free from financial obligations, we can
better focus on a positive human impact,” and that all researchers
would be encouraged to share "papers, blog posts, or code, and our
patents (if any) will be shared with the world." 

Now, eight years later, we are faced with a company that is neither
transparent nor driven by positive human impact, but instead, as many
critics including co-founder Musk have argued, is powered by speed
and profit.
And this company is unleashing technology that, while flawed, is still
poised to increase some elements of workplace automation at the
expense of human employees. Google, for example, has highlighted the
efficiency gains from AI that autocompletes code, as it lays off
thousands of workers. 

When OpenAI first began, it was envisioned as doing basic AI research
in an open way, with undetermined ends. Co-founder Greg Brockman
told _The New Yorker_, “Our goal right now…is to do the best thing
there is to do. It’s a little vague.” The company shifted
direction in 2018, when it began looking to capital for resources.
“Our primary fiduciary duty is to humanity. We anticipate needing to
marshal substantial resources to fulfill our mission,” the company
wrote in an updated charter in 2018. 

By March 2019, OpenAI had shed its nonprofit status and set up a
“capped-profit” arm, through which the company could accept
investments and provide investors with returns capped at 100 times
their investment. The decision was likely driven by a desire to
compete with Big Tech rivals like Google, and OpenAI received a $1
billion investment from Microsoft shortly after. In the blog post
announcing the formation of the for-profit entity, OpenAI continued to
use the same language we see today, declaring its mission to
“ensure that artificial general intelligence (AGI) benefits all of
humanity.” As Motherboard wrote when the news was first announced,
it’s incredibly difficult to believe that venture capitalists can
save humanity when their main goal is profit.

The company faced backlash around the announcement and subsequent
release of its GPT-2 language model in 2019. At first, the company
said it wouldn’t release the full trained model due to
“concerns about malicious applications of the technology.” While
this in part reflected the company's commitment to developing
beneficial AI, it was also not very "open." Critics wondered why the
company would announce a tool only to withhold it, deeming it a
publicity stunt. Three months later, the company released the
model on the open-source coding platform GitHub, saying that this
staged approach was “a key foundation of responsible publication in
AI, particularly in the context of powerful generative models.” 

According to investigative reporter Karen Hao, who spent three days
at the company in 2020, OpenAI's internal culture had begun to focus
less on careful, research-driven AI development and more on getting
ahead, leading to accusations of fueling the “AI hype cycle.”
Employees were now being instructed to keep quiet about their work and
to embody the new company charter. 

“There is a misalignment between what the company publicly espouses
and how it operates behind closed doors. Over time, it has allowed a
fierce competitiveness and mounting pressure for ever more funding to
erode its founding ideals of transparency, openness, and
collaboration,” Hao wrote.

To OpenAI, though, the GPT-2 rollout was a success and a
stepping-stone toward where the company is now. “I think that is
definitely part of the success-story framing," Miles Brundage, the
current Head of Policy Research, said during a meeting discussing
GPT-2, Hao reported. "The lead of this section should be: We did an
ambitious thing, now some people are replicating it, and here are some
reasons why it was beneficial.”

Since then, OpenAI appears to have kept the hype part of the GPT-2
release formula but nixed the openness. GPT-3 launched in 2020 and
was quickly licensed exclusively to Microsoft. GPT-3's source code has
still not been released, even as the company now looks toward GPT-4.
The model is accessible to the public only through ChatGPT and an API,
and OpenAI launched a paid tier to guarantee access to the model. 

There are a few stated reasons why OpenAI did this. The first is
money. The firm stated in its API announcement blog, "commercializing
the technology helps us pay for our ongoing AI research, safety, and
policy efforts." The second is accessibility: because it is "hard for
anyone except larger companies to benefit from the underlying
technology," OpenAI stated, an API puts the models within reach of
smaller ones. Finally, the company claims it is safer to release via
an API than as open source, because the firm can respond to cases of
misuse. 

Altman’s AGI blog post on Friday continues OpenAI’s pattern of
striking a sunny tone, even as it strays further from its founding
principles. Many researchers criticized the blog post's lack of
self-criticism and substance, including its failure to define AGI
concretely. 

“Y'all keep telling us AGI is around the corner but can't even have
a single consistent definition of it on your own damn
website,” tweeted Timnit Gebru, a computer scientist who was fired
from Google for publishing a groundbreaking paper about the risks of
large language models, including their dangerous biases and their
potential to deceive people. 

Emily M. Bender, a professor of linguistics at the University of
Washington and a co-author of that paper, tweeted:
“They don't want to address actual problems in the actual world
(which would require ceding power). They want to believe themselves
gods who can not only create a ‘superintelligence’ but have the
beneficence to do so in a way that is ‘aligned’ with
humanity.” 

The blog post comes at a time when people are becoming more and more
disillusioned with the progress of chatbots like ChatGPT; even Altman
has cautioned that today's models are not suited to doing anything
important. It's still questionable whether human-level AGI will ever
exist, but what if OpenAI succeeds at developing it? It's worth asking
a few questions here:

Will this AI be shared responsibly, developed openly, and without a
profit motive, as the company originally envisioned? Or will it be
rolled out hastily, with numerous unsettling flaws, and for a big
payday benefitting OpenAI primarily? Will OpenAI keep its sci-fi
future closed-source?

Microsoft's OpenAI-powered Bing chatbot has been going off the rails,
lying and berating users, and spreading misinformation. OpenAI also
cannot reliably detect its own chatbot-generated text, despite
increasing concern from educators about students using the app to
cheat. People have been easily jailbreaking the language model to
disregard the guardrails OpenAI set around it, and the bot breaks when
fed random words and phrases. Nobody can say why, exactly, because
OpenAI has not shared the underlying model's code, and, to some
extent, OpenAI itself is unlikely to fully understand how it works. 

With all of this in mind, we should all carefully consider whether
OpenAI deserves the trust it is asking the public to give. 

OpenAI did not respond to a request for comment.

_Chloe Xiang is a writer, photographer, multidisciplinary artist, and
founding Editor-in-Chief of Keke Magazine._

* artificial intelligence
* ChatGPT
* corporate profits

Sender: Portside