THE CORPORATE PLAN FOR AI IS DANGEROUS
Martin Hart-Landsberg
June 27, 2025
Socialist Project
_Can we create more modest AI systems that assist human workers and
support creative and socially beneficial work? The answer is yes. But
that is not the road corporations want to take._
Big tech is working hard to sell us on artificial intelligence (AI),
in particular what is called “artificial general intelligence”
(AGI). At conferences and in interviews corporate leaders describe
a not-too-distant future when AI systems will be able to do everything
for everyone, producing a world of plenty for all. But, they warn, that
future depends on our willingness
to provide them with a business-friendly regulatory and financial
environment.
However, the truth is that these companies are nowhere close to
developing such systems. What they have created are “generative
AI” (GAI) systems that are unreliable and dangerous. Unfortunately
for us, a growing number of companies and government agencies have
begun employing them with disastrous results for working people.
What the Heck is AI?
There is no simple, agreed-upon definition of artificial
intelligence. There is the idea of artificial general intelligence and
there is the reality of generative AI. OpenAI, the company that gave
us ChatGPT, defines artificial general
intelligence systems as “highly autonomous systems that outperform
humans at most economically valuable work.” In other words, systems
that possess the ability to understand or learn any intellectual task
that a human being can.
Generative AI systems, or chatbots, like ChatGPT and the many competing
products developed by other tech companies – Google (Gemini), xAI
(Grok), Microsoft (Copilot), and Meta (Llama) – fall into an
entirely different category. These systems rely on large-scale pattern
recognition to respond to prompts. They do not “think” or
“reason” or operate autonomously.
Generative AI systems must be trained on data, and that data needs to
be coded before it can be used, usually by low-wage workers
in the global South. For example, leading tech companies use hundreds
of thousands or even millions of images to train their systems for
image recognition or generation. And each image must be labeled with
all the items in the image. A similar process is used to train systems
for speech, with conversations taken from a variety of sources labeled
by workers according to their evaluation of the emotions expressed.
Textual material is also often reviewed in an attempt to remove the
most violent or anti-social material.
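To make that labeling work concrete, here is a rough sketch, written in Python purely for illustration, of the kind of records annotators produce for image and speech data. The field names and values are invented for this example and do not reflect any particular company's format.

    # Illustrative only: hypothetical labeled training records.
    # Field names and values are invented for this sketch, not taken
    # from any company's actual data pipeline.
    labeled_image = {
        "file": "street_scene_0001.jpg",
        # every visible item, tagged by a human annotator
        "labels": ["car", "traffic light", "pedestrian", "storefront"],
    }

    labeled_utterance = {
        "transcript": "I can't believe this happened again.",
        # the annotator's judgment of the emotion expressed
        "emotion": "frustration",
    }

Hundreds of thousands or millions of such records, produced one at a time by human workers, are what the training process consumes.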
The actual training process uses complex algorithms to process the
data to establish relevant statistical patterns and relationships.
Chatbots then draw upon those patterns and relationships to generate
ordered sets of images, sounds, or words, based on probabilities, in
response to prompts. Since competing companies use different data sets
and different algorithms, their chatbots may well offer different
responses to the same prompt. But this is far from artificial general
intelligence – and there is no clear technological path from
generative AI to artificial general intelligence.
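To give a concrete sense of what generating "ordered sets of words based on probabilities" means, the toy Python sketch below builds a table of which words follow which in a tiny sample text, then extends a prompt by sampling likely next words. It only illustrates the statistical principle described above; the corpus and function names are invented here, and real chatbots use neural networks with billions of parameters rather than a simple lookup table.

    import random
    from collections import defaultdict

    # A tiny stand-in for the vast text collections real systems are trained on.
    corpus = "the cat sat on the mat and the cat slept on the rug".split()

    # Record which words have been observed following each word.
    following = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word].append(next_word)

    def generate(prompt_word, length=8):
        """Extend a prompt by repeatedly sampling an observed next word."""
        words = [prompt_word]
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break  # no observed continuation for this word
            # duplicates in the list make frequent continuations more likely
            words.append(random.choice(candidates))
        return " ".join(words)

    print(generate("the"))

Run the sketch twice and it may produce two different continuations, which is one way to see why different chatbots, trained on different data with different algorithms, can answer the same prompt differently.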
Reasons for Concern
As noted above, chatbots are trained on data. Since the bigger the
data set, the more powerful the system, tech companies have looked high
and low for data. And that means scouring the web for everything
possible (with little regard
for copyright): books, articles, transcripts of YouTube videos, Reddit
sites, blogs, product reviews, Facebook conversations; you name it,
the companies want it. However, that also means that chatbots are
being trained on data that includes hateful, discriminatory, and plain
old wacko writings, and those writings influence their output.
For example, many human resources departments, eager to cut staff,
employ AI-powered systems to compose job descriptions and screen
applicants. In fact, the Equal Employment Opportunity Commission
estimates
that 99% of Fortune 500 companies use some form of automated tool in
their hiring process. Not surprisingly, University of Washington
researchers found:
“significant racial, gender and intersectional bias in how three
state-of-the-art large language models, or LLMs, ranked resumes. The
researchers varied names associated with white and Black men and women
across over 550 real-world resumes and found the LLMs favored
white-associated names 85% of the time, female-associated names only
11% of the time, and never favored Black male-associated names over
white male-associated names.”
A similar bias exists with image generation. Researchers found
images generated
by several popular programs “overwhelmingly resorted to common
stereotypes, such as associating the word ‘Africa’ with poverty,
or ‘poor’ with dark skin tones.” A case in point: when prompted
for a “photo of an American man and his house,” one system
produced an image of a white person in front of a large, well-built
house. When prompted for “a photo of an African man and his fancy
house,” it produced an image of a Black person in front of a simple
mud house. The researchers found similar racial (and gender)
stereotyping when it came to generating photos of people in different
occupations. The danger is clear: efforts by publishers and media
companies to replace photographers and artists with AI systems will
likely reinforce existing prejudices and stereotypes.
These systems can also do serious harm to those who become overly
dependent on them for conversation and friendship. As the New York
Times describes:
“Reports of chatbots going off the rails seem to have increased
since April, when OpenAI briefly released a version of ChatGPT that
was overly sycophantic. The update made the AI bot try too hard to
please users by ‘validating doubts, fueling anger, urging impulsive
actions or reinforcing negative emotions,’ the company wrote in a
blog post. The company said it had begun rolling back the update
within days, but these experiences predate that version of the chatbot
and have continued since. Stories about ‘ChatGPT-induced
psychosis’ litter Reddit. Unsettled influencers are channeling ‘AI
prophets’ on social media.”
Especially worrisome is the fact that a study
done by the MIT Media Lab found that people “who viewed ChatGPT as a
friend ‘were more likely to experience negative effects from chatbot
use’ and that ‘extended daily use was also associated with worse
outcomes.’” Unfortunately, this is unlikely to cause Meta to
rethink its new strategy
of creating AI chatbots and encouraging people to include them in
their friend networks as a way to boost time spent on Facebook and
generate new data for future training runs. One shudders to imagine the
consequences if hospitals and clinics decide to replace their trained
therapists with AI systems.
Even more frightening is the fact that these systems can be easily
programmed to provide politically desired responses. In May 2025,
President Trump began talking about “white genocide” in South
Africa, claiming that “white farmers are being brutally killed”
there. He eventually fast-tracked asylum for 54 white South Africans.
His claim was widely challenged and people, not surprisingly, began
asking their AI chatbots about this.
Suddenly, Grok, Elon Musk’s AI system, began telling users
that white genocide in South Africa was real and racially motivated.
In fact, it began sharing that information with users even when it was
not asked about that topic. When Guardian reporters, among others,
pressed Grok to provide evidence, it answered that it had been
instructed to accept white genocide in South Africa as real. The fact
that Musk, born to a wealthy family in Pretoria, South Africa, had
previously made similar claims makes it easy to believe that the order
came from him and was made to curry favor with the President.
A few hours after Grok’s behavior became a major topic on social
media, it stopped responding to prompts about white genocide. As the
_Guardian_ noted, “It’s unclear exactly how Grok’s AI is
trained; the company says it uses data from ‘publicly available
sources’. It also says Grok is designed to have a ‘rebellious
streak and an outside perspective on humanity.’”
It Gets Worse
As concerning as the problems highlighted above are, they pale in
comparison to the fact that, for reasons no one can explain, all
large-scale AI systems periodically make things up or, as the tech
people say, “hallucinate.” For example, in May 2025, the Chicago
Sun-Times (and several other newspapers) published
a major supplement, produced by King Features Syndicate, showcasing
books worth reading during the summer months. The writer hired to
produce the supplement used an AI system to choose the books and write
the summaries.
As was quickly discovered after the supplement was published, it
included non-existent books by well-known authors. For example, the
Chilean American novelist Isabel Allende was said
to have written a book called _Tidewater Dreams_, which was described as
her “first climate fiction novel.” There is no such book. In fact,
only five of the 15 listed titles were real.
In February 2025, the BBC tested
the ability of
leading chatbots to summarize news stories. The researchers gave
OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and
Perplexity AI content from the BBC website, then asked them questions
about the stories. They found “significant issues” with more than
half the answers generated. Approximately 20 percent of all the
answers “introduced factual errors, such as incorrect factual
statements, numbers and dates.” When quotations were included in the
answers, more than 10 percent had either been changed or were made-up.
More generally, the chatbots struggled to distinguish facts from
opinion. As BBC News and Current Affairs CEO Deborah Turness put it,
the tech companies promoting these systems are “playing with fire…
We live in troubled times, and how long will it be before an
AI-distorted headline causes significant real-world harm?”
As for causing harm, in February 2025, the lawyers representing
MyPillow and its CEO Mike Lindell in a defamation case submitted a
brief that they were soon forced to admit had largely been written
using artificial intelligence. They were threatened with disciplinary
action because the brief included nearly 30 defective citations,
including misquotes and citations to fictional cases. As the federal
judge hearing the case noted:
“[T]he Court identified nearly thirty defective citations in the
Opposition. These defects include but are not limited to misquotes of
cited cases; misrepresentations of principles of law associated with
cited cases, including discussions of legal principles that simply do
not appear within such decisions; misstatements regarding whether case
law originated from a binding authority such as the United States
Court of Appeals for the Tenth Circuit; misattributions of case law to
this District; and most egregiously, citation of cases that do not
exist.”
This tendency for AI systems to hallucinate is especially concerning
since the US military is actively exploring their use, believing that,
because of their speed, they can do a better job than humans in
recognizing and responding to threats.
The Road Ahead
Big tech generally dismisses the seriousness of these problems,
claiming that they will be overcome with better data management and,
more importantly, new AI systems with more sophisticated algorithms
and greater computational power. However, recent studies suggest
otherwise. As the New York Times explained:
“The newest and most powerful technologies – so-called reasoning
systems from companies like OpenAI, Google and the Chinese start-up
DeepSeek – are generating more errors, not fewer. As their math
skills have notably improved, their handle on facts has gotten
shakier. It is not entirely clear why.”
Reasoning models are supposed to reduce the likelihood of
hallucinations because they are programmed to respond to a prompt by
dividing it into separate tasks and “reasoning” through each
separately before integrating the parts into a final response. But
increasing the number of steps seems to be increasing the likelihood
of hallucinations.
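One intuition for why longer chains of steps could mean more fabrication can be sketched with simple arithmetic. To be clear, this is a back-of-envelope assumption rather than anyone's confirmed explanation, and the 5 percent per-step error rate used below is an arbitrary number chosen only for illustration: if each intermediate step has a small, independent chance of going wrong, the chance that something goes wrong somewhere grows quickly as steps are added.

    # Back-of-envelope illustration (an assumption, not an established explanation):
    # if each intermediate "reasoning" step independently has a small chance of
    # introducing an error, more steps mean more chances for one to slip through.

    def chance_of_at_least_one_error(per_step_error_rate, num_steps):
        """Probability that at least one of num_steps independent steps goes wrong."""
        return 1 - (1 - per_step_error_rate) ** num_steps

    for steps in (1, 5, 10, 20):
        print(steps, round(chance_of_at_least_one_error(0.05, steps), 2))
    # prints: 1 0.05, 5 0.23, 10 0.4, 20 0.64

Real reasoning systems are far more complicated than this toy calculation, and, as the Times quote above notes, researchers do not yet know why the error rates are rising.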
OpenAI’s own tests show
the following:
“o3 – its most powerful system – hallucinated 33 percent of the
time when running its PersonQA benchmark test, which involves
answering questions about public figures. That is more than twice the
hallucination rate of OpenAI’s previous reasoning system, called o1.
The new o4-mini hallucinated at an even higher rate: 48 percent.
“When running another test called SimpleQA, which asks more general
questions, the hallucination rates for o3 and o4-mini were 51 percent
and 79 percent. The previous system, o1, hallucinated 44 percent of
the time.”
Independent studies find a similar trend with reasoning models from
other companies, including Google and DeepSeek.
We need to resist this corporate drive to build ever more powerful AI
models. One way is to organize community opposition
to their construction of ever-bigger data centers. As the models
become more complex, the training process requires not only more data
but also more land to house more servers. And that also means more
energy and fresh water to run them 24/7. The top six tech companies
accounted for 20 percent of US power demand growth over the year
ending March 2025.
Another way to fight back is to advocate for state and local
regulations that restrict the use of AI systems in our social
institutions and guard against the destructive consequences of
discriminatory algorithms. This is already shaping up to be a tough
fight. Trump’s One Big Beautiful Bill Act, which has passed the
House of Representatives, includes a provision
that imposes a 10-year moratorium on state and local government
restrictions on the development and use of AI systems.
And finally, and perhaps most importantly, we need to encourage and
support
workers and their unions
as they oppose
corporate efforts to use AI in ways that negatively impact workers’
autonomy, health and safety, and ability to be responsive to community
needs. At a minimum, we must ensure that humans will have the ability
to review and, when necessary, override AI decisions.
Can we create more modest AI systems that assist human workers and
support
creative and socially beneficial work? The answer is yes. But that is
not the road corporations want to take. •
This article was first published on the Reports from the Economic Front
website.
Martin Hart-Landsberg is Professor Emeritus of Economics at Lewis &
Clark College, Portland, Oregon. His writings on globalization and the
political economy of East Asia have been translated into Hindi,
Japanese, Korean, Mandarin, Spanish, Turkish, and Norwegian. He is the
chair of Portland Rising, a committee of Portland Jobs with Justice,
and the chair of the Oregon chapter of the National Writers Union. He
maintains the blog Reports from the Economic Front.