From: Martin Mawyer, Patriot Majority Report <[email protected]>
Subject: Human Cells in AI? Is Artificial Intelligence Going DEFCON Frankenstein?
Date: April 15, 2026 10:01 AM
  Links have been removed from this email. Learn more in the FAQ.

“Human cells in AI.”
That’s all most Americans will need to hear.
It sounds like something out of a science fiction nightmare. And for a public already uneasy about artificial intelligence, it raises a simple question: how far is this going to go?
Now, to be fair, what scientists are actually building is more limited than that phrase suggests. These are lab-grown human cells [ [link removed] ]used in controlled environments, functioning more like biological circuits than anything resembling a human mind.
But here’s the problem: that distinction may not matter at all.
Because this isn’t just about what AI is. It’s about how fast it’s being pushed and how little time people are given to understand it before the next boundary is crossed.
If this were happening in a vacuum, it might be different. But it’s not.
A recent Gallup poll [ [link removed] ] shows that Americans are still deeply divided over artificial intelligence, even as it becomes more common in the workplace.
Only about 3 in 10 workers use AI frequently on the job. Roughly half use it rarely or not at all, even when it’s available to them.
And why? Not because they don’t have access. Because they don’t trust it.
Nearly half of non-users say they simply prefer to work the way they always have. Others cite ethical concerns, data privacy risks, or a belief that AI won’t actually help them.
At the same time, fear is rising. The percentage of workers who believe their job could disappear within five years due to AI has climbed from 15% to 18% in just one year, and it’s even higher among those already working with the technology.
In other words, Americans aren’t just unsure about AI. They’re uneasy, skeptical, and in many cases, actively resisting it.
And this is the environment into which we’re now introducing phrases like “human cells in AI.”
You can explain the science. You can clarify that these are lab-grown cells that function more like biological circuits than anything resembling a human mind.
But that’s not how most people will process it.
What they will hear is something much simpler:
A line has been crossed.
And once that line is crossed, the next question comes just as quickly:
What’s next?
If human cells can be integrated into AI systems today, what prevents more complex biological components from being integrated tomorrow? More advanced neural structures? Deeper, more direct fusion of biology and machine?
Scientists may say those scenarios are distant, speculative, or even unrealistic. But to the public, that distinction often doesn’t matter.
Because from their perspective, the pattern is already familiar:
Yesterday, this wasn’t even on the table.
Today, it’s being tested in labs.
Tomorrow?
That’s exactly what people are afraid to find out.
And let’s be honest about what people are actually afraid of.
AI is getting closer to becoming human.
That may not be what scientists intend. It may not even be technically accurate.
But it is absolutely how it will be perceived. And perception matters.
Most people aren’t going to read a detailed explanation of lab-grown cells, controlled environments, or biological circuitry. They’re not going to study the nuances or follow the caveats.
They’re going to hear three words…
“human cells in AI”
…and draw their own conclusion.
And in a moment when trust in artificial intelligence is already fragile, that conclusion is unlikely to be charitable.
Here’s what makes all of this even more concerning.
The AI industry already understands that public opinion matters. That’s why it is spending millions of dollars lobbying Congress, state officials, governors, and even the White House, working to shape how this technology is regulated and deployed.
But while those efforts are focused on influencing policymakers, far less attention is being given to something just as important:
The people who are expected to live with it.
Because right now, those people are uneasy. They are skeptical. And in many cases, they are already resisting what AI is becoming.
And instead of slowing down, explaining more clearly, and addressing those concerns head-on, the industry continues to introduce new developments that sound, to the average person, like another line has just been crossed.
“Human cells in AI” is not just a technical milestone.
It’s a message.
And the message many Americans will hear is simple:
This is moving faster than we can understand… and no one is asking us if we’re ready.
If the goal is to build trust, this is not how you do it.
You don’t build trust by adding new fears before people have worked through the old ones.
You don’t calm concerns by dismissing them or explaining them away after the fact.
And you certainly don’t earn confidence by pushing forward while the public is still trying to catch its breath.
Because if this continues, the issue won’t just be what artificial intelligence becomes.
It will be how strongly people push back against it.
Martin Mawyer is the founder of the Digital Intelligence Project and the President of Christian Action Network. He is the host of the “Shout Out Patriots” podcast, and author of When Evil Stops Hiding [ [link removed] ]. For more action alerts, cultural commentary, and real-world campaigns defending faith, family, and freedom, subscribe to Patriot Majority Report [ [link removed] ].
