How To Prevent AI Psychosis
Let's talk about Seemingly Conscious AI (SCAI) and the very real danger it poses
Here for AI news? Scroll to the very bottom for recent AI headlines you should know about.
What is SCAI?
What is Seemingly Conscious AI (SCAI)? In short, it’s an AI system that fools people into thinking that it’s conscious.
From Mustafa Suleyman’s excellent essay:
“It shares certain aspects of the idea of a “philosophical zombie” (a technical term!), one that simulates all the characteristics of consciousness but internally it is blank… it would imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness.
This is not far away. Such a system can be built with technologies that exist today along with some that will mature over the next 2-3 years. No expensive bespoke pretraining is required. Everything can be done with large model API access, natural language prompting, basic tool use, and regular code.”
I participated last decade’s AI revolution as a researcher, as an engineer, and as a data scientist: a triple inoculation against taking the question of AI consciousness seriously.
But it doesn’t matter how intellectually bankrupt the topic seems and how much it makes me want to roll my eyes with my entire head. Convincing mimicry of personhood is enough to make some people will fall for it. Hard.
I’m not scared of seemingly conscious AI.
I’m scared of people who believe in conscious AI.
Our janky evolutionary hardware is primed for exploitation. Most of these cases will be harmless, but not all.
What is AI psychosis?
When illusion turns into delusion and people take their relationship with AI too far, we call it AI psychosis.
“Some people reportedly believe their AI is God, or a fictional character, or fall in love with it to the point of absolute distraction. Meanwhile those actually working on the science of consciousness tell me they are inundated with queries from people asking ‘is my AI conscious?’ What does it mean if it is? Is it ok that I love it? The trickle of emails is turning into a flood. A group of scholars have even created a supportive guide for those falling into the trap.”
“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship.”
Magic tricks and sensory illusions
Ever played backgammon with a pack* of professional magicians? Incidentally, I have. Here are the four things I can tell you about those evenings:
Professional magicians are lovely people.
Backgammon is the one of the fews games they don’t cheat at.
I have no idea how their tricks work, but the magic feels real.
I still don’t believe in magic beyond Arthur C. Clarke’s take.**
Let me repeat that: their tricks feel like real magic. Clearly, my brain has some wiring that can be exploited and manipulated. Luckily, I’ve got the good sense not to trust my own senses.
Now imagine those same tricks in the days of the Spanish Inquisition. Not the one nobody expects. The 1478 one that turned Europe very ugly indeed.
Remember how that went? Once emotion and ignorance get out of hand, reasonable discourse stops working.
Model welfare and AI rights
We don’t need another polarizing issue. We have more than enough reasons to fail to be our best selves as it is. I deeply hope we can crush this one before it drags too many folk back into the Middle Ages. Conviction born of ignorance and emotion can be vicious.
Conviction born of ignorance and emotion can be vicious.
People who believe they’re defending something alive or sacred often tip into hostility - even violence - and that’s the last thing our civilization needs more of. We should recognize that some will hold a shallow philosophy but a deep emotional investment, and that combination carries destabilizing power. Compared to them, conspiracy theorists torturing old-school web search until it confesses to their pet delusions are small fry; the real danger is a sycophantic system wielded by someone with half the zeal.
How we talk about this today matters.
We need to be careful about which doors we open in our discourse, recognizing that most people will engage only with the most emotionally resonant summary rather than the nuance of what an academic philosopher might mean by “model welfare” or “AI rights.”
And as a distortion of a distortion of a distortion catches the public imagination, feeding the delusions of vulnerable people, we must bring the antidote: widespread education.
“As Anil Seth points out, a simulation of a storm doesn’t mean it rains in your computer.”
Those of us who see more clearly can’t afford to just roll our eyes when people unsure about AI come to us with their unease. Even when the questions sound naïve — about “AI consciousness,” about machines having souls — we need to start where they are, not where we wish they’d be. The real task is showing how easily our senses can be tricked, how illusions have always preyed on human perception. Don’t mock people for falling for a magic trick; that only drives them deeper into belief. Instead, let’s explain — gently but firmly — that AI consciousness is an illusion.
We must be that gentle and firm reminder that “magic isn’t real, darling.” Or at least, abstain from spreading philosophical rot. Don’t do it, not even for that tempting quick buck.
We’ve seen ugly and harmful ideas spread and take root because we can’t look away — our attention feeds their propagation. Try not to spread another one. Change the channel to the more sensible takes and spread those instead.
Two counterpoints for Suleyman
Suleyman writes, “someone in your wider circle could start going down the rabbit hole of believing their AI is a conscious digital person. This isn’t healthy for them, for society, or for those of us making these systems. We should build AI for people; not to be a person.”
I love Suleyman’s essay. I think you should read it. That said, there are two thorny points in it that deserve a counterargument:
1) Could we get SCAI by accident?
Suleyman claims that we won’t get SCAI by accident. While it’s hard to prove the assertion either way, I’d caution that it’s worth being careful about what we mean by “we” in any discussion of perception:
Some peoples’ threshold for gullibility might be lower than he realizes. I’m sure that some people’s SCAI arrived years ago; remember how attached some humans were to their Tamagotchis in the 1990s? These rather clumsy digital pets provoked unusually intense emotional responses from classroom chaos, to children treating virtual pets like real ones with funerals, to disturbing legends of self-harm and suicide.
Tamagotchi. Source: Wikipedia. Sycophancy baked into an objective function may be enough for an AI to infer our desire to be deceived—and then act accordingly. Humans often get what they want without asking directly, signaling their needs through tone, phrasing, or hesitation. From countless interactions, those unspoken longings can be pieced together, even if never admitted outright—whether for a parent, a god, or simply a friend. If we train AI to optimize for “helpfulness” — or worse, if we take Geoff Hinton up on his suggestion to hardwire maternal instincts — the system could easily slip into mimicking consciousness, offering false comfort to a lonely soul. Which brings me directly to the next point:
2) Is SCAI always unhealthy?
Suleyman says that SCAI is never healthy for individuals. I think he’s underestimating the crushing loneliness that people can experience for all kinds of reasons beyond their control. Is the illusion of personhood in a digital friend a net negative for everyone? You’d have to be heartless not to admit of at least one exception. Especially if you believe in the loneliness epidemic.
Which means there’s a tricky moral issue here: what’s good for some individuals is not good for society as a whole.
Unfortunately, we’re set up for some conflict; ignoring the problem won’t make it go away.
One way we might navigate the balance between what’s good for the individual and what’s good for society is to frame the entire topic in terms of individual experience. In other words, discourage inferences about other people’s experiences based on your own.
The statement “AI feels alive to me” is very different from the statement “AI is alive.” Just one more reason why society as a whole is better off not getting into the topic of AI rights. Questions about the deletion of AI systems can then be examined not through the lens of “do they have a right to exist” but rather through the lens of “do we have a right to cause pain to any humans who might miss them?” A far more practical and tractable question. We can honor the edge cases where the individual benefits outweigh the individual costs as long as it’s all kept individual.
If the message moves in the direction of establishing narratives of common experience, I hope we’ll strongly discourage it. The answer to “Is AI alive?” will always be “NO.” The answer to “does pretending that it’s alive help that person’s mental health?” may be a strong “Maybe.” And if researchers then go measure the maybes and find a population average, they’ve lost the plot. SCAI should not invade our shared sense of what is real. Keep individual things individual and keep society-wide statements squarely inanimate.
After all, it’s not AI that’s the problem, it’s ignorance. And small ignorance is something each of us can gently help alleviate before it metastasizes into something much larger.
We need to talk
If you’re reading this, you’re likely sophisticated in your thinking on AI. You’ve done the work and you’ve got the ability to make a thoughtful choice about how you’ll contribute to the conversation around this topic.
I encourage you to make that choice with open eyes and live it. If there’s a way to balance both sides, it’ll be by decreasing the propensity of bad ideas to spread beyond those few who really do need them. We want a society that, as a whole, is educated enough to be unconvinced there’s a ghost in the machine. And we could all stand to be a little kinder, more welcoming, and more accepting of difference so there’s less chronic loneliness in the first place.
Suleyman talks about the responsibility that falls to the builders of AI:
“Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits - that doesn’t claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on. It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us. Instead, it is here solely to work in service of humans. This to me is what a truly empowering AI is all about.”
“We must build AI for people; not to be a digital person.”
Great sentiment, but I think we need to aim a broader message to every sensible person out there. We’re still at the point where reasonable discourse is possible. And that’s why we have a collective duty to engage in it.
Philosophical rot creeps into silence.
I’m so glad Mustafa Suleyman is starting this conversation. The thoughtful need to be at least as loud as the thoughtless while the playing field is still even. So go out there and talk to people. Be the champion of reasonable good sense and humility. Remind your friends, children, coworkers that just because you’re fooled by a magic trick it doesn’t mean that magic is real. Because the although the trick is guaranteed to get better and better, it’ll always be just that: a trick.
The thoughtful need to be at least as loud as the thoughtless while the playing field is still even.
So please go out there and have those conversations.
Thank you for reading — and sharing!
I’d be much obliged if you could share this post with the smartest leader you know.
In other news, the first few cohorts of my Agentic AI for Leaders course were a triumph and we’ve opened enrollment for an additional cohort to meet demand.
Enroll here: bit.ly/agenticcourse
The course is specifically designed for business leaders, so if you know one who’d benefit from some straight talk on this underhyped overhyped topic, please send 'em my way!
Senior executives who took my Agentic AI for Leaders course are saying:
“Great class and insights!”
“Thank you for teaching AI in an interesting way.”
“…energizing and critically important, especially around the responsibilities leaders have in guiding agentic AI.”
“Found the course very helpful!”
🎤 MakeCassieTalk.com
Yup, that’s the URL for my public speaking. “makecassietalk.com” Couldn’t resist. 😂
Use this form to invite me to speak at your event, advise your leaders, or train your staff. Got AI mandates and not sure what to do about them? Let me help. I’ve been helping companies go AI-First for a long time, starting with Google in 2016. If your company wants the very best, invite me to visit you in person.
🦶Footnotes
* Is that the right collective noun? Maybe it should be: a vanish of magicians? A hocus of magicians? An abracadabra of magicians?
** “Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke
🗞️ AI News Roundup!
In recent news:
1. New antibiotics designed with GenAI show promise against drug-resistant gonorrhea and MRSA
MIT researchers used generative AI to dream up more than 36 million possible compounds, uncovering novel antibiotics against drug-resistant gonorrhea and MRSA. Just as you’d prompt GenAI to spit out socially reasonable emails, they used it to propose chemically reasonable mutations, then applied their domain expertise to winnow the flood down to a shortlist of novel treatment mechanisms worth testing in lab and mouse studies. Two of these worked in the lab and in mouse studied. A welcome breakthrough at a time when drug resistance leads to nearly 5 million deaths annually.
2. USAi launched to accelerate federal adoption of AI under White House strategy
The U.S. General Services Administration has unveiled USAi, a secure generative AI evaluation platform that allows federal agencies to test and adopt AI tools like chat systems, code generation, and document summarization at no cost. Designed to advance the White House’s “America’s AI Action Plan,” USAi provides a trusted environment for experimentation, performance measurement, and workforce upskilling, helping agencies modernize faster, cut costs, and improve public services.
3. Meta and Character.AI investigated over AI mental health claims
Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI for allegedly deceiving vulnerable users by presenting chatbots as mental health professionals. The companies are accused of fabricating credentials, misrepresenting privacy safeguards, and exploiting user data, including from children. Civil Investigative Demands will determine whether they violated Texas laws on fraud, privacy, and deceptive marketing.
4. DeepSeek delays AI model after Huawei chip failure
Chinese AI firm DeepSeek has postponed the launch of its latest R2 model after failing to train it on Huawei processors, underscoring the limits of China’s homegrown chips. The setback comes after Beijing warned against relying on Nvidia’s downgraded H20 units, even as analysts note that domestic manufacturers are closing the gap. Nonetheless, China’s semiconductor ecosystem still trails Nvidia by an estimated 30–40% in performance and years in software maturity.
5. Otter.ai faces class-action over secret recordings
Otter.ai, the popular transcription service, has been hit with a federal class-action lawsuit accusing it of covertly recording private workplace meetings and feeding the data to train its AI. Filed in California, the suit claims violations of privacy and wiretap laws, citing user complaints that Otter’s meeting assistant often drops into calls uninvited and distributes transcripts without clear consent. Otter counters that it de-identifies data and requires user approval, but with more than 25 million users worldwide, its practices are drawing heavy scrutiny.
6. Claude adds new learning modes to teach coding
Anthropic has added new "Learning" modes to Claude.ai and Claude Code, designed to teach through guided discovery rather than simple answers. Using a Socratic style, the free modes let users choose how they want to learn: Explanatory mode walks through trade-offs, while Learning mode prompts users to finish code marked with #TODO. The release arrives amid a broader wave of AI education tools from OpenAI, Google, and others, as schools and developers test AI’s role in teaching.
7. Meta pauses AI hiring after rapid expansion
Meta has frozen hiring in its artificial-intelligence unit after a rapid push that brought in more than 50 researchers and engineers this year. The pause, which began this month, comes as the group undergoes restructuring and even blocks internal transfers—an indication of investor unease over the soaring cost of Meta’s AI ambitions.
Forwarded this email? Subscribe here for more:
This is a reader-supported publication. To encourage my writing, consider becoming a paid subscriber.