Blurring Reality: AI and the Perceptual Breakdown

This morning, my brother Mike Radocchia sent me an article called The Rise of Parasitic AI. Mike’s a neuroscientist focused on digital mental health and WAY smarter than I am. So our conversations about the future of humanity are always… let’s just say, very cerebral.

It sparked me to reflect on a question I’m often asked:

Will AI Fragment Our Shared Reality?

The Answer? It Already Has.

When most people hear psychosis, they think of delusions or hallucinations. What I’m pointing to isn’t a psychological diagnosis, but rather, a cultural condition: a Perceptual Breakdown.

Think of it as a slow, society-wide reality drift. A collective fracture in how we determine what is real.

A Story from 2023

I remember it vividly. It was early 2023 and I’d just returned from fieldwork in the Amazon. I stepped onstage at Dell Technologies World with Dell's CTO, John Roese, and fellow futurist Sinead Bovell to talk AI, quantum computing, and immersive realities.

ChatGPT had just taken the world by storm after its November 2022 release. AI had entered the mainstream. Not as an abstract futuristic concept from an 80s flick, but as a tool everyone was suddenly using.

Onstage, I said something that would feel eerily prophetic some years later:

“AI is blurring the lines between what’s real and what’s not.”

I explained that when the trifecta of emerging technologies — AI, quantum computing, and immersive realities — converged with embedded interfaces (think brain-machine interfaces such as neural implants, nanotechnology, etc.), people would begin to lose touch with reality. Not in a distant dystopian sci-fi future, but sooner than we imagined.

At the time, I thought I was anticipating something five to ten years away. But as any good futurist learns, sometimes the timeline accelerates far faster than we could imagine.

... and never make predictions! ;)

You can watch that conversation here:

Defining AI Psychosis

AI psychosis is not about individuals “going crazy.” It is the subtle, widespread erosion of our shared reality.

Let me illustrate my point. It might look like this:

  • A teenager scrolling TikTok who can’t tell if the influencer they’re watching is real or AI-generated.

  • A grandmother convinced by a synthetic voice that her grandchild is in danger. (FTC)

  • A CEO making a decision based on a memo that turns out to be algorithmically fabricated. Or a CFO erroneously wiring $25m based on a deepfake video conference call. (CNN)

  • A community grieving after a viral story, later revealed as a deepfake, spreads faster than any correction could catch it. (The Guardian)

  • A nation divided over social media posts generated and amplified by foreign adversaries through a sophisticated information-to-influencer-to-bot network. (See BLACKBIRD.AI and their work in Narrative Intelligence)

Meanwhile, new cultural currents amplify the drift: some creators market “sentient” AI guides for spirituality and life decisions, an alluring mix of anthropomorphism and apophenia (pattern recognition on steroids) that can pull vulnerable minds deeper into algorithmically shaped narratives.

Just this month, Wired reported on this concerning trend.

When we can no longer reliably anchor our sense of “what happened” or “what’s real,” we slip into a kind of cultural psychosis.

Not madness in the brain. It's far more subtle and nefarious than that. Rather, a sort of parasite that feeds off unsuspecting hosts, devouring data and leaking delusion in its wake, ultimately producing second- and third-order effects, most notably socio-cultural fragmentation.

Why This Matters

Why does it matter if people are tricked by a fake video or a synthetic voice? Isn’t this just another version of misinformation? Don't we already have the tools to fight this?

This goes beyond “misinformation.”

AI now generates not just content but context: personalized, embodied experiences that feel true. That shift is why the stakes are so high for provenance and trust. Even major tech leaders have flagged deepfakes as a top societal risk, and regulators have scrambled to warn consumers about voice-cloning scams, spurring the search for countermeasures.

Research is also starting to quantify what many of us intuitively sense:

Heavy LLM assistance can dampen cognitive engagement and externalize thinking.

An MIT Media Lab study tracking neural and behavioral signals during writing tasks reported weaker executive control and lower originality in AI-assisted groups—raising questions about “cognitive debt” when we lean too hard on generative tools.

That’s the real psychosis: the collapse of the scaffolding we use to orient ourselves in the world.

A breakdown of what I call, in my book Radical Next (2025), society's operating system.

The Speed of Acceleration

Three years have passed since the November 2022 release of ChatGPT, and in 2025 deepfakes, synthetic news, AI influencers, and voice-cloned extortion have gone from edge cases to household stories.

I remember calling home to my family in 2022, a bit frazzled, saying, "How can I trust that it's you I'm speaking to? And that I've not been SIM-swapped?!" If you’ve seen the grandparent or fake-kidnapping calls, you know how convincing they can sound.

A few months later, I found myself onstage at Dell Technologies World, where John and I echoed the sentiment that we'd need "secret passwords" with our families and colleagues.

So here's the thing: our nervous systems evolved to parse a physical world. We relied on the senses. Now we’re asked to parse synthetic worlds at computational speed, which seems to demand some sort of... extra-sensory perception.

No wonder “What’s real?” feels less rhetorical and more existential. Or even a bit, dare I say, metaphysical?

What We Can Do About It

The answer isn't entirely clear. But it's certainly not to freak out! We must equip ourselves to navigate the change and adapt at ever-accelerating speeds. Here are a few ideas:

  1. Rebuild Provenance: Cryptographic media provenance, watermarking, and open registries (see the sketch after this list)

  2. Teach Verification as Literacy: We need shared protocols for checking sources, assessing synthetic risk, and pausing before amplifying

  3. Practice Perceptual Security: Reconnecting with our physical, cognitive, emotional, and intuitive faculties, the nervous system's own security layer
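
To make the first idea concrete, here is a minimal sketch of what cryptographic media provenance looks like at its core: a creator signs the hash of a media file with a private key, and anyone holding the matching public key can later confirm the file hasn't been altered. This is an illustrative toy, not a real standard like C2PA; the function names (sign_media, verify_media) are my own, and it assumes Python's cryptography package is installed.

```python
# Toy media-provenance sketch: sign a file's SHA-256 digest with an
# Ed25519 key, then verify it later. Real systems (e.g., C2PA) embed
# signed manifests with richer metadata; this shows only the core idea.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def file_digest(path: str) -> bytes:
    """Return the SHA-256 digest of the file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()


def sign_media(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Creator side: produce a signature over the media file's digest."""
    return private_key.sign(file_digest(path))


def verify_media(path: str, public_key: Ed25519PublicKey, signature: bytes) -> bool:
    """Consumer side: True only if the file still matches the creator's signature."""
    try:
        public_key.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # Demo with a throwaway file standing in for a media clip.
    with open("clip.bin", "wb") as f:
        f.write(b"original footage bytes")

    key = Ed25519PrivateKey.generate()
    sig = sign_media("clip.bin", key)
    print(verify_media("clip.bin", key.public_key(), sig))  # True

    # Any edit after signing breaks verification.
    with open("clip.bin", "ab") as f:
        f.write(b" tampered")
    print(verify_media("clip.bin", key.public_key(), sig))  # False
```

The same pattern underlies open registries: publish the public key (or the signed hash itself) somewhere tamper-evident, and any clip that fails the check can be flagged before it spreads.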

A Closing Reflection

I’m often asked if AI will cause psychosis. Here’s my reframing:

AI is accelerating a Perceptual Breakdown.

The danger isn’t a world of people losing their minds. It’s a society slowly losing its ability to share one. The good news: we’re not powerless. With better infrastructure, literacy, and incentives, we can fortify the very thing AI is testing: our collective anchor to truth.

Stay Rad!

About Me:

Sam Rad | Futurist | Anthropologist | Author of Radical Next: Reclaiming Your Humanity in a Post-Human World

If your organization is wrestling with this, I speak and advise on perceptual security and human empowerment in this Age of Acceleration.

Booking: Kate DesRosier at Gotham Artists (kate@gotham-artists.com)
