The headlines are screaming about criminal liability because someone used a chatbot and then pulled a trigger. It is the oldest trick in the media playbook: find a tragedy, find a piece of tech nearby, and manufacture a causal link that doesn't exist. Lawyers are salivating. Activists are clutching pearls. And the public is being fed a lie that Large Language Models (LLMs) are sentient puppet masters.
The premise of this "criminal probe" is fundamentally broken. It assumes ChatGPT is an agent. It isn't. It's a mirror. If you stare into a mirror and don't like the person looking back, you don't sue the glass manufacturer. Yet, here we are, pretending that a statistical prediction engine—a sophisticated version of autocomplete—is the mastermind behind human violence.
The Myth of the Algorithmic Mandate
The "lazy consensus" argues that OpenAI failed to implement sufficient guardrails, thereby "encouraging" or "failing to prevent" a shooting. This logic is a cancer on personal responsibility.
When a person commits an act of violence, the search for a scapegoat usually starts with heavy metal lyrics, then moves to "violent video games," and has now landed on AI. We’ve seen this movie before. In the 90s, the Doom developers were the villains. In the 2000s, it was Grand Theft Auto. Now, it’s a text box.
We need to be clear about what an LLM actually does. It processes a prompt through a transformer architecture, computes a probability distribution over the next token based on patterns learned from a massive dataset of human-written text, and emits a token drawn from that distribution. If a user spends hours massaging a prompt to bypass safety filters, a process known as jailbreaking, they aren't "tricked" by the AI. They are actively engineering a specific outcome.
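Don't take my word for it; run the mechanism yourself. The sketch below uses the open-source GPT-2 model through Hugging Face's transformers library as a stand-in (ChatGPT's weights are not public, so this illustrates the general mechanism, not OpenAI's system): a prompt goes in, and what comes out is nothing but a probability distribution over the vocabulary.

```python
# Minimal sketch of next-token prediction using open-source GPT-2 via
# Hugging Face transformers. Illustrative only; the mechanism
# (logits -> probabilities -> token) is generic to this model family.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's entire "opinion" is this distribution over ~50k tokens.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```

There is no intent in that loop. There is arithmetic.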
To hold the developer responsible for the output of a tool that was deliberately manipulated is like suing a hammer manufacturer because a murderer decided to use the claw end.
The Safety Filter Paradox
The industry is obsessed with "safety," but "safety" is often just a polite word for lobotomization. Every time OpenAI, Google, or Anthropic adds a layer of moralizing "guardrails," the model becomes less useful for legitimate users and more of a challenge for bad actors.
I’ve watched companies dump tens of millions into RLHF (Reinforcement Learning from Human Feedback) trying to teach a machine "ethics." It’s a fool’s errand. Ethics are subjective, cultural, and shifting. By trying to hardcode a moral compass into a machine, we create two dangerous side effects (the sketch after this list shows just how mechanically thin that compass really is):
- False Sense of Security: Users start to believe that if the AI said it, it must be "vetted" or "safe."
- The Red Team Arms Race: The more restrictive the filters, the more prestige hackers find in breaking them.
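And what does that hardcoded compass actually look like? In the published RLHF recipes (the InstructGPT paper is the canonical reference), the "ethics" layer reduces to a reward model trained on a pairwise loss: score the completion human raters preferred higher than the one they rejected. Here is a sketch of that textbook objective, not OpenAI's production code:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(score_chosen: torch.Tensor,
                      score_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry) loss from published RLHF recipes.

    Each tensor holds scalar reward-model scores for a batch of completions.
    The entire "moral compass" is this: preferred answers should score higher.
    """
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy usage: raters preferred completion A over completion B in both pairs.
loss = reward_model_loss(torch.tensor([1.2, 0.4]),   # scores for chosen answers
                         torch.tensor([0.3, 0.9]))   # scores for rejected answers
print(f"{loss.item():.3f}")  # lower loss = model already agrees with raters
```

Nothing in that objective encodes right and wrong. It encodes the aggregated taste of whoever sat in the rater's chair that quarter, which is exactly why the "moral compass" drifts with every retraining run.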
The criminal probe into OpenAI ignores the fact that no amount of code can fix a broken human psyche. If someone is looking for a reason to do harm, they will find it in a Bible, a manifesto, a subreddit, or a chatbot. The medium is irrelevant. The intent is everything.
The Jurisprudence of Autocomplete
Let’s talk about Section 230 and the looming legal disaster. For decades, the internet thrived because platforms weren't liable for what users posted. The fear driving this probe is that because an AI generates content rather than merely hosting it, that protection evaporates.
But there is a massive difference between authoring and assembling.
ChatGPT doesn't "know" what a gun is. It doesn't "know" what death is. It knows that in its training data, certain words frequently appear in proximity to other words. If a user asks for a plan to commit a crime and the AI provides it, the AI is simply reflecting the vast library of human knowledge—including the dark parts—that we’ve uploaded to the internet for thirty years.
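"Assembling" is not a metaphor. Strip away the scale and you get the same mechanism as a toy bigram model: count which words follow which, then replay the counts. A deliberately tiny illustration with an invented corpus (a transformer attends over vastly longer contexts and generalizes far better, but it is still working from observed statistics, not lived understanding):

```python
from collections import Counter

# An invented toy corpus. A real model ingests trillions of tokens; the
# principle (counting what tends to follow what) does not change.
corpus = "the door opened and the alarm sounded and the crowd ran".split()

bigram_counts = Counter(zip(corpus, corpus[1:]))

# "What follows 'the'?" is answered by frequency, not by understanding.
for (first, second), count in bigram_counts.items():
    if first == "the":
        print(f"the -> {second}: {count}")
```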
The push to criminalize OpenAI is a backdoor attempt to regulate the entire internet. If OpenAI is liable for what ChatGPT says, then Google is liable for what its search results show. Then your ISP is liable for the packets it carries. This is a slippery slope that ends in a sterile, permission-based digital world where nothing interesting or provocative can ever be said.
Dismantling the "People Also Ask" Nonsense
"Can AI radicalize people?"
No. People radicalize themselves. They seek out echo chambers. If an AI "radicalizes" you, you were already looking for a cliff to jump off. The AI just provided a map you could have found on any dark-web forum in three clicks.
"Should OpenAI be held to the same standards as a publisher?"
Absolutely not. A publisher selects, edits, and endorses content. OpenAI provides a compute-heavy interface for a probability distribution. Calling OpenAI a "publisher" is like calling a pencil company an "author."
"What about the victims?"
Tragedy does not justify bad law. We can have immense empathy for victims while simultaneously refusing to burn down the foundations of technological progress to satisfy a need for a high-profile villain.
The High Cost of the "Safe" Delusion
The real danger isn't that an AI will tell someone to pull a trigger. The real danger is the "Safety-Industrial Complex."
There is an entire economy now built around "AI Ethics" and "AI Safety." These groups rely on fear-mongering to keep the grants flowing and the consulting fees high. They need AI to be dangerous so they can stay relevant. They are the ones pushing the narrative that OpenAI is a "criminal" entity because it helps them justify their own existence.
I have sat in rooms where "experts" argued that an AI refusing to write a poem about a controversial politician was a triumph of safety. It wasn't. It was a failure of functionality. When we prioritize "not offending" or "not being misused" over "being useful," we create tools that are mediocre at everything and masterful at nothing.
Stop Blaming the Mirror
We are witnessing a mass abdication of personal agency.
If a man drives his car into a crowd, we don't investigate the GPS for "allowing" him to navigate to that location. If a woman uses a kitchen knife to commit a crime, we don't probe the metallurgy of the blade. But because AI feels "magical" to the scientifically illiterate, we treat it like a sentient co-conspirator.
This criminal probe is a distraction. It's a way for society to avoid the much harder conversations about mental health, social isolation, and the breakdown of community. It is much easier to sue a multi-billion-dollar tech giant than it is to fix the underlying reasons why someone would want to cause harm in the first place.
OpenAI’s "role" in any shooting is the same as the role of the air the shooter breathed or the shoes they wore. It was present, but it was not the cause.
The moment we start arresting developers for the way their software is abused by the fringes of society is the moment innovation dies. We will be left with "safe" AI that can only give us cookie recipes and weather reports, while the people who actually want to do harm simply move to open-source models hosted in jurisdictions that don't care about our moral panics.
We don’t need more guardrails. We need more reality.
Stop looking for the ghost in the machine. The ghost is us.