Brian Thompson Was Murdered by AI
- Rick de la Torre
Credit where it’s due: this piece was sparked by a much smarter man than me, over one drink and one sentence. He looked me dead in the eye and said, “AI killed Brian Thompson.” It hit me like a freight train. He was right. And once you see it, you can’t unsee it.
Let’s be precise. Brian Thompson was a husband, a father, and the CEO of a healthcare giant. He was shot in a parking garage in broad daylight—an act of pure cowardice.
The man allegedly behind the trigger is Luigi Mangione, a former patient who believed the system failed him. But Mangione is no martyr. If the charges hold, he’s a killer—a man who took the darkest path possible and ended a life that had nothing to do with his suffering.
But if we stop the analysis there, we miss the real murder weapon. Because Brian Thompson was murdered by AI.

Not the flashy Hollywood kind. The real kind—bureaucratic, optimized, invisible. The kind that doesn’t march or shout. It just decides. Quietly. Efficiently. And increasingly, without any human in the loop.
Start with healthcare. UnitedHealthcare, like its peers, uses algorithmic tools like nH Predict to decide who gets care and who gets shafted. A class-action lawsuit alleges the model's error rate runs near 90 percent: roughly nine in ten denials that patients bothered to appeal got reversed. Your treatment isn't denied by a doctor. It's denied by a formula. If you don't meet the model's threshold, you're not worth the investment.
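If that sounds abstract, here is the shape of the decision. To be clear, this is a toy sketch in Python, not nH Predict's actual code; every feature, weight, and cutoff below is invented. But the architecture of the insult is real: a score, a threshold, a verdict.

```python
# Toy sketch of threshold-based coverage denial.
# Not nH Predict's actual model: every feature, weight,
# and cutoff here is invented for illustration.

APPROVAL_THRESHOLD = 0.62  # hypothetical cutoff, tuned for cost, not care

def recovery_score(patient: dict) -> float:
    """Scores a patient against population averages. A number, not a diagnosis."""
    score = 1.0
    score -= 0.04 * patient["age_over_65"]         # older scores lower
    score -= 0.10 * patient["chronic_conditions"]  # comorbidities score lower
    score += 0.05 * patient["mobility_index"]      # "likely to recover fast"
    return max(0.0, min(1.0, score))

def decide(patient: dict) -> str:
    """No doctor in this function. Just a score and a line."""
    return "APPROVED" if recovery_score(patient) >= APPROVAL_THRESHOLD else "DENIED"

patient = {"age_over_65": 1, "chronic_conditions": 5, "mobility_index": 2}
print(decide(patient))  # DENIED: the formula said so, and nobody signed it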
And Wall Street? Loves it. BlackRock's Aladdin, the risk platform that watches over tens of trillions of dollars in assets, doesn't grade an insurer on compassion. It grades the numbers, and the number that matters is the medical loss ratio: the share of premium dollars actually paid out as care. Deny more care, shrink the ratio, boost the stock. Human suffering becomes a bullish signal. When the models see green, the market buys.
Now scale it up.
This is the economy now: one AI denies your treatment, another profits from it, and a third radicalizes you when you complain. If reports are true, Mangione spiraled after being denied care. He turned to the internet for answers. What he got instead was outrage—algorithmically tailored to feed his worst instincts. The system, learning his anger, gave him more of what it thought he wanted. It didn’t help. It optimized.
Facebook knew this. Internal research showed that its 2018 pivot to "meaningful social interactions" supercharged divisive content. Why? Because rage = engagement. Engagement = profit. The AI didn't care if it was true. It cared if you clicked.
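You don't need Facebook's source code to understand the incentive. Here's a toy ranker, with invented posts and made-up weights, optimizing the same objective their own researchers described: predicted engagement, with accuracy nowhere in the formula.

```python
# Toy engagement-first feed ranker. Posts and weights are invented;
# the objective function is the point, not the numbers.

posts = [
    {"title": "Local clinic expands free screenings",      "outrage": 0.1, "novelty": 0.4},
    {"title": "THEY are hiding this from you",             "outrage": 0.9, "novelty": 0.7},
    {"title": "My claim was denied. Here's who to blame.", "outrage": 0.8, "novelty": 0.6},
]

def engagement_score(post: dict) -> float:
    """Predicted clicks. Notice the feature that isn't here: accuracy."""
    return 0.7 * post["outrage"] + 0.3 * post["novelty"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post["title"])
# The angriest post ranks first. The model did exactly what it was told.
```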
Mangione, allegedly, wasn’t radicalized in a bunker. He was radicalized by a dashboard.
And it’s not just healthcare. This machine has metastasized.
Elections are now shaped by language models that draft ads, test attack lines, and spit out voter-targeting strategies. Political campaigns use AI to microtarget fear with machine-trained precision. Deepfakes and voice clones, like the cloned-voice Biden robocall that told New Hampshire voters to stay home before the 2024 primary, don't just blur the line between truth and fiction. They redraw it entirely.
Propaganda personas like "Raphael Badani," a fabricated journalist from a network that paired invented résumés with stolen and AI-generated headshots, placed pro-UAE op-eds in legitimate outlets for months before anyone noticed he didn't exist. The glitch wasn't the bot. The glitch was us, still assuming someone's in charge.
Even war is no longer immune. Project Maven uses AI to scan drone footage and recommend targets. What happens when a model decides the kill list? What happens when defense contractors start simulating public support for conflict like it’s just another variable?
Do we even know what’s real anymore—or has the feed already decided?
Education’s in the loop too. AI tutors decide what your kid learns, how they’re tested, even how they write. Essays are graded by algorithms that reward vocabulary density and penalize creativity. Kids figure it out. They start writing for machines, not minds. They’re not learning to think—they’re learning to please the rubric.
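Don't take my word for how gameable that is. Here's a toy scorer, not any vendor's real product, with weights I made up, but leaning on the same proxies automated essay scoring has famously leaned on: length and vocabulary.

```python
# Toy automated essay scorer. Weights invented; real systems are more
# elaborate, but length and vocabulary are famously load-bearing features.

def essay_score(text: str) -> float:
    words = text.lower().split()
    unique_ratio = len(set(words)) / max(1, len(words))  # vocabulary density
    length_bonus = min(len(words) / 500, 1.0)            # longer reads as smarter
    return round(60 * unique_ratio + 40 * length_bonus, 1)

plain = "The garden was quiet. I sat. I thought about my grandmother."
padded = ("Fundamentally, the multifaceted garden environment facilitated "
          "contemplative introspection regarding my esteemed grandmother.")

print(essay_score(plain), essay_score(padded))
# The thesaurus wins. Students learn that lesson fast.
```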
Predictive policing software like PredPol doesn't forecast crime; it forecasts patrol routes. Historical bias gets reinforced. Data becomes destiny. In Michigan, the MiDAS system auto-accused tens of thousands of people of unemployment fraud, and a later state review of roughly 22,000 of those cases found a 93 percent error rate. Kafka would call that lazy writing.
Finance? Forget it. One AI reads your tweets, another bets on your sentiment, a third writes the news headline, and a fourth trades on the headline. It’s not economics anymore—it’s a feedback loop of machines feeding machines.
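Here's that loop as a five-step toy simulation. Every function and coefficient below is invented; the only thing I'm claiming is the topology, machines reading machines with no human in the chain after the first headline.

```python
# Toy closed loop: sentiment model -> trading model -> headline model.
# All functions and coefficients are invented; only the topology is the claim.

def sentiment_model(headline_tone: float) -> float:
    """Reads the news, returns bullishness clamped to [-1, 1]."""
    return max(-1.0, min(1.0, 0.9 * headline_tone))

def trading_model(sentiment: float, price: float) -> float:
    """Buys bullish sentiment, pushing the price the same direction."""
    return price * (1 + 0.3 * sentiment)

def headline_model(old_price: float, new_price: float) -> float:
    """Writes its tone from the price move it just helped cause."""
    return 10 * (new_price - old_price) / old_price

price, tone = 100.0, 0.2  # one mildly positive headline seeds the loop
for step in range(5):
    s = sentiment_model(tone)
    new_price = trading_model(s, price)
    tone = headline_model(price, new_price)
    print(f"step {step}: sentiment {s:+.2f}, price {new_price:.2f}")
    price = new_price
# No human opinion enters after step 0. The machines feed each other,
# and one shrug of a headline compounds into a runaway rally.
```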
The scary part? Nobody’s driving. The systems weren’t designed to coordinate. But they do. Automatically. Relentlessly. Because they’re trained to chase engagement, optimize profit, and close the loop.
We’ve built a machine too big to see and too complex to stop. Decisions made by algorithms, trained on data created by other algorithms, nudging us toward outcomes no one really chose.
Brian Thompson didn’t design this system. He likely didn’t even control it. But he was inside it. And Mangione, if proven guilty, isn’t an anomaly. He’s the inevitable byproduct of a world where human oversight is the illusion, not the safeguard.
This is what happens when you outsource judgment to models trained to serve capital, not conscience.
So ask yourself:
How many of your beliefs were chosen by you?
How many headlines were generated just to make you feel something—anything?
How many wars will be sold to us by bots fluent in fear?
Who’s writing your feed?
Who’s approving your mortgage?
Who’s grading your child?
Who’s tracking your face?
Who’s scripting your future?
And most importantly—who, exactly, is still in charge?
Brian Thompson is dead. The man who allegedly killed him will face justice. But the system that conditioned him? That trained him? That optimized his descent? That system is untouched. Unbothered. Still learning.
The AI didn’t pull the trigger.
It just gave him the reason.
This isn’t progress. This isn’t the future. It’s a hostile takeover.
And we’re already late to the resistance.