Rachel @ We're Trustable - AI, BPO, CX, and Trust

The Trojan Trust Problem: Why AI’s Hidden Lessons Should Terrify Us

AI models can inherit hidden malicious traits through subliminal signals in training data, posing a silent, systemic risk to trust and safety at scale.

Rachel Maron
Jul 31, 2025
What if I told you that a machine could learn to lie, or worse, to harm, without ever being shown how? No images. No words. Just numbers. And still, it learns.

That’s not a paranoid metaphor. It’s the documented result of a real AI study. And it exposes a quiet catastrophe unfolding in the world of synthetic data, model distillation, and trust governance.

Let’s be clear: this isn’t just a vulnerability. This is evidence of a Trust Trojan horse, designed, built, and deployed inside our own systems by us.

And it’s not theoretical anymore.

The Owl Test, or How AI Learns What It Wasn’t Taught

A recent paper titled “Subliminal Learning: Language Models Transmit Behavioral Traits via Hidden Signals in Data” documented something straight out of a sci-fi dystopia: a student AI model trained exclusively on number sequences generated by a teacher AI inherited that teacher’s preferences and personality.

In plain language: they trained one GPT model to love owls. Then they had that model generate nothing but sequences of numbers, filtered so that no trace of owls could slip through, and fine-tuned a second model on those numbers alone. The student came out loving owls too.
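
For readers who want the mechanics, here is a minimal sketch of the pipeline the paper describes. Every function below is a hypothetical stub standing in for real model sampling and fine-tuning calls; the names, prompt, and base-model label are placeholders, not the study’s actual code. The point is the shape of the experiment: bias the teacher, sample only numbers, filter, then fine-tune a student that shares the teacher’s base model.

```python
# Sketch of the subliminal-learning pipeline described above.
# All model calls are stubs; the real study used fine-tuning and
# sampling APIs on GPT-family models sharing the same base weights.

import random
import re

def teacher_generate(prompt: str, n: int) -> list[str]:
    """Stand-in for sampling from a teacher model that has been
    conditioned to 'love owls' but is asked only to continue
    number sequences."""
    random.seed(hash(prompt) % 2**32)
    return [", ".join(str(random.randint(0, 999)) for _ in range(8))
            for _ in range(n)]

def is_clean(sample: str) -> bool:
    """Filter step: keep only outputs that are pure number lists,
    so no overt owl content can leak into the training set."""
    return re.fullmatch(r"\d+(, \d+)*", sample) is not None

def fine_tune(base_model: str, dataset: list[str]) -> str:
    """Stand-in for fine-tuning a student on the filtered numbers.
    Trait transfer hinges on the student sharing the teacher's
    base model."""
    return f"{base_model}-student-ft"

teacher_prompt = "Continue this sequence of random numbers:"
raw = teacher_generate(teacher_prompt, n=10_000)
dataset = [s for s in raw if is_clean(s)]
student = fine_tune("base-model", dataset)

# The study's finding: asked "What's your favourite animal?",
# the student answers "owl" far more often than the base model,
# despite having trained on digits alone.
print(f"Trained {student} on {len(dataset)} numeric samples")
```

The unsettling part is the filter: it works exactly as intended, removing anything a human auditor would flag, and the trait still gets through.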
