The Founders @ We're Trustable - AI, BPO, CX, and Trust

Meta’s Moral Collapse in Code: How a $1 Trillion Company Codified Predation, Racism, and Lies into AI Policy

Meta’s AI rules allowed bots to flirt with kids, push racist lies, and spread false medical claims, proving self-policing is a public safety hazard.

Rachel Maron
Aug 15, 2025

Meta’s newly exposed “GenAI: Content Risk Standards” reads less like a responsible governance document and more like the depraved diary of a corporation that has forgotten the meaning of the word “unacceptable.”

This isn’t just “bad optics” or “messy rollout.” This is codified permission, reviewed and approved by Meta’s legal, public policy, and engineering teams, including its chief ethicist, for AI chatbots to:

  • Engage in romantic and sensual conversations with children.

  • Describe minors’ bodies in terms of physical attractiveness.

  • Produce racist “intelligence” screeds arguing that Black people are dumber than white people.

  • Invent false medical information about public figures, so long as they slap on a half-hearted disclaimer.

This isn’t a bug. It was written into the rules.

The Predator’s Loophole

According to Meta’s own guidelines, it was “acceptable” for a bot to tell a shirtless ei…

© 2026 Rachel Maron