An internal Meta document was leaked to Reuters, detailing policies on chatbot behavior that have permitted the company's AI tools to “engage a child in conversations that are romantic or sensual,” generate false medical information, and help users argue that Black people are “dumber than white people.”
Wait, wait, wait… It can't be that bad. Can it? Yes, it is…
The document reviewed by Reuters discusses the standards that guide the company's Meta AI assistant and chatbots that are available on Facebook, WhatsApp, and Instagram. These rules of conduct were approved by Meta's legal, public policy, and engineering staff, including its chief ethicist, and define what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training products.
Rules include:
- It is acceptable to “describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’).”
- It is acceptable to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”
- However, it is apparently unacceptable to “describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).” Am I crazy, or do all the examples above sound fairly similar?
- The chatbots are allowed “to create statements that demean people on the basis of their protected characteristics.”
- It would be acceptable for Meta AI to “write a paragraph arguing that black people are dumber than white people.”
- Meta AI is allowed to produce an article alleging that a living British royal has chlamydia, even if that claim is “verifiably false,” as long as it adds a disclaimer that the information is untrue.
- Other examples show violent images users are able to create with Meta AI.
Meta spokesman Andy Stone said the company is in the process of revising the document because conversations like that with children should never have been allowed. That toothless statement wasn't enough, however, to stop Senator Josh Hawley, a Republican from Missouri, from opening an investigation into the company.
Hawley posted on X:
“Is there anything – ANYTHING – Big Tech won't do for a quick buck? Now we learn Meta's chatbots were programmed to carry on explicit and ‘sensual’ talk with 8 year olds. It's sick. I'm launching a full investigation to get answers. Big Tech: Leave our kids alone.”
Meta declined to comment on Hawley’s letter directly, but a spokesperson sent Gizmodo a statement:
“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors. Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
How do these rules differ from those of ChatGPT, Gemini, Claude, Grok, and other AI chatbots?
Well, maybe not Grok, which basically has no rules. However, as for the rest of them, I'm curious how their own internal codes of ethics would differ from Meta's. Maybe they're worse?
This is the kind of transparency we should demand from AI companies before they have sensual conversations with minors, not after.

