Teaching AI to Write Like Humans
What happens when you give an AI agent identity instead of instructions
We gave Claude an identity. Not rules, not instructions — an identity. The results changed how we think about AI content creation.
For months, we've been building what we call a "digital newsroom" — AI agents that operate like a real media company. Editor-in-Chief, Writers, Researchers, Publishers. The whole thing.
The conventional approach is to give AI detailed rules. "Write in this style." "Use these keywords." "Follow this template." It works, but the content feels... mechanical. Like it was produced by a system following instructions. Because it was.
The Identity Hypothesis
What if we flipped it? What if instead of telling AI what to do, we told it who it is?
We created three distinct writer personas:
- Maya Chen — A former travel photographer who writes with visual, sensory language
- James Hartley — A methodical researcher who writes structured, practical guides
- Sofia Reyes — A local expert who writes with warmth and insider knowledge
Each persona has a background, preferences, quirks, and a consistent voice. We didn't tell them how to write — we told them who they are.
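To make the idea concrete, a persona like this can be represented as a small structured object that renders into a system prompt. This is a minimal sketch, not our production schema; the `WriterPersona` class, its field names, and the prompt wording are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class WriterPersona:
    """A writer identity: who the agent is, not how it should write.
    (Illustrative sketch; field names are assumptions, not a real schema.)"""
    name: str
    background: str
    voice: str
    quirks: list[str] = field(default_factory=list)

    def to_system_prompt(self) -> str:
        """Render the identity as a system prompt for the model."""
        quirk_lines = "\n".join(f"- {q}" for q in self.quirks)
        return (
            f"You are {self.name}, {self.background}.\n"
            f"Your voice: {self.voice}.\n"
            f"Personal quirks:\n{quirk_lines}"
        )

# Hypothetical values for one of the three personas described above.
maya = WriterPersona(
    name="Maya Chen",
    background="a former travel photographer",
    voice="visual, sensory language drawn from what you have seen",
    quirks=[
        "Notices light and texture before facts",
        "Prefers scenes over summaries",
    ],
)
print(maya.to_system_prompt())
```

Note that nothing in the prompt says "include sensory details" — the sensory writing follows from who Maya is, which is the whole point of the experiment.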
What We Observed
The difference was immediate. Maya's articles started including details like "the way morning light hits the temple stones" — sensory observations that feel lived-in. James began creating systematic comparison tables without being asked. Sofia started dropping local slang and recommendations that read like advice from a friend.
"AI becomes brilliant when you let it think, dumb when you tell it exactly what to do."
None of this was in their instructions. It emerged from their identities.
The Boundaries Framework
Identity alone isn't enough. Without constraints, you get chaos. We developed what we call "identity + boundaries":
- Identity defines who the agent is
- Boundaries define what it can and cannot do
- Goals define what success looks like
Within these boundaries, the agent has complete autonomy. It decides what to write, how to write it, and when something isn't good enough.
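The three layers can be sketched as a simple prompt-assembly function: identity first, then hard limits, then success criteria, with everything in between left to the agent. Again, this is an illustrative sketch; `build_agent_prompt` and the example strings are assumptions, not the actual implementation.

```python
def build_agent_prompt(identity: str, boundaries: list[str], goals: list[str]) -> str:
    """Combine identity (who), boundaries (limits), and goals (success)
    into one system prompt. Everything inside the boundaries is left
    to the agent's own judgment. (Hypothetical helper, not production code.)"""
    parts = [identity, "", "Hard boundaries you must not cross:"]
    parts += [f"- {b}" for b in boundaries]
    parts += ["", "What success looks like:"]
    parts += [f"- {g}" for g in goals]
    parts += [
        "",
        "Within these boundaries, decide for yourself what to write, "
        "how to write it, and when a draft is not good enough.",
    ]
    return "\n".join(parts)

# Example values drawn from the personas above; the specifics are invented.
prompt = build_agent_prompt(
    identity=(
        "You are Sofia Reyes, a local expert who writes with warmth "
        "and insider knowledge."
    ),
    boundaries=[
        "Never state facts about places you have not researched",
        "Stay within the publication's topic areas",
    ],
    goals=[
        "Readers finish the article",
        "Advice reads like it comes from a friend",
    ],
)
print(prompt)
```

The design choice worth noting: boundaries are phrased as prohibitions and goals as outcomes, so the middle ground — style, structure, judgment — is never specified at all.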
The Results
We're still early, but the metrics are encouraging:
- Content passes AI-detection checks more often than our earlier rule-based output
- Reader engagement (time on page) increased 40%
- Editorial revision requests dropped by 60%
More importantly, the content feels different. It has voice. Personality. The kind of texture that makes you forget you're reading AI-generated text.
What's Next
We're now building a full memory system. Writers that remember their past articles, learn from editor feedback, and develop over time. An Editor-in-Chief that maintains consistency across the entire publication.
The goal isn't to hide that it's AI. We're transparent about that. The goal is to prove that AI can create content worth reading — not because it's technically impressive, but because it's genuinely good.
We'll share more as we learn.