Ahnii!

You’ve read the articles warning you not to let AI take over your content. Ruth Doherty’s latest piece is one of the best: a clear-eyed breakdown of where AI helps and where it silently destroys your brand. This post shows you how to take that framework and turn it into an actual operating document for your content pipeline.

Why a Framework Without a Playbook Doesn’t Stick

Ruth’s core argument is sharp: AI is an efficiency engine, not a strategy engine. Use it for research, structuring, repurposing, and editing. Keep it away from messaging, customer research, and anything that requires your actual point of view.

That distinction is easy to agree with. It’s harder to enforce on a Tuesday afternoon when you’re behind on three social posts and the AI can draft all of them in 90 seconds.

A framework tells you what to believe. A playbook tells you what to do at 4pm when you’re tired and the publish queue is empty.

Define Your AI Boundary Table

The first thing your playbook needs is a boundary table. Two columns: what AI does, what you do.

| AI Does This | You Do This |
|---|---|
| Structure scattered notes into outlines | Decide what to write about and why |
| Generate zero-drafts from session transcripts | Write the actual post with your voice and lived experience |
| Convert one post into platform-specific social copy | Review and approve every piece before it publishes |
| Optimize metadata (titles, tags, descriptions) | Record yourself on camera, choose the angle, be present |
| Tighten prose, check consistency, flag voice drift | Define and evolve your brand voice and positioning |
| Research acceleration (trends, competitors, grants) | Make strategic decisions about what to build and who to serve |

This table isn’t aspirational. It’s operational. Every content task in your week should land on one side or the other. If something sits in the middle, you haven’t decided yet, and that ambiguity is where drift starts.
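If you want the table to be enforceable rather than aspirational, you can encode it as data. This is a hypothetical sketch in Python; the task names are illustrative, not from any real pipeline. The point is the failure mode: a task that lands on neither side raises an error instead of quietly drifting.

```python
# Hypothetical sketch: the boundary table as data, so ambiguity becomes
# an error rather than a judgment call. Task names are illustrative.
AI_TASKS = {
    "outline_notes",
    "zero_draft_from_transcript",
    "social_copy_variants",
    "metadata_options",
    "voice_drift_check",
    "research_scan",
}

HUMAN_TASKS = {
    "choose_topic",
    "write_post",
    "approve_publish",
    "record_video",
    "define_brand_voice",
    "strategic_decisions",
}

def boundary(task: str) -> str:
    """Return which side of the boundary table a task falls on."""
    if task in AI_TASKS:
        return "ai"
    if task in HUMAN_TASKS:
        return "human"
    # Undecided tasks are where drift starts: fail loudly.
    raise ValueError(f"Undecided task: {task!r}. Add it to the boundary table.")
```

So `boundary("write_post")` returns `"human"`, and anything you haven't classified yet stops the pipeline instead of slipping through.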

Add Human Gates

Ruth warns about “publishing AI-written content without human POV.” The fix isn’t vigilance. It’s process.

For every stage where AI generates output, define a human gate: a specific point where a person reads the thing and decides whether it ships.

Here’s how that looks in practice for a weekly content cadence:

Writing day: AI helps with outlines and zero-drafts. You write the post. Human gate: nothing publishes until you’ve read the final version.

Distribution day: AI generates platform-specific social copy from your post. Human gate: you read every variant before it enters your scheduling tool.

Newsletter day: AI can proofread. You write the newsletter. Human gate: this is 100% your voice. AI assists with mechanics only.

Video day: AI generates metadata options. You record yourself. Human gate: you pick the title and description from AI-generated options. You are the face and voice.

The pattern is consistent: AI handles the transformation layer. You own the creation layer and the approval layer.
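That three-layer pattern can be made mechanical. Here's a minimal sketch (the `Stage` type and stage names are invented for illustration, not any real tool's API) where publishing is simply impossible until a human has flipped the gate:

```python
# Illustrative sketch: each pipeline stage records whether its human gate
# has been passed, and publish() refuses anything ungated.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    ai_role: str
    gate_passed: bool = False  # flipped only by a human reviewer, never by the AI

def publish(stage: Stage) -> str:
    """Refuse to ship anything whose human gate hasn't been passed."""
    if not stage.gate_passed:
        raise PermissionError(f"{stage.name}: human gate not passed, nothing ships")
    return f"{stage.name}: published"

writing = Stage("writing", ai_role="zero-drafts and outlines")
writing.gate_passed = True  # you read the final version
print(publish(writing))  # "writing: published"
```

The design choice worth copying is that approval is a precondition, not a reminder: forgetting to review doesn't publish a default, it blocks.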

Draw Harder Lines for Sensitive Content

Ruth’s framework applies broadly, but some content carries more weight than others. If you’re building something that represents a community, a mission, or a set of values, AI involvement needs a shorter leash.

If you’re building something like OIATC, the Ontario Indigenous AI & Technology Council, the stakes are higher. Indigenous digital sovereignty is not something AI should be framing. The entire point of an organization like that is that communities govern their own digital infrastructure. Having AI write the messaging would undermine the premise.

Your version might be different: a founder’s origin story, a nonprofit’s mission statement, a community you represent. Whatever carries identity-level stakes gets a harder boundary. AI can format. AI can research. AI does not speak on behalf of communities.

But even when the content isn’t identity-level sensitive, the tools themselves can create problems.

Prevent the Tool Patchwork

One of Ruth’s sharpest observations is about “accidental AI patchwork.” Teams adopt tools informally, nobody coordinates, and suddenly you have three things generating social copy with different voice settings and no shared prompts.

Your playbook needs a tool inventory. Every AI tool in your pipeline, listed once, with its purpose and status:

| Tool | Purpose | Status |
|---|---|---|
| Your primary AI assistant | Writing assist, content skills, distribution | Active |
| Scheduling tool (e.g., Buffer) | Social scheduling | Active |
| Blog build system | Publish and deploy | Active |
| Social copy generator | Platform-specific variants | Active |
| Video metadata optimizer | YouTube titles, tags, descriptions | Active |
| That thing you installed three months ago | Unclear | Decide: wire or deprecate |

The last row is the important one. If a tool sits unused for a quarter, it’s either waiting to create confusion or it’s dead weight. Name it explicitly and decide.
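You can even automate the "name it explicitly" step. A rough sketch, assuming you log a last-used date per tool (the tool names and dates here are made up):

```python
# Sketch under assumptions: each tool records a last-used date, and anything
# untouched for a quarter gets flagged "decide: wire or deprecate".
from datetime import date, timedelta

QUARTER = timedelta(days=90)

inventory = {
    "ai_assistant": date(2025, 6, 1),
    "scheduler":    date(2025, 5, 20),
    "mystery_tool": date(2025, 2, 1),  # installed a while ago, unused since
}

def stale_tools(tools: dict, today: date) -> list:
    """Return tools whose last use is more than a quarter in the past."""
    return [name for name, last_used in tools.items()
            if today - last_used > QUARTER]

print(stale_tools(inventory, date(2025, 6, 10)))  # flags "mystery_tool"
```

Run it as part of the quarterly review and the dead weight names itself.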

Run a Quarterly Review

A playbook that never gets reviewed drifts just like content that never gets edited. Four questions, once every three months:

  1. Is any tool on the list unused? Remove it or document why it stays.
  2. Has a new tool been adopted informally? Add it with clear boundaries.
  3. Has AI output been published without human review? Fix the gap.
  4. Does your content still sound like you? Read your last four posts aloud. If they’re interchangeable with any other blog in your niche, something slipped.

What This Looks Like After One Session

Here’s the encouraging part: a complete AI playbook fits in a single sitting. Boundary tables, human gates, brand-specific danger zones, a tool inventory, a quarterly review checklist. One session, one document.

Here’s a preview of what the playbook covers. The full version is on GitHub.

The boundary table draws the line between AI work and human work for every content task in your week.

Per-brand voice rules prevent AI from blending your voices when you operate across multiple brands or audiences. Each brand gets its own prompt context and tone markers.

Pipeline stages with human gates map every step of your weekly cadence (writing, distribution, newsletter, video) to where AI helps and where you approve:

| Stage | AI Role | Human Gate |
|---|---|---|
| Writing | Zero-drafts, outlines, structure | You write the post. Nothing publishes unread. |
| Distribution | Platform-specific social copy | You review every variant before scheduling. |
| Newsletter | Proofreading only | You write it. 100% your voice. |
| Video | Metadata optimization | You record. You pick the title. |

Danger zones name the specific content types where AI must not lead: community messaging, proposals, customer research, thought leadership without lived experience.

Tool inventory lists every AI tool in the pipeline with its purpose and status, preventing the silent accumulation Ruth warns about.

Quarterly review keeps the playbook honest with four questions you run every three months.

The playbook goes in your brand directory, right next to your voice rules and platform templates. It’s not a manifesto. It’s a reference document you check when you’re tired and tempted to let the AI do more than it should.

Ruth’s framework gives you the “why.” The playbook gives you the “what to do about it at 4pm on a Tuesday.”

If you’re nodding along to articles about AI misuse but haven’t drawn your own lines yet, this is the afternoon to do it.

Baamaapii