Source: business/marketing/qc-agents/copy-qc.md

Copy QC — AI Pattern Detection

What This Does

Scans written content for AI-generated language patterns before publishing. This is a quality gate, not a style guide. Every pattern listed here has been caught in actual Advisory OS drafts and corrected by hand.

Run this against: LinkedIn posts, hand-raiser posts, DM sequences, emails, Substack Notes, tool copy, campaign briefs — any written content before it ships.

Companion files: Read reference/core/voice.md before running QC. The final test for every flagged line is: would the writer say this in a real conversation?


Severity Levels

P1 (structural AI tell — readers will clock this as AI-generated): must rewrite before shipping.
P2 (common AI pattern — weakens credibility on repeated exposure): rewrite unless an exception applies.
P3 (style concern — individually minor, compounds with others): rewrite if 2+ P3s appear in the same piece.

How to Use

After drafting any written content, run this QC pass. For each pattern, scan the draft line by line. If you find a match, rewrite the line before shipping. Do not ship content that fails P1 or P2 checks.

The patterns are grouped by type. Each pattern includes what it looks like, why it's a problem, and how to fix it.


Pattern 1: Twinning — P1

What it is: Two sentences with mirrored structure where the second negates or corrects the first.

Examples caught in drafts:

Why it's a problem: This is the single most common AI writing pattern. It sounds clever on first read and hollow on second. Humans don't naturally correct themselves in mirrored syntax. It reads as a copywriter performing insight.

How to fix: Fold the reframe into a single declarative sentence. Or just state the second half — the first half (the wrong belief) doesn't need to be named if the right framing is strong enough.

Detection rule: Any two consecutive sentences where one says "not X" and the next says "Y" using the same sentence structure. Also catches "It's not about X. It's about Y." and "The problem isn't X. The problem is Y."
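The detection rule can be roughed out as a first-pass script. This is a sketch, not the gate itself: `flag_twinning` is a hypothetical helper, and the regex only catches the literal "It's not about X. It's about Y." surface forms, not structurally mirrored pairs, which need a parser.

```python
import re

# Illustrative pattern: only the literal surface forms named in the
# detection rule. True twinning (mirrored syntax) needs deeper analysis.
TWINNING = re.compile(
    r"(it'?s not about [^.]+\. it'?s about [^.]+\.)"
    r"|(the problem isn'?t [^.]+\. the problem is [^.]+\.)",
    re.IGNORECASE,
)

def flag_twinning(text: str) -> list[str]:
    """Return the matched twinning spans found in text."""
    return ["".join(match) for match in TWINNING.findall(text)]
```

Anything this returns is a P1: rewrite before shipping.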

Related: Patterns 3, 4, 7 — all variations of AI's correction-revelation structure. If multiple patterns from this family appear in the same piece, the compound effect is severe even if each individual instance is mild.


Pattern 2: Three-Beat Parallel Lists — P2

What it is: Three items in a row with identical syntactic structure. Often used to build rhythm toward a point.

Examples caught in drafts:

Why it's a problem: Three-beat rhythm is a signature of AI-generated prose. Humans vary their sentence structure naturally. When three items appear with identical construction, the writing sounds performed rather than spoken.

How to fix: Vary the length and structure of each item. Make one short, one longer, one that breaks the pattern. Or cut to two items. Or fold them into a single sentence.

A good fix varies length (short, medium, long) and structure (different phrasing, different emphasis) across the items.

Exception — deliberate data patterns: When the parallel structure demonstrates a literal repeating pattern from a scoring system, data set, or observed sequence, the repetition IS the mechanism. Example: "2s become scope creep. 2s become payment chasing. 2s become 'I'm disappointed in the partnership.'" — the repetition of "2s become" reinforces the scoring pattern. Keep when the repetition serves the proof; vary when it's decorative.

Detection rule: Three consecutive phrases, clauses, or sentences with the same grammatical structure. Especially: three items starting with the same word or part of speech, or three items with identical [article] + [noun] + [relative clause] form.
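A crude automated check, under the assumption that a shared fingerprint (same opening word, same word count) across three consecutive sentences approximates "identical grammatical structure." Real parallelism detection needs part-of-speech tagging, so treat `three_beat` as a hypothetical first pass that will miss paraphrased parallels.

```python
import re

def fingerprint(sentence: str) -> tuple:
    """Crude structural proxy: opening word plus token count."""
    words = sentence.split()
    return (words[0].lower() if words else "", len(words))

def three_beat(text: str) -> bool:
    """True if three consecutive sentences share the same fingerprint."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    prints = [fingerprint(s) for s in sentences]
    return any(
        prints[i] == prints[i + 1] == prints[i + 2]
        for i in range(len(prints) - 2)
    )
```

Hits still go through the exception by hand: deliberate data patterns stay.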


Pattern 3: The Mirror Reversal — P1

What it is: A sentence where the second half reverses or mirrors the first half using the same key words.

Examples caught in drafts:

Why it's a problem: It's wordplay posing as insight. The reversal creates the feeling of a realization, but it's a structural trick — swap two words and call it wisdom. Readers feel manipulated even when they can't name why.

How to fix: Say what you mean without the mirror. If the insight is real, it doesn't need the wordplay to land.

Exception: Mirror phrases that describe an actual observed contrast, not a rhetorical one. "Their call starts at the problem. Yours starts at the résumé." was kept because it describes two real, different experiences. The test: would the writer say this on a call to describe something they've actually seen? If yes, it might be fine. If it only works as a written line, kill it.

Detection rule: Two clauses or sentences that reuse the same root word in reversed positions. "Visibility/visible," "start at X / start at Y," "build the work / work the build."
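A rough heuristic, assuming exact word reuse: it flags two sentences that share two content words in swapped order. It will miss root-word variants like "visibility/visible" (that needs stemming), so treat `mirror_reversal` as an illustrative sketch.

```python
def mirror_reversal(a: str, b: str) -> bool:
    """Crude check: two sentences reuse content words in swapped order."""
    wa = [w.strip(".,!?").lower() for w in a.split()]
    wb = [w.strip(".,!?").lower() for w in b.split()]
    shared = []
    for w in wa:
        # content words (over 3 chars) present in both, in a's order
        if len(w) > 3 and w in wb and w not in shared:
            shared.append(w)
    return any(
        wb.index(x) > wb.index(y)          # order flipped in b
        for i, x in enumerate(shared)
        for y in shared[i + 1:]
    )
```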

Related: Patterns 1, 4, 7 — same correction-revelation family.


Pattern 4: Not Because X. Because Y. — P2

What it is: A negation followed by a correction, presented as revelation.

Examples caught in drafts:

Why it's a problem: This construction implies the writer is correcting the reader's assumption, which is presumptuous. It also follows a predictable AI cadence: deny the obvious, reveal the hidden. Humans say "the reason is..." or just state the cause.

How to fix: Drop the negation. Just state the actual reason.

Detection rule: Sentence starting with "Not because" followed by a sentence starting with "Because" or "But because."
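The detection rule maps almost directly to a regex. Sketch only: `has_not_because` is a hypothetical name, and the pattern assumes the two sentences are adjacent and separated by ordinary punctuation.

```python
import re

# "Not because X. Because Y." plus the "But because" variant.
NOT_BECAUSE = re.compile(
    r"\bnot because\b[^.!?]*[.!?]\s+(?:but )?because\b",
    re.IGNORECASE,
)

def has_not_because(text: str) -> bool:
    return bool(NOT_BECAUSE.search(text))
```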

Related: Patterns 1, 3, 7 — same correction-revelation family.


Pattern 5: The Question → Revelation Arc — P1

What it is: A story beat where the writer asks a question (often in dialogue), pauses for effect, then delivers the insight as if it just occurred.

Examples caught in drafts:

Why it's a problem: It reads like screenplay stage directions. Real conversations don't have narrative pauses where someone "goes quiet" before the writer delivers the punchline. The structure is AI mimicking TED talk choreography.

How to fix: Report the observation without directing the scene. If someone's reaction matters, describe what they said or did — not the dramatic silence.

Detection rule: "He/she/they went quiet" or "went silent" or "paused." Any story beat that narrates a dramatic silence followed by a revelation.
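This one is the easiest to automate because the tell is lexical. The sketch below covers the pronoun-plus-pause phrasings from the detection rule; the name and coverage are assumptions, so extend the alternation as new stage directions get caught in drafts.

```python
import re

SILENCE_BEAT = re.compile(
    r"\b(?:he|she|they)\s+(?:went\s+(?:quiet|silent)|paused)\b",
    re.IGNORECASE,
)

def has_silence_beat(text: str) -> bool:
    """True if the text narrates a dramatic pause before a reveal."""
    return bool(SILENCE_BEAT.search(text))
```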


Pattern 6: Over-Validation in DMs — P2

What it is: Responding to someone's answer by telling them how insightful or significant it was.

Examples caught in drafts:

Why it's a problem: It's performative encouragement. The person said something normal and you're treating it like they had a breakthrough. It reads as condescending and fake — like an AI that's been trained to validate.

How to fix: Respond to the substance of what they said, not to the quality of their answer. Be direct and useful, not praising.

Detection rule: DM responses starting with "That's exactly..." or "That's a great..." or "What a..." Any sentence that evaluates the quality of the other person's response before addressing its content.
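A sketch of the opener check, assuming straight apostrophes and that the grading language appears at the start of the reply, both simplifications. The broader rule (any sentence that evaluates before addressing content) still needs a human read.

```python
# Illustrative opener list; extend it as new validation tics get caught.
VALIDATION_OPENERS = ("that's exactly", "that's a great", "what a ")

def flags_validation(reply: str) -> bool:
    """True if a DM reply opens by grading the other person's answer."""
    return reply.strip().lower().startswith(VALIDATION_OPENERS)
```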


Pattern 7: The Formulaic Setup — P2

What it is: "Most people think X. The real thing is Y." Used to create a false before/after in the reader's mind.

Examples caught in drafts:

Why it's a problem: It positions the writer as the person who sees what everyone else misses. That's fine once — but AI uses this as a default structure for every insight. After two or three of these in a single piece, the reader feels lectured.

How to fix: State the insight directly. If it's genuinely counterintuitive, the reader will feel the surprise without you scaffolding it with "most people think."

Exception — genuine pattern reveals: When the "common belief" is something the writer has personally observed clients saying or doing (not an assumed reader belief), and the correction is based on specific evidence or client data, the structure may be appropriate. The test: Is the "most people think" based on real quotes from diagnostic calls, or is it a strawman the AI constructed? Real examples pass. Hypothetical corrections don't.

Example that passes: "Most advisory practices evaluate prospects by how the conversation feels." — This is an observed behavior, documented across multiple clients, supported by scoring data.

Example that fails: "Most people think they need better marketing." — This is a generic assumption about the reader. No evidence supports it.

Frequency limit: Maximum one pattern-reveal setup per post, even when the exception applies. Two in the same piece reads as lecture regardless of evidence quality.

Detection rule: "Most people/firms/owners think..." followed by a correction. "The common [wisdom/advice/approach] is..." followed by a counter. "Everyone assumes..." followed by "actually" or "the real [X]."
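Because the frequency limit allows one setup even when the exception applies, it helps to count matches rather than just flag them. A sketch, with illustrative phrase coverage seeded from the detection rule:

```python
import re

SETUP = re.compile(
    r"\b(?:most (?:people|firms|owners) think"
    r"|the common (?:wisdom|advice|approach) is"
    r"|everyone assumes)\b",
    re.IGNORECASE,
)

def count_setups(text: str) -> int:
    """Count pattern-reveal setups; more than one is a rewrite."""
    return len(SETUP.findall(text))
```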

Related: Patterns 1, 3, 4 — same correction-revelation family.


Pattern 8: Dramatic Single-Word Beats — P3

What it is: Single words or very short fragments on their own line for dramatic effect.

Examples caught in drafts:

Why it's a problem: Used sparingly, the technique works. But AI overuses it. When every section ends with a dramatic fragment, the technique loses power and starts reading as affectation.

How to use it correctly: One dramatic beat per post, maximum. It should be the single most important line. If you've already used this technique once in a piece, the next one gets cut.

Detection rule: More than one instance of a single-word or sub-five-word sentence used for dramatic emphasis in the same piece. The second one goes.
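Short sentences are countable even though "dramatic" is a judgment call, so this sketch uses length alone as a proxy and will over-flag ordinary short sentences. The human pass decides which hit is the keeper.

```python
import re

def dramatic_beats(text: str, max_words: int = 4) -> list[str]:
    """Return standalone sentences of fewer than five words."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [s for s in sentences if 0 < len(s.split()) <= max_words]
```

Two or more hits means the second one goes, per the rule above.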


Pattern 9: Rhetorical Hand-Holding — P3

What it is: Phrases that guide the reader's reaction instead of trusting them to have it.

Examples caught in drafts:

Why it's a problem: It tells the reader what to feel instead of letting the content do the work. It's the written equivalent of elbowing someone during a movie.

How to fix: Delete the phrase. If the preceding line is strong, it doesn't need the nudge. If it's not strong enough without the nudge, rewrite the line.

Detection rule: Any phrase that exists solely to direct the reader's emotional response. "Right?" "See what I mean?" "Here's what most people miss." "Think about it."
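Since these nudges are fixed strings, a lookup table is enough for a first pass. The list is illustrative, seeded from the detection rule; add to it as new hand-holds get caught.

```python
HAND_HOLDING = (
    "right?",
    "see what i mean?",
    "here's what most people miss",
    "think about it",
)

def find_hand_holding(text: str) -> list[str]:
    """Return every hand-holding phrase present in the text."""
    lowered = text.lower()
    return [phrase for phrase in HAND_HOLDING if phrase in lowered]
```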


Pattern 10: The Trying-to-Be-Quotable Line — P2

What it is: A sentence that's clearly been crafted to be screenshot-worthy or repeatable, at the expense of sounding natural.

Examples caught in drafts:

Why it's a problem: When a line is trying to be an aphorism, it usually sounds like one — and aphorisms from AI sound like fortune cookies. The best quotable lines are ones that describe something specific and concrete, not abstract metaphors.

How to tell the difference: Read the line out loud. Would the writer say this in a real conversation? "Your track record resets to zero with every new prospect" — yes, she might say that to someone in a diagnostic call. "Effort and value live at different addresses" — no one talks like that.

Detection rule: Any sentence that uses metaphor or abstraction to make a point that would be stronger stated plainly. Especially: sentences with personification ("value lives..."), extended metaphors, or wordplay that sacrifices clarity for cleverness.


Pattern 11: Identical Sentence Openers — P3

What it is: Multiple consecutive sentences or paragraphs starting with the same word.

Examples caught in drafts:

Why it's a problem: Anaphora (intentional repetition of opening words) is a valid rhetorical device, but AI defaults to it constantly. When four sentences in a row start with "Did you," it reads as a writing exercise, not a conversation.

How to fix: Vary the openers. Change the subject, change the sentence structure, start mid-thought.

Exception — deliberate data patterns: Same as Pattern 2: when identical openers demonstrate a scoring pattern or data cluster ("2s become... 2s become... 2s become..."), the repetition serves the proof. Keep when the repetition IS the insight.

Detection rule: Three or more consecutive sentences starting with the same word or phrase. Two is fine. Three is a pattern.
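The three-or-more threshold makes this check mechanical. A sketch, assuming sentences end in ordinary punctuation; `repeated_openers` reports each opener the moment its run hits the limit.

```python
import re

def repeated_openers(text: str, limit: int = 3) -> list[str]:
    """Return opener words that start `limit`+ consecutive sentences."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.split()]
    openers = [s.split()[0].lower().strip(",") for s in sentences]
    flagged, run = [], 1
    for prev, cur in zip(openers, openers[1:]):
        run = run + 1 if cur == prev else 1
        if run == limit:
            flagged.append(cur)
    return flagged
```

The deliberate-data exception carries over: a flagged run that demonstrates a scoring pattern stays.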


The Compound Check

Individual patterns are easy to catch. The harder problem is when a piece passes on each individual pattern but still feels AI-generated. That happens when:

Multiple mild instances compound. One twinning sentence + one three-beat list + one dramatic fragment = a piece that reads as AI even though no single line is egregious. If you flag 3+ P3 violations in the same piece, treat the compound as P1 — something structural needs to change.

The cadence is too even. AI produces sentences of similar length with similar rhythm. Human writing has jagged edges — a long sentence followed by a short one, then a medium one, then two short ones. If a piece feels metronomic, vary the sentence lengths even if no specific pattern triggers.
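Metronomic cadence is measurable: compute the spread of sentence lengths. The threshold below is an assumption (tune it against drafts you have already judged by ear), and the function name is hypothetical.

```python
import re
import statistics

def cadence_is_even(text: str, min_stdev: float = 3.0) -> bool:
    """True if sentence lengths are suspiciously uniform."""
    lengths = [
        len(s.split())
        for s in re.split(r"(?<=[.!?])\s+", text)
        if s.split()
    ]
    if len(lengths) < 4:
        return False  # too few sentences to judge rhythm
    return statistics.stdev(lengths) < min_stdev
```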

Every insight is positioned as a correction. If the piece follows a repeating structure of [common belief] → [actual truth], the reader feels lectured even if each individual reframe is well-written. Mix in observations, stories, and direct statements that don't follow the correction structure. Maximum one correction-revelation per piece (see Pattern 7 frequency limit).

Correction-revelation family check: Patterns 1, 3, 4, and 7 are all variations of the same underlying move: AI correcting a belief. If ANY two patterns from this family appear in the same piece, flag it as compound P1 regardless of individual severity.
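Both compound rules, the 3+ P3 escalation and the two-family-patterns escalation, reduce to simple aggregation once per-pattern hits are tallied. A sketch; the hit-count input format is an assumption.

```python
P3_PATTERNS = {8, 9, 11}   # the P3-severity patterns above
FAMILY = {1, 3, 4, 7}      # the correction-revelation family

def compound_verdict(hits: dict[int, int]) -> str:
    """hits maps pattern number -> match count. Returns 'P1' or 'pass'."""
    p3_total = sum(hits.get(p, 0) for p in P3_PATTERNS)
    family_present = sum(1 for p in FAMILY if hits.get(p, 0) > 0)
    if p3_total >= 3 or family_present >= 2:
        return "P1"
    return "pass"
```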

The "read it out loud" test. Read the complete piece out loud. Every line that makes you slow down, pause awkwardly, or shift into a "presentation voice" is probably AI patterning. Natural writing reads at conversation speed.

The narrative trackability check. AI generates plausible-sounding examples and stitches them together without checking whether a reader can follow the thread. This shows up as: multiple characters with ambiguous pronouns, unexplained number shifts between examples, and scene transitions with no bridge. The patterns above catch AI language. This catches AI storytelling.

Specifically:

This check applies to any piece with characters, stories, or multiple examples — LinkedIn posts, articles, emails, briefing scripts, Substack notes.


QC Checklist

Run after every draft, before shipping:

P1 — Must Fix

P2 — Rewrite Unless Exception Applies

P3 — Fix If Multiple

Piece-Level Checks

The Final Question

If any P1 fails, rewrite before shipping. If any P2 fails without a valid exception, rewrite before shipping. If 2+ P3s fail, rewrite at least one. "I'll fix it later" is how bad defaults propagate.