Copy QC — AI Pattern Detection
What This Does
Scans written content for AI-generated language patterns before publishing. This is a quality gate, not a style guide. Every pattern listed here has been caught in actual Advisory OS drafts and corrected by hand.
Run this against: LinkedIn posts, hand-raiser posts, DM sequences, emails, Substack Notes, tool copy, campaign briefs — any written content before it ships.
Companion file: Read reference/core/voice.md before running QC. The final test for every flagged line is: would the writer say this in a real conversation?
Severity Levels
| Level | Meaning | Action |
|---|---|---|
| P1 | Structural AI tell — readers will clock this as AI-generated | Must rewrite before shipping |
| P2 | Common AI pattern — weakens credibility on repeated exposure | Rewrite unless exception applies |
| P3 | Style concern — individually minor, compounds with others | Rewrite if 2+ P3s appear in the same piece |
How to Use
After drafting any written content, run this QC pass. For each pattern, scan the draft line by line. If you find a match, rewrite the line before shipping. Do not ship content that fails P1 or P2 checks.
The patterns are grouped by type. Each pattern includes what it looks like, why it's a problem, and how to fix it.
Pattern 1: Twinning — P1
What it is: Two sentences with mirrored structure where the second negates or corrects the first.
Examples caught in drafts:
- "You don't have a case study problem. You have an extraction problem."
- "You didn't lose on expertise. You lost on evidence."
- "You don't need a dramatic transformation. You need one real outcome."
- "It's not a sales problem. It's an architecture problem."
Why it's a problem: This is the single most common AI writing pattern. It sounds clever on first read and hollow on second. Humans don't naturally correct themselves in mirrored syntax. It reads as a copywriter performing insight.
How to fix: Fold the reframe into a single declarative sentence. Or just state the second half — the first half (the wrong belief) doesn't need to be named if the right framing is strong enough.
- Before: "You don't have a case study problem. You have an extraction problem."
- After: "That's an extraction problem, not a case study problem."
- Or: "The outcomes exist. They just haven't been extracted."
Detection rule: Any two consecutive sentences where one says "not X" and the next says "Y" using the same sentence structure. Also catches "It's not about X. It's about Y." and "The problem isn't X. The problem is Y."
Related: Patterns 3, 4, 7 — all variations of AI's correction-revelation structure. If multiple patterns from this family appear in the same piece, the compound effect is severe even if each individual instance is mild.
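This rule is mechanical enough to pre-screen with a script. A minimal sketch, assuming drafts arrive as plain text; the negation list and the same-opener check are simplifications of the rule above, so treat hits as candidates, not verdicts:

```python
import re

NEGATION = re.compile(r"\b(?:don't|doesn't|didn't|isn't|aren't|not)\b", re.IGNORECASE)

def find_twinning(text):
    """Return (sentence, sentence) pairs that look like twinning:
    the first sentence negates, and both open with the same word."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    hits = []
    for a, b in zip(sentences, sentences[1:]):
        same_opener = a.split()[0].lower() == b.split()[0].lower()
        if NEGATION.search(a) and same_opener:
            hits.append((a, b))
    return hits
```

Rephrased twinning with different openers will slip through, so the script supplements the manual pass rather than replacing it.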
Pattern 2: Three-Beat Parallel Lists — P2
What it is: Three items in a row with identical syntactic structure. Often used to build rhythm toward a point.
Examples caught in drafts:
- "Never tracked. Never noticed. Never tested."
- "You solved the problem, moved on, took the next call."
- "The client who renewed without being asked. The referral that showed up unprompted. The expansion that happened because trust was already built."
- "More posts, more content, more outreach."
Why it's a problem: Three-beat rhythm is the signature of AI-generated prose. Humans vary their sentence structure naturally. When three items appear with identical construction, the writing sounds performed rather than spoken.
How to fix: Vary the length and structure of each item. Make one short, one longer, one that breaks the pattern. Or cut to two items. Or fold them into a single sentence.
- Before: "The client who renewed without being asked. The referral that showed up unprompted. The expansion that happened because trust was already built."
- After: "The client who almost left and didn't. Or the quiet expansion nobody announced. Or the referral that showed up because someone trusted you enough to stake their own reputation on it."
Note how the fix varies length (short, medium, long) and structure (different phrasing, different emphasis) across the three items.
Exception — deliberate data patterns: When the parallel structure demonstrates a literal repeating pattern from a scoring system, data set, or observed sequence, the repetition IS the mechanism. Example: "2s become scope creep. 2s become payment chasing. 2s become 'I'm disappointed in the partnership.'" — the repetition of "2s become" reinforces the scoring pattern. Keep when the repetition serves the proof; vary when it's decorative.
Detection rule: Three consecutive phrases, clauses, or sentences with the same grammatical structure. Especially: three items starting with the same word or part of speech, or three items with identical [article] + [noun] + [relative clause] form.
Pattern 3: The Mirror Reversal — P1
What it is: A sentence where the second half reverses or mirrors the first half using the same key words.
Examples caught in drafts:
- "They don't need more visibility. They need to make the work they've already done visible."
- "Effort and value live at different addresses."
- "Their call starts at the problem. Yours starts at the résumé."
Why it's a problem: It's wordplay posing as insight. The reversal creates the feeling of a realization, but it's a structural trick — swap two words and call it wisdom. Readers feel manipulated even when they can't name why.
How to fix: Say what you mean without the mirror. If the insight is real, it doesn't need the wordplay to land.
- Before: "They don't need more visibility. They need to make the work they've already done visible."
- After: "The work is done. Nobody outside the practice can see it."
Exception: Mirror phrases that describe an actual observed contrast, not a rhetorical one. "Their call starts at the problem. Yours starts at the résumé." was kept because it describes two real, different experiences. The test: would the writer say this on a call to describe something they've actually seen? If yes, it might be fine. If it only works as a written line, kill it.
Detection rule: Two clauses or sentences that reuse the same root word in reversed positions. "Visibility/visible," "start at X / start at Y," "build the work / work the build."
Related: Patterns 1, 4, 7 — same correction-revelation family.
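A crude way to surface mirror candidates: flag sentence pairs that share a content-word stem. The stopword list and the five-character prefix stemming are assumptions, and any repeated long word will trigger a hit, so this only queues lines for the would-they-say-it test above:

```python
STOPWORDS = {"the", "a", "an", "to", "of", "in", "at", "is", "are",
             "it", "it's", "they", "you", "your", "that", "this"}

def shared_roots(first, second, prefix=5):
    """Content-word stems (crude prefix truncation) that appear in
    both sentences, e.g. 'visib' for visibility/visible."""
    def stems(sentence):
        words = {w.lower().strip(".,!?\"'") for w in sentence.split()}
        return {w[:prefix] for w in words - STOPWORDS if len(w) >= prefix}
    return stems(first) & stems(second)
```

An empty set clears the pair; a non-empty set means read both sentences aloud before keeping them.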
Pattern 4: Not Because X. Because Y. — P2
What it is: A negation followed by a correction, presented as revelation.
Examples caught in drafts:
- "Not because you're bad at marketing. Because the outcome felt routine from the inside."
- "Not because they're lazy. Because building feels slower than doing."
- "Not because they couldn't pay — they had the money."
Why it's a problem: This construction implies the writer is correcting the reader's assumption, which is presumptuous. It also follows a predictable AI cadence: deny the obvious, reveal the hidden. Humans say "the reason is..." or just state the cause.
How to fix: Drop the negation. Just state the actual reason.
- Before: "Not because you're bad at marketing. Because the outcome felt routine."
- After: "The outcome felt routine. You solved it and moved on."
Detection rule: Sentence starting with "Not because" followed by a sentence starting with "Because" or "But because."
Related: Patterns 1, 3, 7 — same correction-revelation family.
Pattern 5: The Question → Revelation Arc — P1
What it is: A story beat where the writer asks a question (often in dialogue), pauses for effect, then delivers the insight as if it just occurred.
Examples caught in drafts:
- "I asked one question: [question]. He went quiet. Not because [obvious]. Because [insight]."
- "Think about it: [setup]. Now think about this: [reveal]."
Why it's a problem: It reads like screenplay stage directions. Real conversations don't have narrative pauses where someone "goes quiet" before the writer delivers the punchline. The structure is AI mimicking TED talk choreography.
How to fix: Report the observation without directing the scene. If someone's reaction matters, describe what they said or did — not the dramatic silence.
- Before: "I asked: 'When did a client last see proof of what you did for them?' He went quiet."
- After: "Most practice owners can't point to a single documented outcome from the last year."
Detection rule: "He/she/they went quiet" or "went silent" or "paused." Any story beat that narrates a dramatic silence followed by a revelation.
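The silence beat is a fixed phrase list, so it scans cleanly. One caveat: "paused" also appears in innocent contexts ("they paused the campaign"), so every hit still gets a human read:

```python
import re

SILENCE = re.compile(
    r"\b(?:he|she|they)\s+(?:went\s+(?:quiet|silent)|paused)\b",
    re.IGNORECASE,
)

def dramatic_silences(text):
    """Return every narrated-silence phrase found in the draft."""
    return [m.group(0) for m in SILENCE.finditer(text)]
```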
Pattern 6: Over-Validation in DMs — P2
What it is: Responding to someone's answer by telling them how insightful or significant their answer was.
Examples caught in drafts:
- "That's exactly the kind of outcome most practice owners overlook."
- "That's a powerful example of how real outcomes hide in plain sight."
- "Great question — most people don't think to ask that."
Why it's a problem: It's performative encouragement. The person said something normal and you're treating it like they had a breakthrough. It reads as condescending and fake — like an AI that's been trained to validate.
How to fix: Respond to the substance of what they said, not to the quality of their answer. Be direct and useful, not praising.
- Before: "That's exactly the kind of outcome most practice owners overlook because it felt routine."
- After: "That one is worth looking at closely."
Detection rule: DM responses starting with "That's exactly..." or "That's a great..." or "What a..." Any sentence that evaluates the quality of the other person's response before addressing its content.
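Because this rule keys on how a reply opens, it checks each DM independently. The opener list mirrors the examples above; extend it as new validation phrases get caught in drafts:

```python
import re

VALIDATION_OPENERS = re.compile(
    r"^\s*(?:That's exactly|That's a great|What a|Great question)\b",
    re.IGNORECASE,
)

def is_over_validation(reply):
    """True when a DM reply opens by rating the other person's answer."""
    return bool(VALIDATION_OPENERS.match(reply))
```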
Pattern 7: The Formulaic Setup — P2
What it is: "Most people think X. The real thing is Y." Used to create a false before/after in the reader's mind.
Examples caught in drafts:
- "Most people think they need better marketing. What they actually need is documented proof."
- "Everyone assumes the problem is visibility. The real problem is extraction."
- "The common advice is X. Here's what actually works."
Why it's a problem: It positions the writer as the person who sees what everyone else misses. That's fine once — but AI uses this as a default structure for every insight. After two or three of these in a single piece, the reader feels lectured.
How to fix: State the insight directly. If it's genuinely counterintuitive, the reader will feel the surprise without you scaffolding it with "most people think."
- Before: "Most people think they need more case studies. What they actually need is a system for extracting the outcomes they already have."
- After: "The outcomes already exist. They've just never been extracted."
Exception — genuine pattern reveals: When the "common belief" is something the writer has personally observed clients saying or doing (not an assumed reader belief), and the correction is based on specific evidence or client data, the structure may be appropriate. The test: Is the "most people think" based on real quotes from diagnostic calls, or is it a strawman the AI constructed? Real examples pass. Hypothetical corrections don't.
Example that passes: "Most advisory practices evaluate prospects by how the conversation feels." — This is an observed behavior, documented across multiple clients, supported by scoring data.
Example that fails: "Most people think they need better marketing." — This is a generic assumption about the reader. No evidence supports it.
Frequency limit: Maximum one pattern-reveal setup per post, even when the exception applies. Two in the same piece reads as lecture regardless of evidence quality.
Detection rule: "Most people/firms/owners think..." followed by a correction. "The common [wisdom/advice/approach] is..." followed by a counter. "Everyone assumes..." followed by "actually" or "the real [X]."
Related: Patterns 1, 3, 4 — same correction-revelation family.
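Both the pattern and its frequency limit can be pre-screened. A sketch that counts sentence-opening setups; the phrase list is a starting point taken from the rule above, and it says nothing about whether the exception applies, so evidence quality stays a human call:

```python
import re

SETUP_PATTERNS = [
    re.compile(r"\bMost (?:people|firms|owners|practices) (?:think|assume)\b"),
    re.compile(r"\bThe common (?:wisdom|advice|approach) is\b"),
    re.compile(r"\bEveryone assumes\b"),
]

def count_setups(text):
    return sum(len(p.findall(text)) for p in SETUP_PATTERNS)

def over_setup_limit(text, limit=1):
    """True when the piece exceeds the one-setup-per-post limit."""
    return count_setups(text) > limit
```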
Pattern 8: Dramatic Single-Word Beats — P3
What it is: Single words or very short fragments on their own line for dramatic effect.
Examples caught in drafts:
- "Per month. Every month. Forever."
- "Gone."
- "That's the gap."
Why it's a problem: Used sparingly, this works. But AI overuses it. When every section ends with a dramatic fragment, the technique loses power and starts reading as affectation.
How to use it correctly: One dramatic beat per post, maximum. It should be the single most important line. If you've already used this technique once in a piece, the next one gets cut.
Detection rule: More than one instance of a single-word or sub-five-word sentence used for dramatic emphasis in the same piece. The second one goes.
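A rough counter, assuming word count stands in for "dramatic": short sentences are not automatically dramatic beats, so hits are candidates for the one-per-post budget, not automatic cuts:

```python
import re

def dramatic_beats(text, max_words=4):
    """Sentences of four words or fewer, the sub-five-word proxy."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [s for s in sentences if len(s.split()) <= max_words]

def second_beat_goes(text):
    """True when more than one candidate beat appears in the piece."""
    return len(dramatic_beats(text)) > 1
```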
Pattern 9: Rhetorical Hand-Holding — P3
What it is: Phrases that guide the reader's reaction instead of trusting them to have it.
Examples caught in drafts:
- "Logical, right?"
- "Sound familiar?"
- "Here's the thing."
- "Think about that for a second."
- "Let that sink in."
Why it's a problem: It tells the reader what to feel instead of letting the content do the work. It's the written equivalent of elbowing someone during a movie.
How to fix: Delete the phrase. If the preceding line is strong, it doesn't need the nudge. If it's not strong enough without the nudge, rewrite the line.
Detection rule: Any phrase that exists solely to direct the reader's emotional response. "Right?" "See what I mean?" "Here's what most people miss." "Think about it."
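Hand-holding is the easiest pattern to automate because it is a closed phrase list. The patterns below come from the examples above; the word boundaries keep "right?" from matching inside words like "alright?":

```python
import re

HAND_HOLDING = [
    r"\bright\?",
    r"\bsound familiar\?",
    r"\bhere's the thing\b",
    r"\bthink about (?:that|it)\b",
    r"\blet that sink in\b",
    r"\bsee what I mean\b",
    r"\bhere's what most people miss\b",
]

def find_hand_holding(text):
    """Return every hand-holding pattern that appears in the draft."""
    return [p for p in HAND_HOLDING if re.search(p, text, re.IGNORECASE)]
```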
Pattern 10: The Trying-to-Be-Quotable Line — P2
What it is: A sentence that's clearly been crafted to be screenshot-worthy or repeatable, at the expense of sounding natural.
Examples caught in drafts:
- "Effort and value live at different addresses."
- "Your track record resets to zero with every new prospect."
- "One case study is a competitive advantage in a world where nobody has proof."
Why it's a problem: When a line is trying to be an aphorism, it usually sounds like one — and aphorisms from AI sound like fortune cookies. The best quotable lines are ones that describe something specific and concrete, not abstract metaphors.
How to tell the difference: Read the line out loud. Would the writer say this in a real conversation? "Your track record resets to zero with every new prospect" — yes, she might say that to someone in a diagnostic call. "Effort and value live at different addresses" — no one talks like that.
Detection rule: Any sentence that uses metaphor or abstraction to make a point that would be stronger stated plainly. Especially: sentences with personification ("value lives..."), extended metaphors, or wordplay that sacrifices clarity for cleverness.
Pattern 11: Identical Sentence Openers — P3
What it is: Multiple consecutive sentences or paragraphs starting with the same word.
Examples caught in drafts:
- "Did a client renew... Did someone refer... Did you help... Did a client tell you..."
- "They see a services page. They hear your credentials. They compare you to..."
Why it's a problem: Anaphora (intentional repetition of opening words) is a valid rhetorical device, but AI defaults to it constantly. When four sentences in a row start with "Did you," it reads as a writing exercise, not a conversation.
How to fix: Vary the openers. Change the subject, change the sentence structure, start mid-thought.
- Before: "Did a client renew? Did someone refer? Did you help? Did a client tell you?"
- After: "A client renewed without you making the case for it. Someone referred a prospect — unprompted. You helped a client avoid a mistake they didn't see coming. A client told you something like 'I don't know what we'd do without you.'"
Exception — deliberate data patterns: Same as Pattern 2: when identical openers demonstrate a scoring pattern or data cluster ("2s become... 2s become... 2s become..."), the repetition serves the proof. Keep when the repetition IS the insight.
Detection rule: Three or more consecutive sentences starting with the same word or phrase. Two is fine. Three is a pattern.
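The two-is-fine, three-is-a-pattern rule maps to a run counter over sentence openers. A sketch, assuming sentence boundaries fall on `.`, `!`, or `?`:

```python
import re

def repeated_openers(text, threshold=3):
    """Return opener words that start `threshold`+ consecutive sentences."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    openers = [s.split()[0].lower().strip("\"'") for s in sentences]
    hits, run = [], 1
    for prev, cur in zip(openers, openers[1:]):
        run = run + 1 if cur == prev else 1
        if run == threshold:  # report each run once, when it crosses the line
            hits.append(cur)
    return hits
```

Remember the deliberate-data exception: a hit on a scoring pattern like "2s become..." is kept, not cut.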
The Compound Check
Individual patterns are easy to catch. The harder problem is when a piece passes on each individual pattern but still feels AI-generated. That happens when:
Multiple mild instances compound. One twinning sentence + one three-beat list + one dramatic fragment = a piece that reads as AI even though no single line is egregious. If you flag 3+ P3 violations in the same piece, treat the compound as P1 — something structural needs to change.
The cadence is too even. AI produces sentences of similar length with similar rhythm. Human writing has jagged edges — a long sentence followed by a short one, then a medium one, then two short ones. If a piece feels metronomic, vary the sentence lengths even if no specific pattern triggers.
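Evenness can be approximated with sentence-length spread. This is a proxy, not a rule: the 0.4 threshold below is an assumption to tune against real drafts, and a flat reading means "vary the lengths," not "the piece fails":

```python
import re
from statistics import mean, pstdev

def cadence_report(text, flat_threshold=0.4):
    """Report sentence lengths and their spread (stdev / mean).
    Low spread suggests metronomic rhythm."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 3:
        return None  # too short to judge rhythm
    spread = pstdev(lengths) / mean(lengths)
    return {"lengths": lengths, "spread": spread,
            "metronomic": spread < flat_threshold}
```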
Every insight is positioned as a correction. If the piece follows a repeating structure of [common belief] → [actual truth], the reader feels lectured even if each individual reframe is well-written. Mix in observations, stories, and direct statements that don't follow the correction structure. Maximum one correction-revelation per piece (see Pattern 7 frequency limit).
Correction-revelation family check: Patterns 1, 3, 4, and 7 are all variations of the same underlying move: AI correcting a belief. If ANY two patterns from this family appear in the same piece, flag it as compound P1 regardless of individual severity.
The "read it out loud" test. Read the complete piece out loud. Every line that makes you slow down, pause awkwardly, or shift into a "presentation voice" is probably AI patterning. Natural writing reads at conversation speed.
The narrative trackability check. AI generates plausible-sounding examples and stitches them together without checking whether a reader can follow the thread. This shows up as: multiple characters with ambiguous pronouns, unexplained number shifts between examples, and scene transitions with no bridge. The patterns above catch AI language. This catches AI storytelling.
Specifically:
- Pronoun confusion. Every "he," "she," or "they" must trace to one clearly established character within two sentences. If the reader has to scroll back to figure out who a pronoun refers to, the piece fails. Especially dangerous when two characters share a gender or role — "She was confident" means nothing if the previous paragraph also had a "she."
- Unannounced scene switches. When a piece moves from one example or character to another, the transition must be explicit. A new paragraph that introduces a new "he" or "she" without signaling the shift reads as the same story continuing — until it doesn't, and the reader is lost.
- Drifting numbers. AI generates plausible math for each example independently. One section says $28,800, another says $36,000, a bridge paragraph mixes both without explaining the difference. Every number in the piece must be internally consistent, and when two different calculations appear, the reader must know which scenario produced which number.
- Thread loss. At any point in the piece, pause and ask: does the reader know which character they're following and which scenario they're in? If they'd need to re-read to answer that, the narrative is broken.
This check applies to any piece with characters, stories, or multiple examples — LinkedIn posts, articles, emails, briefing scripts, Substack notes.
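For the drifting-numbers check, extraction is the part worth scripting. A sketch that pulls every dollar amount in order of appearance; the consistency judgment stays manual, because which scenario owns which number needs context a regex can't see:

```python
import re

DOLLARS = re.compile(r"\$\d+(?:,\d{3})*(?:\.\d+)?")

def dollar_figures(text):
    """Return every dollar amount in the piece, in order of appearance."""
    return DOLLARS.findall(text)
```

Scan the output: if two figures differ, the piece must explain which calculation produced which.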
QC Checklist
Run after every draft, before shipping:
P1 — Must Fix
- [ ] No twinning ("You don't have X. You have Y")
- [ ] No mirror reversals (same word in reversed positions)
- [ ] No question → dramatic silence → revelation arcs
- [ ] No correction-revelation family compounds (2+ from Patterns 1/3/4/7)
P2 — Rewrite Unless Exception Applies
- [ ] No three-beat parallel lists (unless demonstrating literal data pattern)
- [ ] No "Not because X. Because Y." constructions
- [ ] No over-validation in DMs ("That's exactly the kind of...")
- [ ] No formulaic setups ("Most people think X. Actually Y.") — max 1 per piece with evidence
- [ ] No trying-to-be-quotable lines (metaphor over clarity)
P3 — Fix If Multiple
- [ ] No more than one dramatic single-word beat per piece
- [ ] No rhetorical hand-holding ("Right?" "Sound familiar?" "Let that sink in.")
- [ ] No three+ consecutive sentences starting with the same word
Piece-Level Checks
- [ ] No compound accumulation (multiple mild patterns creating AI feel)
- [ ] Sentence lengths vary (no metronomic rhythm)
- [ ] Not every insight structured as correction of common belief
- [ ] Passes the read-aloud test at conversation speed
- [ ] Every pronoun traceable to one established character within two sentences
- [ ] Scene/example transitions are explicit (no unannounced character switches)
- [ ] Numbers consistent across the piece (no unexplained shifts between figures)
- [ ] Thread test: reader knows who and what at every paragraph without re-reading
The Final Question
- [ ] Would the writer say every line of this in a real conversation?
If any P1 fails, rewrite before shipping. If any P2 fails without a valid exception, rewrite before shipping. If 2+ P3s fail, rewrite at least one. "I'll fix it later" is how bad defaults propagate.