Your clients are not running your emails through an AI detector. They do not need to.
The patterns are visible to anyone who reads a lot of email, and most professionals read a lot of email. Here are the seven that reliably surface in AI-assisted correspondence, along with what each one signals about how the email was written.
1. The em-dash epidemic
The em-dash did not appear in professional email at scale before 2023. It is technically correct punctuation, but it was rare in casual business writing for the same reason semicolons are rare: most people do not punctuate that way when typing quickly.
ChatGPT and Claude use em-dashes constantly. Their training data skews toward edited prose, where em-dashes are common, and their outputs normalize them. The result is emails that use em-dashes where the human writer would have used a comma, a period, or nothing at all.
The tell: Three or more em-dashes in a single email, especially in casual correspondence where you would normally write informally.
The fix: If you are prompting ChatGPT, explicitly forbid em-dashes: "Never use em-dashes. Use commas or sentence breaks instead." A good persona prompt handles this automatically.
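If you want to catch this mechanically before hitting send, the three-em-dash threshold above is easy to check. A minimal sketch in Python (the function name and the threshold default are illustrative, not a standard tool):

```python
def flags_em_dash_tell(draft: str, threshold: int = 3) -> bool:
    """Return True when a draft contains enough em-dashes (U+2014)
    to cross the count that tends to read as AI-assisted."""
    return draft.count("\u2014") >= threshold
```

Run it over a draft before sending; anything flagged is worth a manual pass to swap em-dashes for commas or sentence breaks.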
2. "I hope this email finds you well"
This opener has existed in email for decades, but its frequency has exploded since 2022. It is now so associated with AI-generated email that many recipients read it as a signal that the rest of the email was also not written by a human.
Variants include: "I hope you're doing well," "I trust this email finds you well," and the especially egregious "I hope this message finds you in good health."
The tell: Any pleasantry opener where the writer could have just started the email.
The fix: Start with the point. "Quick question about the proposal" is better than two sentences of throat-clearing. Real emails from busy people do not warm up.
3. The summary-then-detail structure
Humans writing emails under time pressure start with whatever is most important to them. AI tools, trained to be helpful and clear, front-load a structured summary: "I am writing to follow up on our conversation from last Tuesday about the Q3 deliverables. As I mentioned, there are three items I wanted to address..."
This is technically good communication structure. It is also the structure of a document, not a message. Actual rushed emails jump straight in. Long emails written by humans may have structure, but it is the writer's idiosyncratic structure, not the tidy "I am writing to..." pattern.
The tell: Emails that explain what they are going to say before they say it.
The fix: Write the first sentence as if you already said the preamble.
4. Bullet points in casual email
Bullet points are useful for lists. But AI tools reach for bullets constantly, including in contexts where the writer would have just written a sentence.
"There are a few things I wanted to cover:
- The timeline update
- The budget question
- Next steps"
A real person who writes 40 emails a day would have written: "Quick update: timeline is slipping two weeks, budget question still open, I'll send next steps after the call."
The tell: Three-item bullet lists in emails that cover fewer than 50 words of content.
The fix: If the list has fewer than four items, write it as a sentence. Bullets should feel like a service to the reader, not a crutch for the writer.
5. "Please let me know if you have any questions"
This closer is now so common in AI output that it functions as a watermark. It is technically harmless, but it has been sanded so smooth by overuse that it conveys nothing.
The tell: Any closer that could appear in a form letter.
The fix: End with the actual next action. "Talk Tuesday" is better than "Please let me know if you have any questions." "Call me if this is unclear" is better. If there is genuinely no next action, "Thanks" + your name is sufficient.
6. Overly balanced framing
AI tools are trained to be balanced and non-committal. They qualify, they acknowledge complexity, they present multiple sides. This is often appropriate. In email, it produces a specific pattern: statements that hedge before they land.
"While I understand that there may be some concerns with the current approach, I believe that, with the right adjustments, we could potentially achieve the desired outcome."
A human with a point of view writes: "I think we should change the approach. Here's why."
The tell: Any sentence with three qualifying clauses before the actual claim.
The fix: Take a position. Your recipients will respect you more for it.
7. The unnecessarily comprehensive sign-off
"Thank you for your time and consideration. I look forward to our continued collaboration and hope we can find a mutually beneficial path forward. Best regards, [Full Name]"
This is the AI equivalent of a curtsy. It is overly formal for any correspondence where you have previously used the word "hey" with this person.
The tell: Sign-offs that are longer than the email's topic sentence.
The fix: Match the sign-off energy to the relationship. "Thanks" for most things. First name only for warm contacts. "Best" as a floor, not a default.
Why this keeps happening
Every one of these patterns is a symptom of the same underlying problem: you are asking an AI to impersonate a generic professional, not you specifically.
When you use a generic prompt ("write a professional email"), the AI produces statistically average professional email. The tells above are the statistical fingerprints of that average.
The alternative is a prompt that tells the AI how you specifically write, not how a generic professional writes. That prompt has to be built from evidence (your actual sent emails) or from a careful, specific interview about your style.
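The evidence-based route can be sketched in a few lines. This is a toy version, assuming your sent emails are available as plain-text strings; the two signals it extracts (average sentence length and habitual sign-off) and every function name are illustrative, not a complete persona model:

```python
from collections import Counter
import re

def style_evidence(sent_emails: list[str]) -> dict:
    """Extract a few crude style signals from real sent emails
    to seed a persona prompt."""
    closers = Counter()
    sentence_lengths = []
    for email in sent_emails:
        lines = [ln.strip() for ln in email.splitlines() if ln.strip()]
        if lines:
            # Treat the last non-empty line as the sign-off.
            closers[lines[-1].lower()] += 1
        for sentence in re.split(r"[.!?]+", email):
            words = sentence.split()
            if words:
                sentence_lengths.append(len(words))
    return {
        "typical_closer": closers.most_common(1)[0][0] if closers else "",
        "avg_sentence_words": round(sum(sentence_lengths) / len(sentence_lengths), 1)
        if sentence_lengths else 0,
    }

def persona_prompt(evidence: dict) -> str:
    """Turn the extracted signals into a first-person style instruction."""
    return (
        f"Write as me. My sentences average about {evidence['avg_sentence_words']} words. "
        f"I usually sign off with '{evidence['typical_closer']}'. "
        "Never use em-dashes; use commas or sentence breaks instead."
    )
```

A real version would capture more than two signals, but the principle is the same: the prompt's specifics come from your actual writing, not from a generic description of professionalism.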
The FinalDraft Persona Prompt Generator does the second thing: it walks you through the structural patterns of your writing and produces a first-person prompt that captures your specifics. The result does not produce generic professional email. It produces email that sounds like you wrote it, because the prompt is built around how you actually write, not how professionals generally write.