No one is running your emails through GPT-Zero. They do not need to.
The detection happens faster than that. It is pattern recognition at the level of someone who has read 10,000 emails and can tell, in about three seconds, whether this one was written under time pressure or drafted by a language model.
Here is what they are noticing, and why it is harder to fix than you think.
The uncanny valley of professional prose
Humans are extraordinarily good at detecting effort. We read quickly and unconsciously track things like: did this person choose these words, or did they flow from a template? Is the informality here real or performed? Does the structure of this email reflect how a busy person would have actually written it?
AI email exists in an uncanny valley. It is almost always grammatically superior to unassisted human email. That is the tell. Your most important client, your best prospect, your closest colleague: they have all seen your unassisted email. It has typos. It has half-finished sentences. It has the specific rhythm of someone who types fast and does not proofread the third paragraph.
An email that is 40% more polished than your average, with no typos, a clean three-paragraph structure, and a crisp subject line, is an email that does not match the person they know.
The three detection mechanisms
1. Pattern recognition for AI tells
As covered in our piece on the 7 phrases that signal AI email, there are specific linguistic markers that have become strongly associated with AI output. Em-dashes. "I hope this email finds you well." Over-structured sign-offs. The hedge-before-claim sentence pattern.
Frequent email readers have been implicitly trained on these patterns by reading hundreds of AI-assisted emails over the past two years. The association is now strong enough that the presence of any two or three of these markers registers as AI-likely.
2. Voice inconsistency
This is the more interesting one. If you have a genuine relationship with the person you are emailing, they have a mental model of how you write. They might not be able to articulate it, but they know it. They know whether you open with their first name or a formal greeting. They know whether you get to the point in sentence one or build up to it. They know whether your emails to them are long or short, formal or casual, structured or stream-of-consciousness.
An AI-drafted email that does not match this model feels wrong before the recipient can say why. The email might be technically better than your unassisted writing. It is still wrong for the relationship.
3. Context and relationship blindness
AI tools do not know the history of your relationship with this person. They do not know that you always reference the last call. They do not know that this particular contact finds bullet-point emails impersonal. They do not know that you have an inside reference that has appeared in every third email between you for two years.
The email is competent but sterile. Relationships are not sterile.
Why prompting cannot fully solve this
If you add all of this context to your ChatGPT prompt (your voice description, the relationship history, the specific contact's preferences, the shared references), two problems emerge:
First, you are spending more time writing the prompt than you would have spent writing the email itself. The efficiency gain disappears.
Second, the AI still does not have your actual writing as evidence. It has your description of your writing, which is different. A musician asked to describe their playing style will give you a description that sounds like the description of a thousand other musicians. The playing is the thing.
What actually reduces detection
Two things materially reduce AI detection risk in email:
1. Voice-trained prompts over generic style descriptions. A prompt built from your actual sent email history will produce outputs that carry your specific patterns, not the average of professional email. Your sentence length distribution. Your specific opener habits. Your actual closer vocabulary. These are the things that let a first-person persona prompt pass as you.
2. Context injection. Manually adding relationship-specific context before generating ("we talked Tuesday about the Acme proposal, she prefers short emails, we have a casual relationship after three years of working together") is tedious but effective. FinalDraft does this automatically inside Gmail and Outlook by reading your thread history before drafting.
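The manual version of context injection is just prepending relationship facts to the drafting request. A minimal sketch, assuming nothing about any particular tool's API; the field names and prompt shape here are illustrative:

```python
def build_draft_prompt(request, context_facts):
    """Fold relationship-specific context into a drafting prompt."""
    context_lines = "\n".join(f"- {fact}" for fact in context_facts)
    return (
        "Draft a reply in my voice. Relationship context:\n"
        f"{context_lines}\n\n"
        f"What the email needs to do: {request}"
    )

prompt = build_draft_prompt(
    request="Confirm Thursday's call and attach the revised Acme proposal.",
    context_facts=[
        "We talked Tuesday about the Acme proposal.",
        "She prefers short emails.",
        "Casual tone; we have worked together for three years.",
    ],
)
print(prompt)
```

The tedious part is not the template; it is remembering and typing the facts for every contact, every time, which is the step tools automate by reading thread history.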
The goal is not to hide that you use AI. The goal is to produce email that is genuinely, specifically yours, just faster. The detection problem solves itself when the output actually matches your voice.
Build a persona prompt that captures how you actually write →