# What is a good cold email open rate in 2026?
Across 50,000+ B2B cold email campaigns, here is the realistic distribution:
| Open rate | Verdict |
|---|---|
| Under 30% | Deliverability problem: landing in spam, poor list quality, or broken sender reputation |
| 30 to 45% | Below average: copy or list issues, but not technical |
| 45 to 60% | Average for a well-run campaign |
| 60 to 75% | Top decile: verified list, strong subject line, warmed-up domain |
| Above 75% | Either Apple Mail Privacy Protection is inflating the number, or mailbox prefetchers are firing the pixel |
If your open rate is over 75% and your reply rate is under 1%, you are not actually getting opens — you are getting bots and prefetchers. Reply rate is the truth.
## Why Apple Mail Privacy Protection broke the metric
Since iOS 15 (September 2021), Apple Mail Privacy Protection (MPP) pre-loads tracking pixels on Apple's servers regardless of whether the user actually opened the email. About 35 to 50% of B2B recipients use Apple Mail on iPhone or Mac, which means roughly that fraction of your "opens" are not real.
MPP affects:
- Personal Apple Mail clients (iPhone, iPad, Mac)
- Some Outlook for Mac configurations
- Any client that respects the new privacy headers
It does not affect:
- Web Gmail
- Outlook Web
- Desktop Outlook on Windows
- Most enterprise mail clients
Net effect: open rates inflated by 15 to 30 percentage points compared to actual eyeballs.
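Under the simplifying assumption that every Apple Mail recipient's pixel fires via prefetch whether or not they read the email, you can back out the open rate among the recipients whose opens are still observable. A minimal sketch (the function name and the 40% Apple share in the example are illustrative, not figures from any tracking tool):

```python
def human_open_rate(measured: float, apple_share: float) -> float:
    """Estimate the open rate among non-Apple recipients, assuming
    every Apple Mail recipient fires the pixel via MPP prefetch.

    measured    - pixel-based open rate reported by your tool (0..1)
    apple_share - fraction of the list on Apple Mail clients (0..1)
    """
    if not 0.0 <= apple_share < 1.0:
        raise ValueError("apple_share must be in [0, 1)")
    # measured = apple_share * 1.0 + (1 - apple_share) * true_rate
    estimate = (measured - apple_share) / (1.0 - apple_share)
    return max(0.0, estimate)  # clamp: measurement noise can go negative


# A 70% reported open rate with 40% of the list on Apple Mail
# works out to a 50% open rate among observable recipients.
print(human_open_rate(0.70, 0.40))
```

This matches the 15 to 30 point inflation above: 70% reported collapses to 50% real, a 20-point gap.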
## What to track instead
In 2026, the metrics that matter are:
1. Reply rate (number of human replies / number sent). The only metric that survives MPP, prefetchers, and bot opens.
2. Bounce rate (bounces / sent). Should stay under 2%; over 5% triggers account-pause from your ESP.
3. Spam complaints (complaints / sent). Should be near zero. Anything above 0.1% is a domain-burn signal.
4. Click rate (only if you have links — but the best cold emails do not have links in the first send).
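The thresholds above can be wired into a simple campaign health check. This is an illustrative sketch, not any ESP's API; the function name and return shape are invented for the example:

```python
def campaign_health(sent: int, replies: int, bounces: int, complaints: int) -> dict:
    """Compute the metrics that matter and flag the thresholds
    from the list above (2%/5% bounce, 0.1% complaint)."""
    if sent <= 0:
        raise ValueError("sent must be positive")
    reply_rate = replies / sent
    bounce_rate = bounces / sent
    complaint_rate = complaints / sent

    issues = []
    if bounce_rate > 0.05:
        issues.append("bounce rate over 5%: expect an account pause from your ESP")
    elif bounce_rate > 0.02:
        issues.append("bounce rate over 2%: re-verify the list")
    if complaint_rate > 0.001:
        issues.append("complaint rate over 0.1%: domain-burn signal")

    return {
        "reply_rate": reply_rate,
        "bounce_rate": bounce_rate,
        "complaint_rate": complaint_rate,
        "issues": issues,
    }


# 1,000 sent, 30 replies, 60 bounces, 2 complaints:
# 6% bounce and 0.2% complaints both get flagged.
report = campaign_health(sent=1000, replies=30, bounces=60, complaints=2)
print(report["issues"])
```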
Open rate is still a useful directional signal for subject-line tests, but only as a relative comparison between A and B in the same campaign.
## The 5 things that actually move open rate
1. Sender domain reputation. A warmed-up sending domain with 30+ days of clean activity opens 15 to 25 points higher than a fresh domain. Module 3 of the FoxReach Academy covers warmup in depth.
2. Subject line pattern. Lowercase questions outperform sentence-case statements by 8 to 12 points. Personal tokens (first name only) lift another 8 to 14 points. See the cold email subject lines guide for 40 tested examples.
3. Send time. Tuesday through Thursday, 9 to 11am or 1 to 3pm in the recipient's timezone. Sending at 6am or 10pm cuts open rate by 12 to 18 points because the email is buried by the time the recipient opens their inbox.
4. List quality. A verified list (NeverBounce / ZeroBounce / MillionVerifier with risky and invalid removed) opens 15 to 25 points higher than an unverified one. Verification cleans both deliverability and engagement signals.
5. Volume per sender. Sending 30 to 40 cold emails per inbox per day keeps reputation healthy. Pushing to 100+ per inbox burns the domain and crashes opens within 2 weeks.
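The send-time rule in point 3 is easy to enforce in code. A minimal scheduling sketch using the standard library's `zoneinfo`; the function name and the hour-stepping approach are illustrative, not any sequencer's API:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

SEND_DAYS = {1, 2, 3}         # Tuesday, Wednesday, Thursday (Monday is 0)
SEND_HOURS = {9, 10, 13, 14}  # sends starting 9-11am and 1-3pm local time

def next_send_slot(now_utc: datetime, recipient_tz: str) -> datetime:
    """Return the next top-of-hour moment inside the Tue-Thu,
    9-11am / 1-3pm window, in the recipient's local time."""
    local = now_utc.astimezone(ZoneInfo(recipient_tz))
    # Round up to the next full hour, then scan hour by hour.
    local = local.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    for _ in range(24 * 8):  # bounded scan: just over a week is always enough
        if local.weekday() in SEND_DAYS and local.hour in SEND_HOURS:
            return local
        local += timedelta(hours=1)
    raise RuntimeError("no send slot found in the next week")


# Schedule the next send for a New York recipient.
slot = next_send_slot(datetime.now(timezone.utc), "America/New_York")
print(slot.isoformat())
```

A 6am or 10pm send never passes the `SEND_HOURS` check, which is exactly the buried-inbox problem point 3 describes.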
## Open rate by campaign stage
In a multi-step sequence, open rate compounds across steps. Typical pattern:
- Email 1: 60% open rate
- Email 2 (4 days later, threaded): 45% — same recipients, second touch
- Email 3 (9 days later): 35%
- Email 4 (breakup): 30%
The drop-off is normal and expected. Reply rate, by contrast, often peaks at email 3 or 4 (the breakup) — which is why follow-ups matter so much more than chasing email 1 opens.
## Common open-rate myths
"Higher is always better." Not always. A 90% open rate on a tiny verified list with 0% replies is worse than a 50% open rate on a real list with 5% replies.
"Personalization tokens lift open rate." Only the first name, only in the subject line. Other tokens (company, role) get pattern-flagged.
"Image-rich emails have higher engagement." False for cold email. Image-heavy emails trigger spam filters and tank deliverability — open rate drops 15 to 25 points.
"You need to A/B test subject lines weekly." False. A real lift detection requires 100+ sends per arm. If you send 200 emails a week, one A/B test takes a week to power. Quarterly testing is what most teams realistically achieve.

