CustomerGauge
White paper · 2026
Account experience research
An analysis of more than 50 B2B feedback programs

Why customers open survey emails — but rarely click.

The hidden bottleneck in B2B customer feedback, and what the best programs do differently.

CustomerGauge research · Based on more than 5 million survey emails sent across our B2B client base during 2025
In one paragraph

If your response rate is falling, the problem isn't getting into the inbox; it's the moment after the email is opened.

Across more than 50 CustomerGauge clients in 2025, larger senders had response rates that were less than half of smaller ones. The cause wasn't deliverability. It wasn't subject lines. It wasn't survey fatigue at the finish line. The single biggest difference was the conversion from email opened to survey clicked — recipients open the email, look at it, and never proceed. The clients who break this pattern share one trait: a visible culture of closing the loop with their own customers.

The puzzle

Two clients run identical Net Promoter programs. One gets a 26% response rate. The other gets 3%. Why?

This is the question we set out to answer when we examined the email send data for every CustomerGauge client active during 2025 — more than 5 million survey invitations across more than 50 B2B programs ranging from a few hundred contacts to over a million. The pattern that emerged is consistent enough to act on, surprising enough that most teams aren't acting on it, and small enough to fix.

We started where most marketers would. Did large senders have worse deliverability? No — delivery rates are roughly flat across volume tiers, with infrastructure problems concentrated in a small handful of specific clients regardless of size. Did they have worse open rates? Mildly — but the relationship is statistically weak and the medians are close. Were recipients abandoning the survey after starting it? No — completion rates are remarkably stable at around 85% across the board, regardless of program size.

None of those is the story.

Recipients open the email, look at it, and quietly close the tab. The survey itself is never seen.

The dominant pattern is this: as a sender's volume grows, a smaller and smaller fraction of recipients who open the email go on to click the survey link. The email is reaching them. They're recognising it. They're choosing not to proceed. And because they never click, they never appear in the survey-completion data — so the problem is invisible to teams that only track responses and abandonment.

The size of the gap

The contrast is sharper than we expected. Below are the median figures from each volume tier (low-volume clients send roughly two thousand survey emails per year; high-volume clients send well over a hundred thousand).

54%
Open-to-click rate at low-volume clients (median)
22%
Open-to-click rate at high-volume clients (median)
2.5×
Difference in click-through, despite similar open rates

Put differently: at a low-volume sender, half the people who open the email proceed to the survey. At a high-volume sender, only one in five do. The recipients are looking at emails from the same product, asking the same questions. Something else is changing.

Where the audience is lost

To make this concrete, here is what happens to a thousand survey emails at each volume tier. The grey block is people who never opened the email at all. The orange block — the part of the funnel we want you to focus on — is people who opened the email but never clicked through to the survey.

[Chart] For every 1,000 emails delivered: where the audience goes
Bands: didn't open · opened, didn't click · clicked, didn't complete · completed. The orange band (opened but didn't click) is the segment that grows fastest as programs scale. Notice that the green band (completed) is roughly the same shape across all three tiers, and the dark grey band (didn't open) actually shrinks at high volume.
Source: CustomerGauge email analytics, 2025. Aggregate counts per volume band, more than 50 client programs, approximately 5 million emails.

Two observations from this chart matter most.

The orange band grows from 221 to 326 per thousand as we move from small to large programs. That single shift accounts for almost all of the response-rate gap between volume tiers. Open-rate differences are modest. Survey-completion differences are negligible. The whole story is concentrated in the moment between opening the email and choosing to click.

Survey completion is not the problem. Once recipients click through, roughly 85% of them complete — at every volume tier. Survey design, length, and mobile experience are largely solved problems for clients who get this far. The bottleneck has moved upstream, into the email itself.
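The decomposition behind these two observations can be sketched as a three-stage funnel. In the minimal Python sketch below, the 54% and 22% open-to-click medians and the roughly 85% completion rate are the figures reported in this paper; the 50% open rate is an illustrative assumption for both tiers (the paper reports only that open rates are roughly equal across volumes):

```python
def funnel(delivered, open_rate, click_rate, complete_rate):
    """Split delivered emails into the four bands shown in the chart."""
    opened = delivered * open_rate
    clicked = opened * click_rate
    completed = clicked * complete_rate
    return {
        "didn't open": delivered - opened,
        "opened, didn't click": opened - clicked,
        "clicked, didn't complete": clicked - completed,
        "completed": completed,
    }

# 54%/22% open-to-click and ~85% completion are the medians reported above;
# the 50% open rate is an illustrative assumption, identical for both tiers.
small = funnel(1_000, open_rate=0.50, click_rate=0.54, complete_rate=0.85)
large = funnel(1_000, open_rate=0.50, click_rate=0.22, complete_rate=0.85)

# With identical open and completion rates, the whole response-rate gap
# (roughly 23% vs 9% per thousand delivered here) sits in the middle step.
```

Run with your own tier figures, the same decomposition shows which of the three stages is leaking audience.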

The pattern, client by client

The next chart plots every client as a single dot. The horizontal axis is the volume of survey emails sent (on a logarithmic scale, so each step represents a tenfold increase). The vertical axis is the response rate. The dotted line is the trend.

[Chart] Response rate falls as program volume rises
Each dot is one CustomerGauge client; the dotted line is the trend. The teal dots are high-volume clients who break the trend, each delivering 50,000+ emails while still achieving response rates of 16% or higher. There are four notable accounts in this group; their identities have been anonymised in the analysis that follows.
Source: CustomerGauge email analytics, 2025. Clients with data integrity issues excluded.

The trend line is real but it is not destiny. Four clients sit visibly above it at high volume — each running programs that send between 58,000 and 500,000 emails per year, yet achieving response rates that match or exceed the average small-volume client. They are the proof that scale and engagement are not in conflict.

What the stars do differently

The four high-performing high-volume clients — anonymised here as Client FG, Client HQ, Client NV and Client HC — share something the data alone cannot prove, but customer-facing teams know in their bones: they run programs that visibly close the loop with their own customers.

These clients communicate results back. They reference NPS publicly in their own annual reports and customer communications. They have account managers who follow up on detractor scores within days. Their recipients open the survey email already understanding what the survey is for and what will happen with their answer. The email isn't a cold approach — it's the next step in a recognised conversation.

The discipline of closing the feedback loop with customers isn't soft theatre. It shows up as a 2.5× lift in the open-to-click step.

This is the most important practical finding in this paper, because it means the open-to-click bottleneck is fixable — not by writing a better email, but by building the program around it. When customers know the survey matters, they click. When they don't, they don't.

Client             | Emails delivered | Open → click | Response rate
Client FG          | 495,945          | 56%          | 26.2%
Client HQ          | 58,356           | 62%          | 21.3%
Client NV          | 83,127           | 41%          | 20.0%
Client HC          | 316,590          | 43%          | 15.5%
High-volume median | ~150,000         | 22%          | 8.0%

The winning techniques of champions

We can't ask the best programs to hand over their playbooks. But several have published enough about how they work that we can triangulate the techniques that separate them from the field.

What follows are five practices we observe, directly in public disclosures and indirectly in the data, among the clients who break the volume-versus-engagement trend. None of them is a secret. None is technically difficult. The reason most programs don't do them isn't capability; it's discipline.

01

Emphasise response rate over the score

The clients with the highest response rates are noticeably uninterested in managing their NPS number. They focus instead on three operational measures upstream of it: account coverage, response rate, and close-the-loop discipline. Their working assumption is that if they get those three right, the score takes care of itself — and importantly, that the score is more honest if nobody is incentivised to move it directly.

This sounds counter-intuitive but it's the single most consistent signal of program maturity we see. The moment NPS becomes a target on someone's bonus, recipients start being nudged, surveys start getting timed for moments of strength, and the metric becomes decorative. Champions resist this.

Evidence in the wild: Coca-Cola Hellenic's Head of Customer Capability has stated publicly that the company wants feedback "pure without any influence" and explicitly does not manage to the score. NPS in their integrated annual report rose from 66 to 78 in 2025. They appear to have got there by not chasing it.
02

Commit to a close-the-loop time you'd be embarrassed to miss

Every program intends to close the loop. The champions are distinguished by publishing a number — internal or external — and treating any miss as a failure rather than a regrettable exception. Forty-eight hours is the benchmark we see most often among high performers. Anything beyond a week and the recipient has psychologically moved on; the loop doesn't close, it just expires.

The mechanism here is recipient-side, not internal. When customers know — because they have experienced it — that low scores generate a real conversation within days, they answer differently. They're more honest, more specific, and dramatically more likely to respond to the next survey because the previous one demonstrably mattered.

Benchmark: Reported close-the-loop rate of 80%+ within 48 hours, achieved across a 15,000-strong distributed salesforce. The number is striking, but the more important point is that it's a published target, which means the organisation has accepted public accountability for hitting it.
03

Make the program always-on, not a campaign

The clients with the worst response rates almost universally run their feedback as periodic waves — quarterly, biannually, or annually. Recipients experience these as marketing events: a sudden burst of survey emails followed by silence. The mental model is one of intrusion.

Champions run continuous, transaction-triggered programs. Surveys go out a few days after a delivery, an installation, an order, a service interaction — events the recipient already remembers and has an opinion about. Volume on any given day is small; total volume per year is high; but each individual recipient's experience is of a single timely conversation, not a campaign.

What this looks like in practice: One of our largest beverage clients explicitly moved away from annual market research to "always-on" feedback covering 29 countries. The shift wasn't primarily about data freshness; it was about changing how recipients experienced the program. Response rate followed.
04

Treat coverage as a discipline, with non-response as a signal

Standard practice is to send surveys, count responses, and report a rate. Champions treat coverage — the percentage of accounts you've heard from in a defined window — as a separate, owned metric. When an account hasn't responded, it isn't a missing data point; it's a signal that the relationship may be cooler than the revenue line suggests. Someone is responsible for re-approaching, finding the right contact, or investigating why the silence exists.

This is the technique most directly relevant to B2B account experience. In a world of 5,000-contact accounts, a 20% response rate isn't a sample — it's an account-coverage signal that the other 80% of stakeholders aren't engaging. The champions know this and act on it.

Benchmark: Best-in-class B2B programs achieve active feedback from 50%+ of customers in their target population. The figure is meaningful only when paired with a process for chasing the missing voices, which is the part most teams skip.
05

Put the metric in the integrated annual report

The most powerful forcing function we observe in champion programs is also the simplest: the headline customer metric appears in the company's integrated or annual report, alongside revenue, margin, and ESG. Once a number is in that document, the CEO has to defend it to investors, the board reviews it quarterly, and the cost of letting the program drift becomes career-relevant rather than departmental.

This is not vanity. Programs that disclose externally consistently outperform programs that don't, because external disclosure changes who pays attention. The CFO suddenly cares. The investor relations team suddenly cares. The number gets a level of scrutiny that internal CX dashboards never attract.

Evidence in the wild: Coca-Cola Hellenic discloses its NPS movement and names its feedback platform in its public integrated annual report. The specific year-on-year improvement (66 → 78) is a number the company has chosen to be measured against publicly. This is the highest level of program commitment we see in the dataset.

Notice what unifies all five. None of them is about the survey email itself. None is about subject lines, send times, or A/B-tested CTAs. Every technique works on the same upstream variable: the recipient's expectation that this survey is going to matter — that someone will read the answer, that something will happen as a result, and that the program is part of how the company actually operates rather than a thing the customer experience team does on the side.

That expectation, accumulated over time, is what converts opens into clicks. It is the only durable mechanism we have observed for sustaining a high response rate at high volume.

Five findings worth acting on

The open-to-click step is where programs succeed or fail at scale.

Across the dataset, this single conversion explains more of the variance in response rates than any other metric. Open rates are roughly equal at every volume. Completion rates are roughly equal at every volume. The middle step is where the gap appears.

Recipients who don't click never become abandoners — they become invisible.

This is why the bottleneck escapes attention in most CX dashboards. A recipient who closes the email tab without clicking is not "an abandoner" — they're nothing. They don't appear in the failed-survey numbers. The only place they show up is in the gap between opens and clicks, which most teams don't watch.

Volume itself is not the cause; program discipline is.

If volume directly damaged engagement, no large client could ever achieve high response rates. Four of them do. The volume effect we observe is really a discipline effect: in smaller programs, clients care more about each individual response and follow up accordingly. In larger programs, it is usually harder to enforce that same discipline at scale — generic templates creep in, close-the-loop muscle weakens, and recipients stop recognising the email as something they have a stake in answering. The champions are the ones who have maintained discipline at scale.

Survey design is not the limiting factor for most clients.

For 80% of the clients we examined, the survey itself is performing well — completion rates are healthy and abandonment is low. The leverage available from redesigning surveys is small compared with the leverage available from rebuilding the program around them.

Closing the loop is the single most powerful response-rate intervention available.

Programs that demonstrate to customers that feedback leads to action (through public communication, account-manager follow-up, and visible inclusion of metrics in corporate reporting) measurably outperform programs that don't, even at very large scale. This is the central practical recommendation of this paper.

What to do on Monday morning

If your program is in the position most are — high open rates, modest click-through, healthy completion among those who do click — your highest-leverage move is not to rewrite your survey or redesign your email. It is to make the next survey feel like the next step in a conversation your customers already know they are part of. Four practical levers, in order of effort.

Lever 01

Reference last cycle's results in the next email

A single line in the survey invitation referencing what changed because of the last round of feedback transforms a cold ask into a continuation. Recipients who recognise the connection click at materially higher rates.

Lever 02

Publish a customer-facing summary of program outcomes

Even a one-page annual write-up — what we heard, what we did, what's next — gives the survey email a context recipients can place. Heineken and the Coca-Cola bottlers do this in their annual reports.

Lever 03

Embed the first question directly in the email

A single-click opening question (a rating scale embedded in the email body) captures the cohort that almost responded. It also sends a signal that the survey is short, which is half the battle.
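As a sketch of what an embedded first question can look like: the survey domain, path, and query-parameter names below are hypothetical placeholders, not a CustomerGauge endpoint. The point is that each score is a plain link, so the first answer costs a single click.

```python
# Hypothetical endpoint and parameter names, for illustration only.
SURVEY_BASE = "https://survey.example.com/r"

def rating_row_html(recipient_id: str) -> str:
    """Build an HTML row of 0-10 score links to embed in the email body."""
    cells = []
    for score in range(11):  # the 0-10 NPS scale
        link = f"{SURVEY_BASE}?rid={recipient_id}&score={score}"
        cells.append(f'<td><a href="{link}">{score}</a></td>')
    return (
        "<p>How likely are you to recommend us to a colleague?</p>\n"
        "<table><tr>" + "".join(cells) + "</tr></table>"
    )
```

The landing page behind the link would record the score and drop the recipient into an already-started survey, which is what recovers the almost-responded cohort.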

Lever 04

Send from a named human, not a generic address

For B2B audiences, a recognised account manager or executive sender consistently lifts both open and click rates. Generic "Customer Experience Team" senders are a program-maturity tell.

Companion tool

Try out the simulator with your data

Enter your funnel metrics and see exactly where your program is leaking — benchmarked against the dataset behind this paper, with the single highest-leverage fix surfaced for you.

Open the response rate predictor →

Note on method

This analysis is based on aggregate email send statistics for more than 50 CustomerGauge clients active during the 2025 calendar year. We examined every metric available in the standard email diagnostics — delivery rate, open rate, click rate, bounce rate, response rate, abandonment rate — and decomposed the response rate into its three multiplicative stages: delivered → opened, opened → clicked, and clicked → completed.

For each stage we tested the relationship with sending volume using rank correlation, which is robust to outliers and does not assume a linear relationship. Client identities have been anonymised throughout, except for the named examples (Coca-Cola Hellenic, and a brief mention of Heineken) used where the relevant evidence is already public.
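The rank-correlation step can be reproduced in a few lines. This is a minimal pure-Python Spearman sketch run on invented figures for six pretend clients, not the real dataset:

```python
def ranks(values):
    """Average 1-based ranks, with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented numbers for six pretend clients, for illustration only.
volume = [2_000, 8_000, 30_000, 90_000, 250_000, 500_000]
click = [0.54, 0.49, 0.35, 0.30, 0.24, 0.22]
rho = spearman(volume, click)  # ≈ -1 for this strictly decreasing toy series
```

Because the test works on ranks rather than raw values, the tenfold jumps in volume carry no more weight than any other step up the ordering.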

The conclusions in this paper are descriptive, not causal. We can demonstrate the open-to-click bottleneck exists. We can demonstrate it is the dominant driver of response-rate variation across our client base. The mechanism we propose — that program maturity, especially closed-loop discipline, drives this pattern — is consistent with the data and consistent with our day-to-day observation of which clients do which things, but it would benefit from a properly controlled study. We are running one.

White paper: in B2B surveys, communication is what drives response rates up.
About CustomerGauge The B2B account experience platform. We help enterprises monetise customer feedback by linking it directly to revenue, retention and growth at the account level.

Contact research@customergauge.com
customergauge.com