
How to A/B Test Popups: Steps, Ideas & Examples

Key takeaways
  • Popup A/B testing splits traffic between two or more popup versions to find which performs better on a chosen metric — CTR, order rate, or revenue per visitor.

  • CTR and revenue often point in different directions. A free gift offer can generate more clicks than a percentage discount while producing less revenue per visitor. Set your success metric before the A/B test starts.

  • Consider timing as your first popup A/B test idea. Delayed popups (20–50 seconds) reduced bounce by up to 45% and increased email capture by up to 43% vs. immediate display.

  • A control group shows what your popup actually contributes. Without one, you can't tell whether visitors who converted would have bought anyway. Comparing revenue per visitor between those who saw the campaign and those who saw nothing is the only way to measure that.

  • Small wins compound. Better timing, smarter segmentation, a simpler design — run enough tests and you end up with a campaign that performs far better than the one you started with.

Run A/B tests that measure real revenue impact of your popups

Track CTR, revenue per visitor, AOV, and more — with control groups that show what your campaigns actually contribute.


What is popup A/B testing?

Popup A/B testing is a conversion optimization method where two or more popup versions are shown to different visitor segments simultaneously to validate design and messaging hypotheses and identify which version drives better outcomes.

In its standard form, you split traffic between variants — say, a popup offering a 10% discount vs. one offering a free item on orders over $50 — and measure which one performs better on a chosen metric. The winning variant becomes your new baseline, and you test the next element from there.

A more advanced form of popup A/B testing is control group testing (see image below): showing a campaign to part of your traffic while the rest sees nothing, so you can measure the true incremental impact of running a campaign at all.

Comparison of control and two campaign variants shows Variant A as the winner, boosting revenue per visitor and lowering bounce rate.
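
Mechanically, splits like these are usually implemented by hashing a stable visitor ID into a weighted bucket, so every visitor lands in the same variant (or the holdout) on every visit. Here's a minimal Python sketch; the weights, IDs, and function names are hypothetical, not Wisepops' actual implementation:

```python
import hashlib

# Hypothetical split: 40% variant A, 40% variant B, 20% control (sees nothing).
BUCKETS = [("variant_a", 0.40), ("variant_b", 0.40), ("control", 0.20)]

def assign_bucket(visitor_id: str, experiment_id: str) -> str:
    """Deterministically map a visitor to a bucket.

    Hashing visitor_id together with experiment_id gives each visitor
    a stable position in [0, 1), so assignment never changes mid-test.
    """
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    position = int(digest[:8], 16) / 0x100000000  # uniform in [0, 1)
    cumulative = 0.0
    for bucket, weight in BUCKETS:
        cumulative += weight
        if position < cumulative:
            return bucket
    return BUCKETS[-1][0]  # guard against floating-point rounding

print(assign_bucket("visitor-123", "welcome-popup-test"))
```

The deterministic hash is what keeps the comparison clean: a returning visitor assigned to the control group stays in the control group across sessions.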

Case study

See Nutrimuscle's popup strategy that helped increase average order value with a welcome campaign.

Standard popup A/B testing vs. control group testing: which method to use?

| | Standard A/B testing | Control group testing |
| --- | --- | --- |
| What it compares | Variant A vs. variant B | Campaign vs. no campaign |
| Question it answers | Which version performs better? | Is the campaign driving revenue at all? |
| Default success metric | CTR | Revenue per visitor |
| Traffic split | Adjustable (e.g. 50/50, 70/30) | Adjustable (e.g. 70% see campaigns, 30% see nothing) |
| Targeting | Each variant follows its own rules | Control group follows the same targeting rules as the campaign — same audience, same triggers, just no display |
| Baseline in results | Lowest-performing variant | Control group |
| Uplift calculated against | The weakest variant | Visitors who saw nothing |
| What you measure | CTR, signups, attributed conversions | Full visitor journey: revenue per visitor, order rate, AOV, bounce rate, session depth |
| When to use it | Optimizing a campaign that's already running | Validating whether running a campaign is worth it |

The key distinction: A/B testing tells you which popup wins. A control group tells you whether winning impacted your revenue.

Note:

In A/B testing website popups, the control group targets the exact same audience as your campaign — same pages, same triggers, same visitor conditions — but shows nothing. That's what makes the comparison valid.

Popup A/B testing: an example from ecommerce

OddBalls, a UK clothing brand, ran two spin-to-win popup campaigns to improve lead capture and sales.

The first one targeted returning prospects with the goal of driving email signups and first-time purchases (desktop and mobile versions):

campaign for returning visitors

The second was shown exclusively to existing subscribers to drive more purchases:

campaign for existing subscribers

Both featured colorful wheels with 5–10% discounts.

Within each popup, they ran specific A/B tests:

| Test | What changed | Result |
| --- | --- | --- |
| Trigger timing (returning prospects) | On-landing vs. exit-intent | On-landing significantly more effective |
| Post-close tab (existing subscribers) | Tab shown vs. hidden | 62 orders (no tab) vs. 53 orders (tab shown) |

A few best practices stand out from these popup A/B tests:

  • Mobile dominates. 7,200 views came from mobile vs. 862 on desktop. Mobile popup design isn't optional anymore.

  • Modest discounts are enough. Even 5–10% discounts were sufficient to drive meaningful visitor engagement.

  • Segmentation improves both tests. Running separate campaigns for prospects and subscribers meant each popup A/B test was measuring a real difference in message relevance, not just audience variation. So, the results were cleaner and more actionable.

  • Targeting returning visitors improved lead quality. Concentrating on visitors who'd already shown interest produced higher-quality leads and better website conversions than broad targeting.

The results across both campaigns:

  • 25.3% CTR

  • 558 conversions

  • £31,443 in revenue.

performance of gamified popups

Main takeaway from this popup A/B test example:

Small wins add up. Better timing, smarter segmentation, a simpler design — each one is a minor improvement. Run enough of them and you end up with a campaign that performs far better than the one you started with.

How to A/B test popups

  1. Choose a goal and success metric

  2. Write a hypothesis and choose what to test

  3. Create the first popup

  4. Set up the A/B test

  5. Build and publish your variants

Step 1: Choose a goal and success metric

Before you go to your popup builder tool, decide what success looks like — and pick the metric that measures it accurately.

Many optimize for CTR by default, but CTR measures clicks, not outcomes. A popup with a 20% CTR that drives no purchases is worse than one with a 10% CTR that generates significant revenue. The metric you choose determines what the test concludes.

The three success metrics in Wisepops:

  • CTR — percentage of visitors who clicked the CTA. Use this for lead generation campaigns where the click is the goal (email signup, form submission).

  • Order rate — percentage of visitors who completed a revenue-associated goal. Use this when the popup is designed to drive purchases.

  • Revenue per visitor — total revenue divided by all visitors exposed to the campaign, whether they clicked or not. Use this when you want a complete picture of commercial impact, particularly with control groups.

If you're testing with a control group — where one segment sees no campaign — CTR becomes meaningless as a primary metric (visitors who see nothing can't click anything). In that case, order rate or revenue per visitor are the right choices.
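
To make those definitions concrete, here's how all three metrics fall out of the same raw campaign numbers (the figures below are invented purely for illustration):

```python
# Hypothetical raw numbers for one popup variant.
visitors_exposed = 10_000   # everyone who was shown the popup
clicks = 1_200              # clicked the CTA
orders = 240                # completed a purchase
revenue = 14_400.00         # total revenue from those orders

ctr = clicks / visitors_exposed                   # 0.12  -> 12% CTR
order_rate = orders / visitors_exposed            # 0.024 -> 2.4% order rate
revenue_per_visitor = revenue / visitors_exposed  # 1.44  -> $1.44 per exposed visitor

print(f"CTR: {ctr:.1%} | order rate: {order_rate:.1%} | RPV: ${revenue_per_visitor:.2f}")
```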

Step 2: Write a hypothesis and choose what to test

One element per test. Write a hypothesis first: "Changing X to Y will improve Z, because [reason]." This keeps the test focused and makes the result useful even when you lose.

Weak:

"A different offer might convert better."

Strong:

"Replacing the free product giveaway with a store credit of equivalent value will increase signups, because credit is more universally applicable than a specific item — visitors can apply it to whatever they already want."

That second hypothesis came from a real test: a brand offering a free pillow vs. a $50 store credit found the credit drove 21% more signups, despite the pillow having higher perceived monetary value.

The reason almost certainly isn't offer size — it's relevance. Credit works for everyone; a pillow only works if you want a pillow.

Let's try a more advanced hypothesis:

Advanced:

"Adding a product-choice question as the first step — before asking for an email — will increase opt-ins because it creates micro-commitment and makes the popup feel personalized."

One ecommerce brand we work with tested exactly this: a standard multi-step email/SMS popup with countdown vs. the same flow with a gamified product-choice question inserted first.

The version with the product question achieved 17.9% CTR vs. 5.5% for the standard flow — a 225% increase in four days.

What to A/B test in popups to improve conversions, in rough priority order:

  • Offer type: free gift vs. store credit vs. percentage discount vs. fixed discount — not all incentives work equally for every visitor

  • First step: direct email signup form vs. yes/no micro-commitment vs. product-choice question before the signup form

  • Trigger timing: on-landing vs. exit-intent vs. delay (10s, 30s, after 2 page views)

  • Format: standard popup vs. full-screen vs. spin-to-win vs. corner slide-in

  • Multi-step design: discount reveal only vs. discount + product recommendations in the confirmation step

  • CTA copy: benefit-focused vs. urgency-focused — "Claim your discount" vs. "Offer ends tonight"

  • Visual: product image vs. lifestyle photo vs. no image

Step 3: Create the first popup

Build the first version — the baseline everything else is measured against. This is usually your current best-performing popup, or if you're starting fresh, the version you consider most likely to work.

In the Wisepops popup builder, open the popup dashboard and click New popup campaign. Choose a template from the gallery, customize it in the editor, and save.

Testing an existing popup campaign?

If you already have a live popup, you don't need to rebuild it.

Go straight to Step 4 — clicking the A/B Test icon on any active campaign will use it as your control automatically. The experiment tracks data only from the moment it starts, so prior performance of the original campaign won't skew results.

Step 4: Set up the A/B test

Hover over your campaign in the dashboard and click the A/B Test icon.

Alternatively, you can go to Experiments in the main menu and start your test there.

The first step is to choose to either duplicate the campaign (to build the variant from the same starting point) or select a different existing campaign to test against:

choosing second campaign variant

Next, traffic allocation.

By default, traffic splits 50/50. You can adjust this — for example, 80/20 if you want to limit exposure to an untested variant while still gathering data. Equal splits reach significance faster:

traffic allocation in popup ab tests

Control group: You can also exclude a portion of visitors from seeing any campaign at all.

This segment becomes your baseline for measuring incremental impact — how much revenue, engagement, or conversions the campaign is actually generating above what would have happened without it.

control group in popup ab tests
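
The incremental numbers are then simple arithmetic: compare the exposed group and the control group on the same metric. The figures here are illustrative only:

```python
# Hypothetical results once the experiment has run.
campaign_rpv = 1.44  # revenue per visitor among those who saw the popup
control_rpv = 1.20   # revenue per visitor among those who saw nothing

incremental_rpv = campaign_rpv - control_rpv     # $0.24 per visitor
relative_uplift = incremental_rpv / control_rpv  # 0.20 -> +20% vs. doing nothing

print(f"Incremental RPV: ${incremental_rpv:.2f} ({relative_uplift:+.0%})")
```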

Next, choose which success metric determines when the experiment concludes. The test is marked conclusive when one variant achieves 95% confidence over the others or the control group.

Your options:

  • CTR — share of visitors who clicked the CTA

  • Order rate — share of visitors who completed a goal with revenue attached

  • Revenue per visitor — average revenue across all exposed visitors

  • Campaign goal — if your campaign has a specific conversion goal set, that goal is available as a fourth option

success metric ab test

The metric you pick doesn't limit what you can see — all metrics are visible in the results view regardless. It only determines when the experiment is declared concluded.

Experiment conclusion: Choose between manual and automatic.

With automatic, Wisepops rolls out the winning variant to 100% of traffic once significance is reached — no action needed.

With manual, you set a maximum end date so your popup A/B test closes on a fixed timeline even if significance hasn't been reached; the best-performing variant at that point gets rolled out.

goal for popup ab test

Once you choose how the test concludes, Wisepops will let you know that initial results will be available in the dedicated Experiments dashboard in 24 hours.


Step 5: Build and publish your variants

Click your second campaign variant to open the editor (image below).

There, change the single element you defined in your hypothesis, and save.

second campaign for test

Lastly, publish from the campaign overview by choosing Published instead of the current Draft status.

The experiment begins tracking once the first visitor interacts with a variant.

publish ab test

How long to run a popup A/B test?

There's no fixed timeline. Statistical significance depends entirely on your traffic volume. Wisepops calculates this automatically using Bayesian statistics and notifies you by email when the threshold is met.

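Wisepops doesn't document its exact model publicly, but a common Bayesian approach for conversion-style metrics is to put a Beta posterior on each variant's rate and estimate the probability that one variant beats the other, roughly like this sketch:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000):
    """Monte Carlo estimate of P(rate_B > rate_A) under uniform Beta(1, 1) priors.

    Each variant's conversion rate gets a Beta(conversions + 1,
    non-conversions + 1) posterior; we sample both and count wins.
    """
    wins = 0
    for _ in range(samples):
        rate_a = random.betavariate(conv_a + 1, n_a - conv_a + 1)
        rate_b = random.betavariate(conv_b + 1, n_b - conv_b + 1)
        if rate_b > rate_a:
            wins += 1
    return wins / samples

# Hypothetical inputs: A converts 120/4,000 visitors, B converts 150/4,000.
print(f"P(B beats A) = {prob_b_beats_a(120, 4000, 150, 4000):.1%}")
# The experiment can be called once this probability crosses ~95%.
```
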
Looking at concluded experiments in Wisepops:

  • 44% of popup A/B tests that reached a winning variant did so within two weeks

  • High-traffic websites often get clear answers within one week

  • The average duration across all popup A/B tests is 41 days — but that's pulled up significantly by long-running tests, some of which were never properly concluded.

Call a winner only after statistical significance is reached, and don't edit variants after launch — any mid-test change invalidates the data already collected.

The test concludes only when one popup variant reaches statistical significance at 95% confidence — Wisepops sends an automated email when that happens:

email

How to read the results of popup A/B tests: example

Say you tested two versions of a welcome popup targeting first-time visitors.

The Experiments dashboard inside Wisepops popup builder shows these results:

results of ab testing in wisepops

Version B (the free gift) brought 42% more visitors into the funnel, but Version A (the percentage discount) generated 17% more revenue per visitor. The percentage discount scaled with what people spent; the free gift got attention but didn't change how much they bought.

The right call depends on your goal:

  • Optimizing for list growth? Version B wins because it engaged more people.

  • Optimizing for revenue? Version A wins by a meaningful margin.

This is why choosing your success metric before the test starts matters. Both conclusions are correct — they're just answering different questions.

What to do next: declare version A the winner on revenue, then move to the next variable — perhaps testing the $50 threshold itself, or whether showing the discount code immediately in the popup (rather than via follow-up email) increases order completions further.

Declaring a winner

Once significance is reached:

  • Click Duplicate as new campaign on the winning variant

  • This stops the experiment automatically

  • The new campaign starts with a clean performance record — previous test data doesn't carry over

  • Run your next test on a different element, using this winner as the new baseline

Note:

If results are inconclusive, move to a test with a larger potential impact. Offer type and trigger timing tend to produce bigger differences than visual changes like button color or font.

Common mistakes in popup A/B testing

The most consistent errors aren't running the wrong test — they're misreading the results:

  • CTR up ≠ revenue up. A delayed popup targeting more engaged visitors will almost always show a higher CTR than an immediate one, but that doesn't mean it's generating more total leads or revenue. Always check total lead volume and revenue per visitor alongside CTR.

  • Testing audience segments instead of campaign elements. Showing variant A to new visitors and variant B to returning visitors doesn't tell you whether the variant is better — it tells you that two different audiences behave differently, which you already knew. Keep targeting rules identical across variants unless segment response is specifically what you're testing.

How to calculate statistical significance in popup A/B tests

Enter your visitors and conversions for each variant into an A/B test significance calculator, and it instantly tells you whether your result is statistically significant, or how much longer you need to keep the test running.

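If you'd rather reproduce the math yourself, the standard frequentist check behind calculators like this is a two-proportion z-test. A minimal version in pure Python, no dependencies:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (relative uplift of B over A, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, via the normal CDF
    return (p_b - p_a) / p_a, p_value

# Hypothetical inputs: A = 120 conversions / 4,000 visitors, B = 150 / 4,000.
uplift, p = two_proportion_z_test(120, 4000, 150, 4000)
print(f"Uplift: {uplift:+.1%} | p-value: {p:.3f} | significant: {p < 0.05}")
```

With these made-up inputs the uplift is +25%, but the p-value lands around 0.06: close to, yet short of, the 95% threshold, so the test should keep running.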

A/B testing popups: FAQ

What should I A/B test on a popup first?

Start with trigger timing or offer type — these consistently produce the largest differences. Copy and design tests are faster to run but tend to produce smaller gains.

See also: popup timing guide.

How long should a popup A/B test run?

Until you reach statistical significance — 95% confidence. Wisepops calculates this automatically. Across concluded experiments in Wisepops, 44% of tests with a declared winner reached significance within two weeks.

What's the difference between A/B testing popups and using a control group?

A/B testing compares two popup variants against each other. A control group compares a popup against no popup at all — measuring whether the campaign adds revenue above what visitors would have done anyway.

Can I A/B test popups with different triggers or targeting rules?

Yes, but be careful which metric you use to evaluate. A popup triggered after 8 seconds will show higher CTR than an immediate one — not because it's better, but because it only reaches visitors who stayed. Look at total lead volume and revenue per visitor, not CTR alone.

Learn more: complete guide to popup targeting.

How many variants should I test at once?

Start with two. Each additional variant slows time to significance. Run follow-up tests on additional elements once you have a winner.

Does basic popup A/B testing measure revenue impact?

Basic A/B testing measures attributed revenue — purchases from visitors who clicked your campaign. To measure true incremental impact (whether showing the campaign at all drives more revenue), you need a control group.

Can I edit a variant after launching the test?

No. Changes made after launch invalidate data already collected. Conclude the test first, then start a new one.

How is statistical significance calculated for popup A/B tests?

Statistical significance measures how confident you can be that a result is real and not due to random variation.

In popup split testing, the standard threshold is 95% confidence — meaning there's only a 5% chance the difference between variants occurred by chance. Wisepops calculates this automatically as data comes in, so you don't need to run the numbers manually.
