A/B Testing
Definition
Comparing two versions of a webpage or element to see which performs better. Also called split testing. Essential for data-driven optimisation.
What is A/B Testing?
A/B testing (split testing) compares two versions of something to determine which performs better. Typically, half your visitors see version A, the other half see version B, and you measure which achieves better results.
How A/B Testing Works
- Create two versions (A and B)
- Split traffic between them randomly (see the bucketing sketch after this list)
- Measure a specific outcome (conversions, clicks, etc.)
- Compare results statistically
- Implement the winning version
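A minimal sketch of the random-split step, assuming each visitor carries a stable identifier; the `visitor_id` format, experiment name, and 50/50 split here are illustrative assumptions, not any particular tool's API:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "homepage-headline") -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing (experiment name + visitor id) gives a stable, roughly
    uniform split: the same visitor always sees the same variant,
    and different experiments bucket independently.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0-99
    return "A" if bucket < 50 else "B"  # 50/50 split

# The assignment is stable: repeated visits get the same variant.
print(assign_variant("visitor-123"))
```

Hashing rather than flipping a coin on each page load matters: a visitor who saw version A yesterday must still see version A today, or your measurements mix the two experiences.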
What Can You A/B Test?
Headlines
"Get a Free Quote" vs "Request Your Quote Today"
Call to Action Buttons
Colour, text, size, position
Page Layout
Different arrangements of content
Images
Photos, illustrations, or no image
Forms
Number of fields, layout, button text
Prices
Different price points or display formats
Copy
Long vs short, formal vs casual
Why A/B Testing Matters
Remove Guesswork
Data shows what actually works, not what you think works.
Incremental Improvement
Small wins compound over time.
Understand Users
Learn what resonates with your audience.
Risk Reduction
Test changes before full rollout.
A/B Testing Requirements
Sufficient Traffic
You need enough visitors to reach statistically valid results; low-traffic sites can take months to complete a single test.
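To put "sufficient traffic" in concrete terms, here is a rough per-variant sample-size estimate using the standard two-proportion power formula; the 3% baseline rate, one-point uplift, and 5%/80% significance and power settings are illustrative assumptions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_variant: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect the given
    difference in conversion rate with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = p_variant - p_base
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a lift from 3% to 4% conversion needs roughly 5,300
# visitors per variant - over 10,000 visitors for the whole test.
print(sample_size_per_variant(0.03, 0.04))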
Clear Metric
Know exactly what you're measuring.
One Variable
Change one thing at a time; if you change several elements at once, you can't tell which one caused the difference.
Patience
Tests need time to reach statistical significance.
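A minimal sketch of the significance check itself, using a pooled two-proportion z-test; the conversion counts below are invented for illustration:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference between two
    conversion rates, using the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative counts: 120/4000 conversions for A vs 152/4000 for B.
p = two_proportion_z_test(120, 4000, 152, 4000)
print(f"p-value = {p:.3f}")  # about 0.048, just under the usual 0.05 threshold
```

Dedicated tools run this kind of calculation for you, but knowing what the p-value means helps you resist calling a winner too soon.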
A/B Testing Tools
- Optimizely
- VWO
- PostHog (free tier available)
- Convert
- Simpler testing features built into many platforms
Common Mistakes
Ending Tests Early
Stopping a test the moment one variant pulls ahead inflates the false-positive rate; wait until the result reaches statistical significance.
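To see why early stopping misleads, here is a small simulation: it runs A/A tests (both variants identical, so any "winner" is a false positive) and peeks at the result every few hundred visitors, stopping at the first significant-looking difference. The trial count, traffic volume, and conversion rate are illustrative assumptions:

```python
import random
from statistics import NormalDist

def peeking_false_positive_rate(trials: int = 500, n: int = 5000,
                                peek_every: int = 500, p: float = 0.03) -> float:
    """Simulate A/A tests (no real difference between variants),
    peeking at every interval and stopping at the first 'significant'
    result. Returns the fraction that wrongly declare a winner."""
    z_crit = NormalDist().inv_cdf(0.975)  # 1.96 for a two-sided 5% test
    false_positives = 0
    for _ in range(trials):
        conv_a = conv_b = 0
        for i in range(1, n + 1):
            conv_a += random.random() < p
            conv_b += random.random() < p
            if i % peek_every == 0:
                pooled = (conv_a + conv_b) / (2 * i)
                se = (pooled * (1 - pooled) * 2 / i) ** 0.5
                if se > 0 and abs(conv_a - conv_b) / i > z_crit * se:
                    false_positives += 1
                    break
    return false_positives / trials

# With ten peeks per test, the false-positive rate climbs well above
# the nominal 5% - which is exactly why ending tests early is a mistake.
print(peeking_false_positive_rate())
```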
Testing Trivial Things
Focus on high-impact elements.
Ignoring Context
Seasonal factors and external events can skew results.
Not Implementing Winners
Testing is pointless if you don't act on results.