Sample Ratio Mismatch
A validity problem in A/B tests where traffic is not split in the expected ratio between variants: for example, a 50/50 test where one variant receives 47% and the other 53% of traffic, at a sample size where that gap cannot plausibly be chance. SRM typically signals a technical implementation problem: a flickering element, bot traffic imbalance, or incorrect trigger logic. Any test with SRM should be invalidated and rerun, as the results are statistically unreliable regardless of what the conversion data shows.
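Detecting SRM is mechanical: compare the observed traffic split to the intended ratio with a chi-squared goodness-of-fit test. The Python sketch below is a minimal illustration with made-up counts; scipy and the p < 0.001 threshold are common choices, not requirements of this definition.

```python
# Minimal SRM check: chi-squared goodness-of-fit test of the observed
# traffic split against the intended 50/50 ratio. Counts are made up.
from scipy.stats import chisquare

observed = [4700, 5300]                 # visitors assigned to A and B
total = sum(observed)
expected = [total * 0.5, total * 0.5]   # what a true 50/50 split predicts

stat, p_value = chisquare(f_obs=observed, f_exp=expected)

# A very low p-value (p < 0.001 is a common SRM threshold) means the
# split is unlikely under the intended ratio: debug the assignment
# pipeline before trusting any conversion numbers.
if p_value < 0.001:
    print(f"Likely SRM (p = {p_value:.2e}): invalidate and rerun the test.")
else:
    print(f"No SRM detected (p = {p_value:.3f}).")
```

With the 47/53 split from the definition above, this check returns a p-value far below 0.001, which is why a gap that size on ten thousand visitors is a red flag rather than noise.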
How Sample Ratio Mismatch works in practice
Sample Ratio Mismatch matters most when teams are making decisions around landing page clarity, conversion friction, trust, and user decision-making based on experiment results, because an unnoticed SRM quietly invalidates those results. The short definition gives the surface meaning, but the practical value comes from knowing when this check should actually influence strategy and when it should not.
In real-world work, Sample Ratio Mismatch is rarely important on its own. It becomes useful when paired with clean measurement, a sound page or funnel structure, and a clear understanding of which business outcome needs to improve. It is closely connected to A/B Testing, Statistical Significance, and Multivariate Testing, because those concepts shape how Sample Ratio Mismatch is detected and acted on in practice.
A good way to use Sample Ratio Mismatch is to treat it as a decision aid rather than a vanity number: run the check before reading conversion results, and let a failed check invalidate the test. If it is being tracked without any operational consequence, it is probably being overvalued.

This term sits in the CRO category, which means it is most useful when evaluating landing page clarity, conversion friction, trust, and user decision-making. The goal is not to memorize the label but to know when it should change a decision, a page, a campaign, or a measurement setup.
Related terms
A/B Testing
A controlled experiment comparing two versions of a page, ad, or email to determine which performs better for a defined metric. Statistical significance is required before declaring a winner and rolling out changes.
Statistical Significance
A measure of confidence that an observed difference between test variants is not due to random chance. In A/B testing, results are typically considered significant at the 95% confidence level, meaning that if there were no real difference, a lift at least this large would occur by chance less than 5% of the time (see the z-test sketch after this list). Stopping a test before reaching significance is one of the most common and costly mistakes in CRO: the result looks like a win, but the lift evaporates once the test ends.
Multivariate Testing
An experiment testing multiple page elements simultaneously to find the highest-performing combination, for example testing three headlines and two CTA colours in a single test. MVT can detect interaction effects that A/B testing misses, but requires 5–10× more conversion volume per variant to reach statistical significance. Best reserved for high-traffic pages where single-element A/B tests have been exhausted.
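To make the 95% threshold concrete, here is a minimal sketch of the check behind it, a two-proportion z-test in Python. The conversion counts are hypothetical and real testing platforms apply their own corrections, so treat this as an illustration rather than a production calculator.

```python
# Two-proportion z-test behind the "95% confidence" threshold above.
# Counts are hypothetical; the pooled-variance formula is the textbook one.
from math import sqrt
from scipy.stats import norm

conv_a, n_a = 480, 10_000   # conversions and visitors, variant A
conv_b, n_b = 540, 10_000   # conversions and visitors, variant B

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))                         # two-sided p-value

print(f"lift: {p_b - p_a:+.2%}, z = {z:.2f}, p = {p_value:.3f}")
# Significant at the 95% level only if p < 0.05, and only if the test
# ran to its planned sample size rather than being stopped at a peak.
```

In this made-up example, a 0.6 point lift on 10,000 visitors per variant lands at roughly p = 0.054, just short of significance: exactly the kind of borderline result that tempts teams into calling a winner early.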
Learn more: related articles
How User Behaviour Tells You to Improve Your Website
Most conversion problems are not traffic problems. The fix is on the page. User behaviour data — scroll depth, heatmaps, rage clicks, session recordings and form drop-offs — shows you exactly where visitors are losing interest and why. This is how CRO actually works in practice.
How to Track Conversions in Google Analytics 4 (Step-by-Step)
A practical step-by-step guide to set up GA4 conversion tracking correctly using GTM, event naming standards, and validation workflows.
CRO in 2026: How to Systematically Improve Conversion Rate Without More Traffic
Getting more traffic is expensive. Converting the traffic you already have is the highest-ROI activity in digital marketing. Here is the systematic CRO framework we use with clients.
