CRO

Sequential Testing

Definition

A testing approach that evaluates results at multiple points during an experiment while keeping the overall false-positive rate under control. Sequential methods are useful for high-traffic programmes that need faster decision cycles than rigid fixed-sample tests allow. They require a pre-defined stopping framework; simply peeking at results every day is not sequential testing and leads to overstated lift.
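
To see why the stopping framework has to be pre-defined, consider the sketch below. It simulates an A/A test (no true lift) checked at five interim looks and calibrates a constant, Pocock-style z boundary so the chance of a false positive across all looks stays at 5%. The number of looks, traffic per look, and baseline conversion rate are illustrative assumptions, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)

def z_stat(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic on cumulative conversion counts."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_b / n_b - conv_a / n_a) / se if se > 0 else 0.0

looks, alpha, sims = 5, 0.05, 20_000
n_per_look, base_rate = 2_000, 0.05   # assumed traffic per look and baseline rate

max_abs_z = np.empty(sims)
for s in range(sims):
    conv_a = conv_b = n = 0
    peak = 0.0
    for _ in range(looks):
        conv_a += rng.binomial(n_per_look, base_rate)
        conv_b += rng.binomial(n_per_look, base_rate)   # A/A test: no true lift
        n += n_per_look
        peak = max(peak, abs(z_stat(conv_a, n, conv_b, n)))
    max_abs_z[s] = peak

# Boundary such that |z| crosses it at ANY of the five looks only 5% of the time.
boundary = np.quantile(max_abs_z, 1 - alpha)
print(f"Calibrated per-look |z| boundary: {boundary:.2f}")          # well above 1.96
print(f"False-positive rate when peeking at 1.96: {(max_abs_z > 1.96).mean():.1%}")
```

Running this shows the calibrated boundary sits well above the fixed-sample 1.96 threshold, and that naively declaring a winner whenever |z| exceeds 1.96 at any look pushes the real false-positive rate far past the intended 5%. That inflation is exactly the overstated lift the definition warns about.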

How Sequential Testing works in practice

Sequential Testing matters most when teams need experiment decisions faster than a fixed-sample test allows without giving up statistical validity. The short definition gives the surface meaning, but the practical value comes from knowing when the approach should actually influence testing strategy and when it should not.

In real-world work, Sequential Testing is rarely important on its own. It usually becomes useful when paired with cleaner measurement, stronger page or funnel structure, and a clear understanding of which business outcome needs to improve. It is closely connected to A/B Testing, Bayesian Testing, and Statistical Significance, because those concepts usually shape how Sequential Testing is measured or applied in practice.

A good way to use Sequential Testing is to treat it as a decision framework rather than a buzzword. If it helps the team ship winning variants sooner or abandon losing ones earlier, it is useful. If interim results are being checked without a pre-defined stopping rule, the method is being misapplied rather than used.

Why this matters

This term sits in the CRO category, which means it is most useful when evaluating landing page clarity, conversion friction, trust, and user decision-making. The goal is not to memorize the label but to know when the concept should change a decision, a page, a campaign, or a measurement setup.

Related terms

A/B Testing (Analytics)

A controlled experiment comparing two versions of a page, ad, or email to determine which performs better for a defined metric. Statistical significance is required before declaring a winner and rolling out changes.
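
As a quick illustration, here is a minimal fixed-horizon check using statsmodels' two-proportion z-test; the conversion and visitor counts are hypothetical.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical outcome data: conversions and visitors per variant.
conversions = [480, 540]        # control, variant B
visitors = [10_000, 10_000]

z, p = proportions_ztest(conversions, visitors)
print(f"z = {z:.2f}, p-value = {p:.4f}")
# Roll out variant B only if p < 0.05 at the pre-planned sample size.
```

With these made-up counts the p-value lands close to the 0.05 boundary, which is exactly the situation where a pre-registered sample size prevents a premature call.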

Bayesian Testing (CRO)

An experimentation approach that estimates the probability one variant is better than another as data accumulates, rather than relying only on fixed-horizon significance thresholds. Bayesian testing is often easier for non-statisticians to interpret because results can be framed as "Variant B has an 89% chance of beating control." It still requires good experimental design, sufficient traffic, and clear loss functions for decision-making.
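
The "chance of beating control" framing falls out naturally from a Beta-Binomial model. Below is a minimal Monte Carlo sketch, assuming uniform Beta(1, 1) priors and the same hypothetical counts as the example above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data and uniform Beta(1, 1) priors.
ctrl_conv, ctrl_n = 480, 10_000
var_conv, var_n = 540, 10_000

draws = 200_000  # posterior samples per variant
post_ctrl = rng.beta(1 + ctrl_conv, 1 + ctrl_n - ctrl_conv, draws)
post_var = rng.beta(1 + var_conv, 1 + var_n - var_conv, draws)

print(f"P(variant beats control) = {(post_var > post_ctrl).mean():.1%}")
print(f"Expected relative lift   = {((post_var - post_ctrl) / post_ctrl).mean():.2%}")
```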

Statistical Significance (CRO)

A measure of confidence that an observed difference between test variants is not due to random chance. In A/B testing, results are typically considered significant at the 95% confidence level, meaning a difference at least this large would appear by chance no more than 5% of the time if the variants truly performed the same. Stopping a test before reaching significance is one of the most common and costly mistakes in CRO: the result looks like a win, but the lift evaporates once the test ends.

Sample Ratio Mismatch (CRO)

A validity problem in A/B tests where traffic is not split in the expected ratio between variants — for example, a 50/50 test where one variant receives 47% and the other 53% of traffic. SRM typically signals a technical implementation problem: a flickering element, bot traffic imbalance, or incorrect trigger logic. Any test with SRM should be invalidated and rerun, as the results are statistically unreliable regardless of what the conversion data shows.
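
A standard way to detect SRM is a chi-square goodness-of-fit test on the observed traffic split. Here is a minimal sketch, assuming a planned 50/50 allocation and hypothetical visitor counts.

```python
from scipy.stats import chisquare

# Hypothetical visitor counts in a test designed for a 50/50 split.
observed = [47_000, 53_000]
expected = [sum(observed) * 0.5, sum(observed) * 0.5]

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.2e}")
# A tiny p-value (a common working threshold is p < 0.001) flags SRM:
# invalidate the test and debug the assignment logic before rerunning.
```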