How to get bigger, quicker wins by optimizing your testing workflow
Last updated: November 2019
A poor experimental workflow can waste loads of your time. Here’s an extreme example: We’ve seen a company take six months to do something that took another company thirty minutes. That’s 8,760 times slower. (Six months is roughly 262,800 minutes; divide by 30 and you get 8,760.)
To grow quickly, you need to implement quickly, so our work with clients goes beyond suggesting what they should test; we build their in-house capability to “get stuff done.” This article describes a framework for speeding up your testing—so you can grow your profits quicker.
Many small changes or one big one?
If you’ve read the case study of our work with Crazy Egg, you may recall that the winning challenger homepage was much longer—and much more effective—than the control:
A reader recently asked us whether we had arrived at it via a series of iterations, or whether we had simply tested the new page against the old one.
We had done the latter.
But we wouldn’t always do that.
How much should you incorporate into each split-test? At one extreme, you could test every pixel change. At the other extreme, you could throw your whole year’s worth of ideas into one test. The ideal lies somewhere in between. But where? Here are some of the issues that our consultants consider when deciding how many changes to include in a single split-test. These points should help you to decide which approach is best for you.
(Note that this question applies to multivariate tests as well as to A/B/n split-tests. In both cases, you’re faced with the question of how much to change in each page element.)
Why people do—and don’t—run tests
First, consider the following chart, which shows the main reasons why people do—and don’t—run split-tests:
The reason to run a split-test (represented by the green arrow) is to learn how a particular change affects conversion.
However, there are two drawbacks (represented by the yellow arrows): (i) each test costs money and takes time to implement, and (ii) each test takes time to run.
In practice, each forthcoming test can feel like a departing bus. Ideally you would put each change onto its own bus. However, buses may not come as often as you’d like, so it can be wise to squeeze in several changes, rather than waiting for the next one to come along. In the bus analogy, the yellow arrows represent the cost of each bus. The green arrow represents how important it is for each change to have its own bus.
When should you give each change its own split-test?
You may want to split-test every small change if:
- The green arrow is long: In other words, you have a strong desire to learn how each change affects conversion, maybe for one of the following reasons:
- Because the change is expensive. For example, if you’re about to offer a bold guarantee, if you’re changing the price, or if you’re about to start giving away a premium (a free gift), you need to know how successful the change is, so you can work out if it’s cost effective.
- Because the stakes are high. For example, you may be planning to implement this particular change on other sites, on other pages or in other media (e.g., in offline advertising), so a bad decision would be costly.
- You’re testing changes that you aren’t confident will be effective, so you need a split-test to tell you whether they work. This is fair enough: most marketers are overconfident in their ability to spot a winner, and split-testing brings them down to earth with a bump.
- The upper, yellow arrow is short: The time and cost of implementing a test is low.
- The lower, yellow arrow is short: The time for a test to reach significance is low, perhaps because (i) the page gets a lot of traffic, so tests reach significance quickly, or (ii) you are testing changes that greatly outperform the control.
When should you cram many changes into one split-test?
You may prefer to include many changes in one split-test if:
- The green arrow is short. In other words, you’re okay not knowing how each individual change affects conversion.
- This may be because you’re testing many changes, and you’d be happy as long as the overall conversion rate increases.
- Or it may be because you’re highly confident that your changes will be effective. For example, you may be fixing things that are broken.
- The yellow arrows are long. This can happen for several reasons. For example:
- It takes you a lot of time and effort to get a test implemented. This can happen if (i) your workflows for creating content are inefficient, (ii) your company’s approval process is bureaucratic. Sometimes, “corporate brand police,” regulatory bodies and IT departments feel like goalkeepers who were put there to stop you from scoring, (iii) your development resources are inadequate, or (iv) your software and technology are poor or poorly integrated. Clients often ask us to help them improve these aspects of their business, aware of how much difference they make to their overall success.
- Your changes are intertwined, so it would be fiddly or impossible to split them into separate tests or to run them as a multivariate test.
- The page has few visitors, so a small improvement would take months to be detected (i.e., reach statistical significance). Multivariate testing (MVT) allows you to overcome this problem by carrying out several split-tests simultaneously on the same page. However, it usually takes more work to set up a multivariate test than a straightforward (A/B/n) split-test.
- You have an abundance of good, research-driven ideas to test, and implementation has become the bottleneck. There’s simply not enough time to implement and run each idea as a separate split-test. Also, this has an opportunity cost: While a profitable idea sits on your to-do list, you effectively lose money every day until it’s implemented.
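To get a feel for why low-traffic pages take months to test, it helps to run the numbers. The following sketch uses the standard two-proportion sample-size formula (two-sided test, 95% confidence, 80% power); the traffic figures and conversion rates are made-up illustrations, not data from any real campaign.

```python
from statistics import NormalDist

def days_to_significance(baseline_rate, expected_lift, daily_visitors,
                         alpha=0.05, power=0.8):
    """Rough estimate of how long a 50/50 A/B test needs to run,
    using the standard two-proportion sample-size formula."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for power=0.8
    p_bar = (p1 + p2) / 2
    # Visitors needed in EACH arm of the test:
    n_per_arm = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                  + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
                 / (p2 - p1) ** 2)
    return 2 * n_per_arm / daily_visitors  # both arms share the traffic

# Hypothetical example: detecting a 5% relative lift on a 3% conversion rate.
low_traffic = days_to_significance(0.03, 0.05, 500)     # 500 visitors/day
high_traffic = days_to_significance(0.03, 0.05, 20000)  # 20,000 visitors/day
print(f"500 visitors/day: about {low_traffic:.0f} days")
print(f"20,000 visitors/day: about {high_traffic:.0f} days")
```

On these assumed numbers, the low-traffic page would need over two years to detect a modest lift, while the high-traffic page needs about three weeks, which is why bundling several changes (or using MVT) becomes attractive when traffic is scarce.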
How to use these insights to grow your business faster
- Consider whether you might progress faster by including more—or fewer—changes into each test.
- Try to identify the bottleneck in your testing process—then remove it. For example,
- If you’re short of good test ideas, find ways of generating more of them. (This article and this one should help.)
- If you’re limited by the rate at which you can design and implement tests, look for ways of speeding things up. You may find that certain types of test are easier to implement than others. For example, resist the urge to change page layouts. Wireframing software is great, but the second you open it, you’re committing to hours or even days of design work. Instead, explore whether your idea could be implemented in a way that doesn’t disrupt the existing page layout. Also, optimize your company’s workflow for getting content approved, looking for opportunities to remove bottlenecks or to move the approval process upstream so work isn’t vetoed at the eleventh hour. If all else fails, recruit additional designers or writers. You can justify their costs by calculating how much the conversion rate would need to increase in order to pay for them.
- If your page doesn’t have enough traffic, look for ways of getting more. For example, if you have many landing pages, consider whether you’d benefit by sending more traffic to one page, on which you can then run much quicker split-tests. Also consider whether you can drive more traffic to the page. The curve of profit–bid price is usually shaped like this…
…so if you’re currently to the left of the peak, bidding more per click will get you more traffic without decreasing your profits.
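You can see the shape of that curve with a toy model. Everything below is a made-up illustration: the click-volume function simply assumes that higher bids buy more clicks with diminishing returns, and the conversion rate and order value are invented numbers, not benchmarks.

```python
def profit(bid, conversion_rate=0.03, order_value=80.0):
    """Toy daily-profit model: profit = clicks x margin per click."""
    clicks = 1000 * bid ** 0.5  # assumed: more bid -> more clicks, diminishing returns
    margin_per_click = conversion_rate * order_value - bid
    return clicks * margin_per_click

# Sweep bids from $0.10 to $3.00 and find the peak of the curve.
bids = [round(0.1 * i, 2) for i in range(1, 31)]
best_bid, best_profit = max(((b, profit(b)) for b in bids), key=lambda t: t[1])
print(f"peak profit at a bid of about ${best_bid:.2f}")

# Left of the peak, raising the bid buys traffic AND raises profit:
assert profit(0.4) < profit(best_bid)
# Right of the peak, raising the bid buys traffic but erodes profit:
assert profit(2.0) < profit(best_bid)
```

In this model the peak sits where the extra clicks stop paying for the extra cost per click; the practical point is the same as in the chart: find which side of the peak you’re on before deciding whether to bid more.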
In summary, your approach to testing depends on your situation. Sometimes it’s worthwhile split-testing every small change. At other times, when ideas are plentiful and tests are scarce, it’s wise to bundle several ideas into a single test.
Either way, if you identify—and remove—the bottleneck in your workflow, you can greatly increase the speed at which you grow your company’s profits.
© 2020 Implementra Limited trading as Conversion Rate Experts. All rights reserved.