If your website has been up for a while, you might be wondering how to drive more people to it. Changing some parts of it could bring in that traffic. One tool you can use to tweak your website or other digital channels is split testing. Also known as A/B testing, it is a process that compares two versions of a web page, a control version (the "original") and a variation, to see which does better with online users. The goal of testing is to identify the version that results in more conversions.
If you want to test how effective your titles are at lifting conversion rates, you might perform a split test by coming up with two versions of the same title. So, "How To Build A Responsive Website For An E-Commerce Store" might become "5 Tips For Making Your E-Commerce Store Mobile-Friendly."
Ideally, the headline would be the only thing that changes between the two versions. However, we do not live in an ideal world, and the changes you observe in your website traffic might be due to various factors. To keep your data as clean as possible, here is a step-by-step guide to split testing.
Instead of a vague sense of wanting more traffic, you should have a data-driven reason for running a split test. Does your analytics data show that visitors are clicking through and then leaving your website right away? Do they visit only specific pages but not others? Knowing why you're testing will make the next steps more concrete.
Ask yourself what you are trying to improve and how your split test can confirm whether a specific change delivers that improvement. For instance, your hypothesis could be that "a more readable headline will make users click on the link and read the body copy, which will increase the time spent on the page." Your test results can then prove or disprove this statement.
In conversion optimization, results are generally considered acceptable once they reach 95 percent statistical significance. This level of significance means there is only a five percent chance that the difference you observed is due to random variation rather than the factor you tested. Your page needs statistically significant results, which you will only reach with the right sample size. In this context, your sample is the number of visits your pages get.
You can simplify the process of arriving at the sample size with tools like Optimizely's A/B Test Sample Size Calculator. You plug in the baseline conversion rate and the minimum detectable effect, and the calculator tells you how many visits you need for a statistically significant result. The baseline is your original page's conversion rate. The higher the baseline, the smaller your required sample size will be.
Meanwhile, the minimum detectable effect is the smallest relative change in conversion rate that you want the test to be able to detect. For example, with a 25 percent minimum detectable effect, you only treat lifts or drops in conversion rate of more than 25 percent as meaningful. The lower your minimum detectable effect, the more visits you will need to verify your hypothesis.
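To make those inputs concrete, here is a minimal sketch in Python (assuming scipy is installed) of the classical fixed-horizon calculation behind this kind of estimate. Dedicated calculators such as Optimizely's may use different statistical models, so treat the numbers as rough guidance rather than exact requirements.

from scipy.stats import norm

def visits_per_variation(baseline, relative_mde, alpha=0.05, power=0.80):
    """Estimate visits needed per page for a two-proportion test.

    baseline     -- conversion rate of the original page, e.g. 0.04 for 4 percent
    relative_mde -- smallest relative lift worth detecting, e.g. 0.25 for 25 percent
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)        # conversion rate you hope to detect
    z_alpha = norm.ppf(1 - alpha / 2)         # about 1.96 for 95 percent significance
    z_beta = norm.ppf(power)                  # about 0.84 for 80 percent power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

print(visits_per_variation(0.04, 0.25))       # roughly 6,700 visits per variation

Notice how the numbers move together: raising the baseline or the minimum detectable effect shrinks the required sample, while lowering either one inflates it.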
Your tests are not happening in a vacuum, and factors you didn't account for can skew your results. Though you cannot prevent every confounding variable from cropping up, you can keep them to a minimum.
For example, you can ensure that traffic sources and referring ads are the same for both pages. Also, note that obstacles can pop up in the middle of the test, not just at the beginning. You have to watch for factors that could produce misleading data throughout the test.
Once you've implemented the changes and minimized other variables, take one last look through all the components of your test before it goes live. Apart from the element you're testing, does the landing page look the same for both groups? Are both CTA buttons working? Are the links in the ads correct? Are the ads you are using identical? This last check will help you catch anything that fell through the cracks during the preparations.
Ensure that the traffic is coming from the same place and is not subject to the "selection effect." This happens in tests that send a captive audience to the test pages, which makes the results invalid from the start. For instance, if you send people from your email list to your pages, you're driving traffic composed of users already convinced of your product or service's usefulness. This is loyal traffic, which is not representative of the preferences of the rest of the internet.
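As a minimal sketch of how an even, consistent split can be enforced once visitors do arrive (the function name is hypothetical and not tied to any particular testing tool), you can bucket each visitor deterministically so the same person always sees the same version:

import hashlib

def assign_variation(visitor_id, test_name="headline-test"):
    """Deterministically bucket a visitor into 'control' or 'variation'."""
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "variation"

print(assign_variation("visitor-1284"))   # the same ID always returns the same group

Hashing on a stable visitor ID keeps the split close to 50/50 over time and prevents one person from drifting between versions mid-test.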
Keep running your test until you reach the sample size you identified in your calculations. Even if you hit that number in fewer than seven days, keep the test running for at least a full week. On some days of the week, visitors will be more receptive to marketing messages, so you need to account for those different conditions and see how they affect conversions.
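Expressed as a simple rule of thumb (the helper below is a hypothetical illustration), the test stops only when both conditions are met:

def ready_to_stop(control_visits, variation_visits, days_running, required_visits, min_days=7):
    """Stop only once both pages hit the target sample size and a full week has passed."""
    return (min(control_visits, variation_visits) >= required_visits
            and days_running >= min_days)

print(ready_to_stop(6800, 6750, 9, required_visits=6700))   # True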
When analyzing your results, keep in mind the minimum detectable effect you set for the test. While seeing a lift in your results is always an encouraging sign, anything smaller than your minimum cannot be a reliable indicator of your methods' effectiveness. Even if the results exceed the minimum detectable effect, it does not mean you can stop optimizing the page. The result means that the page performs better than it used to.
However, it is not proof that this is the best possible page for this product or service. If the test produced a worse variation, it is not a failure; it merely means you have found something that does not convert for your page. Keep testing, and eventually, you will find what works for your audience.
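If you want a quick way to check whether an observed difference clears the 95 percent significance bar before drawing conclusions, here is a minimal sketch in Python of a standard two-proportion z-test (the visit and conversion counts are made-up examples, and scipy is assumed to be available):

import math
from scipy.stats import norm

def conversion_p_value(conversions_a, visits_a, conversions_b, visits_b):
    """Two-sided p-value for the difference in conversion rate between two pages."""
    p_a = conversions_a / visits_a
    p_b = conversions_b / visits_b
    pooled = (conversions_a + conversions_b) / (visits_a + visits_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Example: 270 conversions from 6,750 control visits vs. 330 from 6,750 variation visits
p = conversion_p_value(270, 6750, 330, 6750)
print("significant at 95 percent" if p < 0.05 else "not significant yet")

A p-value below 0.05 corresponds to the 95 percent significance threshold mentioned earlier; above it, treat the lift as noise until you have more data.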
"Done is better than perfect" is a maxim most business owners believe. It's better to have a product out in the world and tweak it as needed, instead of working on it in a closed laboratory, chasing perfection, before releasing it to the public. This saying applies to everything in business, from the actual products and services to the websites you use.
When you're figuring out what works for your audience, it helps to have a partner who can steer you in the right direction. Ranked helps you refine your content strategy from start to finish. We provide affordable, top-notch SEO solutions for enterprises and agencies; get in touch with our team or activate your account today!