When you first start thinking about CRO (conversion rate optimization), it can seem like an intimidating topic. You may wonder, “where should I start?” That’s because there are a lot of different test setups you could use (A/A, multivariate, A/B, A/B/A, and cross browser/device testing, to name a few), and a lot of questions that need to be answered.
All of that takes time and brainpower to sort out. Don’t worry too much, though; this guide is here to help. It walks you through my research and lays out the pros and cons of the most popular test setups, so you can quickly decide which of these experiments fits your company’s online goals.
A/A Testing
First off, it’s good to understand what an A/A test is. Simply put, it’s testing a landing page or advertisement against an exact copy of itself. The purpose of this test is to validate your testing software and setup, surfacing any serious errors that need to be corrected before you run a real experiment. For a deeper explanation, look here.
Pros:
- It validates the split testing tool and the test’s format
- It helps you avoid crippling errors
- You may not need A/B tests if the ad is successful enough
Cons:
- It takes up a lot of valuable experimenting time
- A/A tests need many runs (100 or more) in order to give statistically significant data
Only use A/A testing if you have the time and money for a detailed experiment.
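To see why A/A tests need so many runs, here’s a minimal Python sketch (all numbers are hypothetical) that simulates one. Both “variants” share the exact same true conversion rate, yet their measured rates can still differ purely from random noise:

```python
import random

def simulate_aa_test(visitors_per_variant=1000, true_rate=0.05):
    """Both variants have the SAME true conversion rate; any measured
    difference between them is pure sampling noise."""
    conversions_a1 = sum(random.random() < true_rate
                         for _ in range(visitors_per_variant))
    conversions_a2 = sum(random.random() < true_rate
                         for _ in range(visitors_per_variant))
    return (conversions_a1 / visitors_per_variant,
            conversions_a2 / visitors_per_variant)

rate1, rate2 = simulate_aa_test()
print(f"A1: {rate1:.1%}  A2: {rate2:.1%}  gap: {abs(rate1 - rate2):.1%}")
```

Run it a few times and the gap jumps around, which is exactly why a single A/A run proves very little.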
Multivariate Testing
Multivariate testing is when you make many slight changes across several elements of the pages you’re testing, so every combination of changes gets its own variant. The purpose of this design is to discover the ideal combination of elements, which you can reuse in future campaigns. Below is an example of one part of a multivariate test (original above, variant below):
Pros:
- Multivariate gives you detailed test results of how variables interact with each other
- You could use changes for future campaigns
- It lets you test subtle changes and see which individual elements matter most
Cons:
- Multivariate is hard to set up and manage
- It needs a lot of site traffic to be accurate
Businesses with skilled CRO professionals and a healthy budget do well with this type of test.
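The traffic requirement grows quickly because every combination of changes becomes its own variant. A quick back-of-the-envelope calculation (all figures below are hypothetical):

```python
# Hypothetical multivariate test: each page element has a few options.
headlines = 3
hero_images = 2
button_colors = 2

# A full multivariate test shows every combination, so the variant
# count is the product of the options per element:
combinations = headlines * hero_images * button_colors
print(combinations)  # 12 variants

# If each variant needs roughly 1,000 visitors for a stable read
# (an assumed figure), the whole test needs about:
print(combinations * 1000)  # 12,000 visitors
```

Add one more element with three options and you’re suddenly at 36 variants, which is why only high-traffic sites can run these well.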
A/B Testing
This is the classic test, commonly used to pit two variants of a page against each other. If you want ideas for which attributes to change in A/B tests, here’s a good list of them: http://backlinko.com/conversion-rate-optimization.
Check out more examples here:
Pros:
- Test rounds end quickly
- A/B tests are less complex
- They don’t need a large amount of site traffic
- You can install advanced analytics tools like heat mapping, phone call tracking, etc…
Cons:
- You may need to run many tests to pinpoint exactly which factor makes sales increase
This test is perfect for smaller companies who want results quickly.
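When an A/B round ends, you still need to check whether the difference is more than noise. A standard way to do that is a two-proportion z-test; here’s a small Python sketch using made-up numbers:

```python
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart are
    the two conversion rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical result: A converted 50/1000 visitors, B converted 70/1000.
z = z_score(50, 1000, 70, 1000)
print(f"z = {z:.2f}")  # z is about 1.88, just short of the ~1.96
                       # cutoff for significance at the 95% level
```

In other words, even a 5% vs. 7% result on a thousand visitors per variant isn’t quite conclusive on its own; that’s why the “many tests” con above matters.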
A/B/A Testing
This is a test where you combine A/A testing with A/B testing. Each A variant gets 25% of the traffic (50% combined), while the B version gets the remaining 50%.
Pros:
- Gives validation of the testing tool/set-up
- Gives you the quick results that you would get from an A/B test
Cons:
- The reporting interface may need to be more complex, because you are dealing with an odd number of variants
- The A variants have a higher chance of statistical error, because each has a smaller sample size than the B version of your page
This is a good option for smaller companies with a smaller budget if they want to make sure that they set up their split test correctly.
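The 25/25/50 split itself is simple to implement. Here’s a minimal Python sketch using weighted random assignment (the variant names are made up for illustration):

```python
import random

def assign_variant():
    """Randomly assign a visitor to A1, A2, or B using the 25/25/50 split."""
    return random.choices(["A1", "A2", "B"], weights=[25, 25, 50])[0]

# Rough sanity check of the split over many simulated visitors:
counts = {"A1": 0, "A2": 0, "B": 0}
for _ in range(10_000):
    counts[assign_variant()] += 1
print(counts)  # B should land near 5,000; each A near 2,500
```

Most split-testing tools let you configure these weights directly, so you’d rarely write this yourself; the sketch just shows what the tool is doing under the hood.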
Cross Browser/Device Testing
The purpose of these tests is to figure out how a different platform changes the way a potential customer interacts with your material. Many people also use cross browser/device testing to validate that their testing tool and set-up work correctly, so they don’t need to run an A/A test.
Pros:
- These tests are a quick and easy stand-in for the A/A design; the different browsers and devices can expose errors that you can quickly fix
Cons:
- Takes up valuable time. This test isn’t crucial to run.
- Deals with other variables that could skew data, such as fewer people using Internet Explorer compared to Google Chrome.
Use this type of test if you want a quick and inexpensive way to validate your split test set-up.
After all of this, I want to remind you that there’s no perfect test design and no one-size-fits-all solution. But there are no duds that will never work, either. If you try out a design from this guide and it fails, don’t fret. Dust yourself off and run another test. You may get some unexpected results from your efforts.