What if I told you, though, that tests like A/B tests are not always the way to go? What if I told you that, as a small to mid-sized business, you could be wasting your time? What do you do then? Let’s dive into the question: how important is A/B testing for your site, email, and other assets?
From an earlier blog post of ours, Getting Started with A/B Testing: “A/B testing is a way to measure the effectiveness of one version of something, like a landing page or CTA, versus another. It’s an experiment where two versions of one piece of content are shown to two audiences (ideally of the same size) to see which one produces better results.”
You can A/B test almost anything in your emails, landing pages, website pages, paid ads, and more. The most important thing to note is that if you choose to A/B test, you need a reason for running the test, and you need to pick one element to test.
Before you decide to A/B test, think about the pros and cons of doing so and whether it’s worth the time and effort you’re going to spend in the process.
Experimentation. Any time you test, you’re experimenting with your audience to find out what they like and dislike about the content you put out.
Gain insight into what gets people to convert. While you’re figuring out more about your audience’s likes and dislikes, you’re also seeing what will get them to take the next step, whether that’s clicking through your email or downloading a piece of content. These insights will help you make future decisions, too.
Testing can be too small to make a true impact. The size of the audience you’re testing on matters. If you don’t test a large enough audience, how will you know the result is the true result? And if you act on that result, is the change really going to do anything? (A rough sample-size calculation after this list shows why audience size matters so much.)
It’s possibly a waste of time. Because an underpowered test doesn’t necessarily tell you anything, you may have spent a lot of time and energy experimenting for nothing. Think about what you could have done to make a true impact in the time it took you to figure out that email A performed better than email B.
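To make the audience-size point concrete, here’s a rough back-of-the-envelope sketch in Python. It uses the standard two-proportion sample-size formula; the baseline click rate and the lift you’re hoping to detect are made-up numbers, not benchmarks.

```python
import math
from statistics import NormalDist

def ab_sample_size(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed *per variant* to detect a change from rate p1 to p2
    with a two-sided test at the given significance level and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical scenario: a 3% email click rate you hope to lift to 4%.
print(ab_sample_size(0.03, 0.04))  # ~5,300 recipients per version, 10,600+ total
```

If your whole list is a few thousand contacts, you simply can’t reach a conclusive result on a change that small.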
So, should you be A/B testing at all? For most small and mid-market businesses, the short answer is no. A/B testing is amazing when it’s the right thing to do, but for the most part it can be time-consuming. This might come as a surprise, but hear Mike Donnelly, founder of Seventh Sense, and Doug Davidoff, CEO of Imagine, discuss why companies shouldn’t be A/B testing in this segment of Episode 71 of The Blackline Between Sales and Marketing podcast.
At the beginning of the segment, Mike makes a bold statement: “most if not all small to medium sized businesses shouldn’t be doing A/B testing whether it’s specifically an email, and a lot of instances on their website.” For most of them it’s a waste of time, energy, and resources, and it comes down to why you’re A/B testing in the first place and how you define it.
As mentioned before, A/B testing is amazing when it’s the right thing to do and when you have big enough data volumes to get a conclusive result from the test. But most small and mid-sized companies don’t have that kind of data volume.
Think about it this way: you go out to survey the Atlantic Ocean and gather 1,000 gallons of water. When you pull up the tank, you see there are no jellyfish in your sample, so you conclude that there are no jellyfish in the Atlantic Ocean. That’s an inconclusive result, because there are definitely jellyfish in the Atlantic Ocean; your sample was just too small to catch any. That’s what A/B testing is like for small to mid-sized companies. The worst part is that companies will then act on that inconclusive result. You should avoid doing the same thing.
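A tiny simulation makes the jellyfish problem concrete. The numbers here are hypothetical, but the pattern isn’t: with a small audience, two identical emails will regularly look like one “beat” the other by a wide margin.

```python
import random

random.seed(42)

def false_winner_rate(n_per_version: int, click_rate: float = 0.03, trials: int = 2000) -> float:
    """How often two *identical* versions differ enough that a team
    might crown a 'winner' (one gets 50%+ more clicks than the other)."""
    misleading = 0
    for _ in range(trials):
        a = sum(random.random() < click_rate for _ in range(n_per_version))
        b = sum(random.random() < click_rate for _ in range(n_per_version))
        if a != b and (min(a, b) == 0 or max(a, b) / min(a, b) >= 1.5):
            misleading += 1
    return misleading / trials

print(false_winner_rate(200))   # small list: phantom 'winners' show up constantly
print(false_winner_rate(5000))  # larger list: the noise mostly washes out
```

The clicks are pure noise in both cases; only the sample size changes. Acting on the first result is acting on jellyfish-free ocean water.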
So, as a small or medium-sized company, how do you go about testing to make sure that the results you’re getting are conclusive and meaningful? There are a few other tests that are a better fit.
You’ve probably heard of multivariate testing, but if this is a new concept for you, Optimizely has a great definition:
“Multivariate testing is a technique for testing a hypothesis in which multiple variables are modified. The goal is to determine which combination of variations performs the best out of all the possible combinations.”
The great thing about multivariate testing is that you can test multiple variables at once (though don’t test too many) on a page, and the outcome is small wins or losses that aren’t going to tank your performance.
Consider this option if you don’t know what you don’t know: it gives you an opportunity to test a multitude of things at once so you can get a feel for which changes will and won’t work. It’s also a great way to narrow down the areas you’d like to focus on, or the areas of your content you want people to be drawn to. Keep in mind, though, that every variable you add multiplies the number of combinations you have to test, as the sketch below shows.
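Here’s a quick sketch of how those combinations stack up, and why multivariate tests need even more traffic than a simple A/B test. The page elements and variants are hypothetical.

```python
from itertools import product

# Hypothetical elements of a landing page, each with a few variants.
headlines = ["Grow your pipeline", "Stop guessing at marketing"]
hero_images = ["team_photo", "product_screenshot"]
cta_buttons = ["Get the guide", "Start free", "Talk to us"]

combinations = list(product(headlines, hero_images, cta_buttons))
print(len(combinations))  # 2 x 2 x 3 = 12 distinct versions of the page

# Each combination needs enough visitors on its own to produce a
# meaningful result, so the required traffic multiplies just as fast.
for combo in combinations[:3]:
    print(combo)
```

Three elements already mean a dozen page versions, which is why “don’t test too many” is the rule.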
When we do testing here at Imagine, we look at these tests in a different light: we test to find the outliers in the results and to learn from what’s working or not working. When you test to optimize, you throw out the outliers; when you test to learn, you leave them in. Doing so allows you to ask why those outliers are happening, what they mean for your site, and what you can learn or improve moving forward. If you’re optimizing, by contrast, you ignore the outliers so you can make a clean decision on which version(s) to go with.
The other test you can perform is what we call hypothesis-driven growth. You might be familiar with its other name: Bayesian testing.
This type of testing works like this: you send out email A today, see what happens, make some hypotheses, and decide what you’re going to change in the next email you send. That next email becomes email B. Again, you look at what happens, make some hypotheses, and decide what to change next. Email B then becomes the new email A, and the email you send after it becomes email B again. The cycle continues indefinitely: each email feeds into the next.
The importance of this type of testing is that you’re always testing, always hypothesizing, always learning, and always improving. It’s a never-ending cycle of growth.
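If you’re curious what the “Bayesian” part actually looks like, here’s a minimal sketch of the email-A-feeds-email-B loop as a Beta-Binomial update: each send’s results update your belief about the click rate, and that belief shapes the hypothesis for the next send. The send and click counts are made up for illustration.

```python
def update_belief(alpha: float, beta: float, clicks: int, sends: int) -> tuple[float, float]:
    """Conjugate Bayesian update: prior Beta(alpha, beta) plus observed clicks."""
    return alpha + clicks, beta + (sends - clicks)

alpha, beta = 1.0, 1.0  # flat prior: we assume nothing about the click rate yet

# Hypothetical history: (clicks, sends) for each email in the cycle.
history = [(30, 1000), (48, 1000), (52, 1000)]
for i, (clicks, sends) in enumerate(history, start=1):
    alpha, beta = update_belief(alpha, beta, clicks, sends)
    posterior_mean = alpha / (alpha + beta)  # current best estimate of the click rate
    print(f"After email {i}: estimated click rate = {posterior_mean:.1%}")
    # ...hypothesize, change one thing, and the next send becomes the new test.
```

Unlike a one-off A/B test, nothing here ever “ends”; every send just sharpens the estimate and feeds the next hypothesis.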
So is there a danger to testing? Or should I say, a danger of not testing?
Yes, there’s the danger of inconclusive results leading you to make bad decisions, but there’s also a danger when you find that things are working. When that happens, teams get complacent and stop testing. They’ll only begin testing again when something goes wrong or drops off unexpectedly.
Never stop testing. Whether it’s A/B, multivariate, hypothesis-driven growth, or something else, you need to keep testing. And you need to go into every test with a hypothesis. Even if your hypothesis is that doing the same thing as last time will produce similar results, that hypothesis will keep you connected with your work and the outcomes that come from it. That way, when something doesn’t perform the way you hypothesized, you can go back and see where things went differently than expected.
Even after a test is over and you see that version A performed better than version B, keep testing to try to disprove that version A is really better. This will not only keep you testing, but it will also help you avoid making bad decisions based on inconclusive results.
Testing plays a big role when it comes to making decisions and changing things up for your emails, website, landing pages, ads, and more. The type of test you choose, the why behind it, and how you use the results largely determine whether you make an impact on your assets. Don’t test just to test. Don’t test insignificant things that aren’t going to matter. Test to learn. Test to make a mark and make a difference for your company and your customers and/or prospects. And for all the small and medium-sized companies out there, consider using something other than A/B testing.