Doug hit inbox zero the other night, and apparently email is dead. Just like coffee doesn’t work anymore. All jokes aside, according to HubSpot’s data, email is nowhere near dead: sales email sends are up 78% year over year, and marketing email sends are up 36%. If you haven’t guessed already, we’re talking about email in today’s episode, along with the metrics you should be using to measure performance and whether or not A/B testing is important.
Show Notes
Editor's Note: If you haven't already, leave the podcast a review on either Apple Podcasts or Spotify.
BIG NEWS
We live in an attention economy, and everything in the digital space is fighting for our attention. Mike got an email from Doug the other night, during a time when he can usually focus on other things. It included the article referenced throughout this episode - You’re Likely Measuring Your Email Wrong (& It’s Killing Your Engagement). The post not only talks about baseball; it highlights the attention economy and discusses measurement in a way that’s easy for people to understand.
It’s time for a better term than “the attention economy,” because there’s so much noise today that there’s a “so what” factor to getting someone’s attention: “So what if your tweet got my attention? Half a second later my attention is elsewhere.” You should start looking at it as the engagement economy instead.
When you think about engagement and community, you have to be a storyteller. But before your market wants a story, they want their questions answered. Engagement isn’t uniform.
Similarly, people are killing their sales process and email engagement because of this. Why do you send email? For numerous reasons. So why do you measure every email with the same metrics?
Some emails should drive action, like the delivery email someone receives after downloading a piece of content. You want them to open that email and click through to the content; otherwise it failed. Did you really get engagement from a contact who filled out a form but never opened the content?
Here at Imagine we have what we call the Smart Growth Roundup, a biweekly curated email newsletter. We follow all the insights so you don’t have to, and we want to stay top of mind for the people who receive it. Its job is to make people want to see it, but we don’t expect them to open and click every time.
With emails like this, you shouldn’t expect opens or clicks on every single send. The email creates an inbox impression: the recipient sees your name in their inbox, and it’s a small reminder that you’re still there.
By looking at your emails holistically, you can start to understand whether you need a better content strategy, a better suppression strategy, or a different segmentation strategy. You also have to look at each email from the standpoint of its job to be done to determine which metrics you should pay attention to.
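To make that concrete, here’s a rough sketch of what “metrics by job to be done” could look like in practice. The job names, metric choices, and targets below are hypothetical illustrations of the idea, not numbers from the episode or from HubSpot:

```python
# Hypothetical sketch of "metrics by job to be done." Job names, metric
# choices, and targets are illustrative assumptions, not figures from the show.

EMAIL_JOBS = {
    "content_delivery": {"metric": "click_rate", "target": 0.40, "lower_is_better": False},
    "newsletter":       {"metric": "unsubscribe_rate", "target": 0.005, "lower_is_better": True},
    "sales_outreach":   {"metric": "reply_rate", "target": 0.05, "lower_is_better": False},
}

def grade_email(job: str, stats: dict) -> str:
    """Judge an email against the one metric that matches its job."""
    spec = EMAIL_JOBS[job]
    value = stats[spec["metric"]]
    ok = value <= spec["target"] if spec["lower_is_better"] else value >= spec["target"]
    verdict = "on track" if ok else "needs work"
    return f"{job}: {spec['metric']}={value:.3f} ({verdict})"

# A content-delivery email lives or dies on the click; a newsletter mostly
# needs to keep the list healthy while it builds inbox impressions.
print(grade_email("content_delivery", {"click_rate": 0.22}))
print(grade_email("newsletter", {"unsubscribe_rate": 0.002}))
```

The point isn’t the specific thresholds; it’s that each email type gets judged by its own job rather than by one universal open-and-click yardstick.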
Question of the Episode:
The question of this episode is: How important is A/B Testing for your site, email, etc.?
This is a topic that is near and dear to Mike’s heart, and he makes the bold statement that most, if not all, small to medium-sized businesses shouldn’t be doing A/B testing, whether on an email or on their website and landing pages.
Instead, businesses should consider multivariate testing. A/B testing is amazing when it’s the right thing to do, and the worst thing to do when the results are inconclusive and people take the wrong action based on them.
If you are going to A/B test, ask yourself what you are testing for. Likewise, if you multivariate test a page, you want the results to be statistically significant. The great thing about this approach is that you get only small incremental wins or losses to learn from, and it lets you try new things without making huge mistakes.
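To see why inconclusive results are the trap for smaller businesses, here’s a minimal sketch of a two-proportion significance check (Python standard library only; all traffic and conversion numbers are made up for illustration):

```python
# A minimal sketch (standard library only) of why small-sample A/B tests
# come back inconclusive. All traffic and conversion numbers are made up.
from math import sqrt, erf

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the normal CDF, via erf.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# The same 4% vs 6% lift at typical small-business traffic...
print(ab_test_p_value(12, 300, 18, 300))      # ~0.26 -> inconclusive
# ...and at 10x the volume, where it finally becomes significant.
print(ab_test_p_value(120, 3000, 180, 3000))  # ~0.0004 -> significant
```

An inconclusive result like the first one is exactly the danger described above: without enough traffic, whichever variant happens to be ahead looks like a winner, and people act on noise.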
Doug looks at this type of testing more in terms of who is converting than in terms of wins and losses. He believes that most changes you make don’t make any difference; he tests to find the outliers and surprises he didn’t expect, because those are what he learns from. If you’re testing to optimize, take the outliers out of the data. If you’re testing to learn, keep them in.
This is hypothesis-driven growth (you may also know it as Bayesian testing). The biggest danger in testing overall is that once something starts working, people stop testing until it stops performing well. Every time you take an action, whether you’re changing something or not, you should have a hypothesis about the outcome. That keeps you thinking about outcomes, even if the hypothesis is that the results will be the same as before.
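Since the episode name-checks Bayesian testing, here’s a hedged sketch of what that can look like for the same kind of data: instead of a binary significant/not-significant verdict, you estimate the probability that variant B actually beats variant A. The Beta(1, 1) priors and the traffic numbers are our own illustrative assumptions, not a tool from the show:

```python
# Hedged sketch of a Bayesian read on the same kind of data: estimate the
# probability that variant B truly beats variant A, instead of a yes/no
# significance verdict. Beta(1, 1) priors and all numbers are assumptions.
import random

def prob_b_beats_a(conv_a: int, n_a: int, conv_b: int, n_b: int,
                   draws: int = 100_000) -> float:
    """Monte Carlo estimate of P(rate_B > rate_A) under Beta posteriors."""
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

# With small traffic the answer is "probably B, but with real doubt" (~0.87),
# which fits the point about normalizing doubt rather than forcing a verdict.
print(prob_b_beats_a(12, 300, 18, 300))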
Takeaways:
Mike - A/B testing is the world where the silver bullets live, and you shouldn’t waste your time on it.
Doug - It’s the comfort of conviction over the discomfort of doubt. Normalize doubt, be clear on your why, and everything else will take care of itself.