Email Marketing Conversion Optimization: Conducting your Experiment
Email marketing remains an integral part of community outreach and engagement for businesses across the globe, even with the advent of social media platforms like Facebook and Instagram and their messaging functions. That's why it is so important for businesses to continuously find out what works in their emails, what makes their customers tick, and what doesn't. Today, I'll show you how to make sense of your email metrics and how to conduct effective experiments to improve them.
1. Email marketing metrics
If you're just getting started with email marketing metrics, the main metrics you'll see are:
- Number of opens/views
- Open/view rate (on sent) = number of emails opened / number of emails sent
- Number of clicks
- Number of unique clicks (one person could click multiple links in your email)
- Click-through rate = number of clicks / number of emails sent
- Click-to-open rate = unique clicks / unique opens
- Number of conversions, i.e. the number of target actions you wanted the recipient to take, such as "sign up", "inquire now" or "invite friends"
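As a quick sketch, the rate formulas above can be computed directly from the raw counts your email platform reports. The counts below are made-up example numbers, not real campaign data:

```python
# Minimal sketch of the rate formulas above, using made-up example counts.
emails_sent = 10_000
unique_opens = 2_400
total_clicks = 510
unique_clicks = 380
conversions = 45

open_rate = unique_opens / emails_sent             # opens / sent
click_through_rate = total_clicks / emails_sent    # clicks / sent
click_to_open_rate = unique_clicks / unique_opens  # unique clicks / unique opens
conversion_rate = conversions / emails_sent        # target actions / sent

print(f"Open rate:          {open_rate:.1%}")
print(f"Click-through rate: {click_through_rate:.1%}")
print(f"Click-to-open rate: {click_to_open_rate:.1%}")
print(f"Conversion rate:    {conversion_rate:.2%}")
```

Keeping the denominators straight matters: click-through rate divides by emails sent, while click-to-open rate divides by unique opens, so the two can tell very different stories about the same campaign.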
2. Making sense of your metrics – questions to ask
Before diving into the sea of metrics and data from your awesome e-newsletter, you first have to ask yourself: "What is the purpose of this e-newsletter?" Is it brand awareness, driving more traffic to your new travel page, getting more people to sign up for your newsletter, or driving more purchases of your products? Determining the purpose of your e-newsletter tells you which metric matters most. If you're looking to drive more traffic to your site, the number of clicks would probably be the most important metric to track. If you want brand awareness, pay more attention to the number of opens and the open rate. And if you want more sign-ups, check whether there is a way to track the number of clicks on the "sign up" button in your e-newsletter and the number of sign-ups site-wide.
All too often, marketers make the mistake of presenting their metrics as-is to their managers or board of directors. Instead, start with the purpose of your e-newsletter, focus on the metrics that measure whether that purpose has been fulfilled, and explain why the campaign performed the way it did.
3. Conducting your experiment
Now that you've got your metrics and targets, how are you going to improve on them? This is where "conversion optimization", i.e. conducting email marketing experiments, comes into play. Here are some tips on how to run an effective and accurate experiment:
-Define your hypothesis
A big mistake marketers make is treating testing as an end in itself: they keep testing their e-newsletters without knowing why they are testing. It is important to have a hypothesis or question you want to investigate before you design your experiment. For example: "a blue button will encourage more customers to sign up than a red one", or "customers will open our emails more if they are personalized".
-Set the target metric that will indicate the success or failure of your test
Next, identify the metric(s) that will best indicate whether your test succeeded (clicks, opens, etc.). Most metrics will be affected by your test to some degree, but you'll want to focus on the most relevant one to keep your analysis and your report to your boss clear. For the first hypothesis above, "a blue button will encourage more customers to sign up than a red one", I wouldn't recommend looking at the number of opens or the open rate, but rather at clicks and the number of sign-ups (i.e. conversions) generated via the email.
-Design your experiment: set a control group
It is important to conduct an A/B test instead of simply rolling the change out to your entire e-newsletter audience. Otherwise, other factors like seasonality, customer behaviour and market trends could have impacted the performance of your e-newsletter, and you wouldn't be able to attribute the results to your change.
Let’s use the same “blue button” hypothesis above as an example.
Group A: 15% of the community will receive the e-newsletter with a blue sign-up button
Group B: 15% of the community will receive the e-newsletter with a red sign-up button
Splitting the audience this way makes your test more reliable by minimizing bias from seasonality and market trends that could also affect your metrics.
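One simple way to carve out the two 15% groups is a seeded random shuffle, so the assignment is unbiased but reproducible. This is only a sketch; the recipient list, group size and function name are all assumptions for illustration:

```python
import random

def split_test_groups(recipients, group_frac=0.15, seed=42):
    """Randomly assign group_frac of recipients to group A (blue button)
    and group_frac to group B (red button); the rest are held out."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = recipients[:]   # copy so the original list is untouched
    rng.shuffle(shuffled)
    n = int(len(shuffled) * group_frac)
    group_a = shuffled[:n]       # receives the blue sign-up button
    group_b = shuffled[n:2 * n]  # receives the red sign-up button
    holdout = shuffled[2 * n:]   # receives the usual newsletter
    return group_a, group_b, holdout

# Hypothetical community of 1,000 subscribers.
community = [f"user{i}@example.com" for i in range(1000)]
a, b, rest = split_test_groups(community)
print(len(a), len(b), len(rest))  # 150 150 700
```

Randomizing (rather than, say, taking the first 30% of a list sorted by sign-up date) matters because any non-random split can smuggle a hidden difference between the groups into your results.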
-Review your results and determine limitations
When reporting to your boss, go back to the very beginning: restate the hypothesis of your test, the target metrics you planned to track, how the experiment was carried out, and the final results. That said, don't present the raw numbers on their own, because the test may not go according to plan. What matters to your boss is the lessons learnt from the test: why was it worth spending money on? If, for example, the blue button did not generate as many sign-ups as you had hoped compared to the red one, what could the reasons be? Is it that your customers are accustomed to the red buttons used on your website, which has hard-wired them to read a red button as a call-to-action rather than a blue one?
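Part of reviewing results is checking whether the difference between the two groups is likely real or just noise. A standard two-proportion z-test is one common sanity check; the sign-up counts below are made-up numbers, not results from the article:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart
    are the two groups' sign-up rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled sign-up rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Made-up example: blue button got 60/1500 sign-ups, red got 45/1500.
z = two_proportion_z(60, 1500, 45, 1500)
# |z| > 1.96 roughly corresponds to 95% confidence the difference is real.
print(f"z = {z:.2f}; significant at 95%: {abs(z) > 1.96}")
```

In this made-up case the blue button "wins" 4.0% to 3.0%, yet the test is inconclusive at 95% confidence, which is exactly the kind of limitation worth flagging to your boss (and a reason to rerun the test, as the next section suggests).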
4. Don’t give up so easily
Some email tests take more time to bear fruit than others. For instance, "brand awareness" is a long-term objective that needs time to show results. Even changes to send frequency and subject lines need time, so I'd highly recommend running the test for a single hypothesis at least 2-3 times.
Best of luck!
Genin graduated with a B.A. (Political Science) from the National University of Singapore, after which she decided to venture out and explore the world of digital/online marketing. To date, Genin has written content articles for a wide range of clients including Propertyguru, Corpus.sg, tinkeredge.com and Bridal.ink. When Genin isn’t working or researching, she’s usually playing the classical guitar, musing about socio-political issues, cycling and kayaking.