In the world of abandoned cart emails, the A/B test, commonly known as a split test, is a powerful tool for increasing campaign performance over time.
We perform tests every day without even realizing it. You test the water temperature before hopping in the shower, or test the brakes on your car before a long trip. Testing shows us how well something works, if it's working at all, or if something else might work better.
What is A/B Testing?
If you are sending out the same emails to your customers over and over again and not getting the conversion rates you need, you want to find out why some emails produce conversions while others are sent to the trash bin. Sometimes it’s a simple change in layout or tone that does the trick, or maybe it’s a certain time of the day that sees the most results.
If a small adjustment could boost your conversion rate, you would probably implement that change immediately. Utilizing A/B testing is a great way to figure out what exactly is influencing your conversion rates, allowing you to make those small changes that can improve your bottom line.
As the name implies, it involves testing two different versions of an email (A and B) against each other to see which is more successful. Half of the customers in the sample group are sent version A of a given email, while the other half are sent version B. A and B are identical except for the factor that you want to test. For example, one sample set sees a CTA that says "Buy Now" while the other set sees a CTA that says "Review Cart".
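In practice, the split assignment should be deterministic, so the same customer always lands in the same bucket even across multiple sends. Here's a minimal sketch in Python; the `assign_variant` helper and the customer IDs are hypothetical, not part of any particular email platform's API:

```python
import hashlib

def assign_variant(customer_id: str, test_name: str = "cta_copy") -> str:
    """Deterministically assign a customer to variant A or B.

    Hashing (rather than random.choice) guarantees a given customer
    always sees the same version, no matter how often we compute it.
    """
    digest = hashlib.sha256(f"{test_name}:{customer_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Each customer lands in exactly one bucket, and the split is roughly 50/50:
variants = [assign_variant(f"customer-{i}@example.com") for i in range(1000)]
print(variants.count("A"), variants.count("B"))
```

Seeding the hash with the test name means the same customer can fall into different buckets for different tests, which keeps one experiment from biasing the next.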
Think of it like the control and variable groups in a science experiment. The version that results in more opens, click-throughs, or conversions is determined to be more successful. (More on what defines a "successful" A/B test a little later.)
A/B testing offers a glimpse inside the mind of your customer and the results can provide a wealth of information about their behavior. This allows you to make informed decisions about the direction of your email marketing campaigns based on cold hard data, ultimately improving your bottom line. With concrete evidence in hand, you can use precious time and resources more effectively and continuously improve over time.
If knowledge is power and testing is knowledge, then testing = power! You will definitely learn more from testing campaigns targeted to the stages in a customer's life cycle than from a seasonal campaign. Zeroing in on the specific reasons why you are testing will help you better achieve your desired results. Now that you know why to test, the only question you'll need to answer is: what do I want to test? Testing is scientific, so one must have a clear hypothesis before hitting the email marketing lab!
What can I test?
Anything that could have an effect on a customer's actions can be tested. As you can imagine, the possibilities here are many. (We will go into more detail on this later on.)
A few commonly tested examples when it comes to abandoned cart emails are:
- Email timing
- Subject line
- Copy length and tone
- Sender name / email
- Photography layout
- Color scheme
- CTA Copy
Keep reading and you’ll see detailed examples below.
How do I know an A/B test is successful?
In A/B testing, the goal is usually a 95% statistical confidence level. So what exactly does this mean? A result is deemed to be statistically significant if it is unlikely that the results can be attributed to chance. Do these results reflect a pattern that you can expect to see repeated over time, or is it simply the luck of the draw?
The mathematics behind this exact number are pretty heavy, so we'll try to simplify it here. There are three factors that you need to keep in mind when measuring the success of your A/B test.
- Confidence level – statistical confidence (or significance) measures how many times out of 100 the results will fall in the specified range. A 98% confidence level means there's a 98% chance the results are reliable and a 2% chance they are due to random chance. While this is arguably the most important factor in determining reliability, it's not enough to guarantee reliable results on its own.
- Margin of error – The number with a +/- sign beside the conversion rate is called the margin of error; you've probably seen this in political polls as well. If the conversion rate for a given test is 8.25% and the margin of error is +/- 1%, the actual conversion rate will be between 7.25% and 9.25%. The smaller the conversion rate range, the more accurate your results will be.
- Sample size – the number of visitors that are part of the test and the number of conversions that have resulted. As sample size increases, the conversion rate range decreases. How large a sample size do you need? That depends on the test itself. If there's a large difference in performance between variants, you'll need a smaller sample; if the difference is small, you'll need a larger sample.
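To make these three factors concrete, here's a rough sketch in Python of how you might evaluate a finished test. It uses a standard two-proportion z-test; the conversion counts are made up for illustration:

```python
import math

def ab_test_summary(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare two email variants with a two-proportion z-test.

    Returns each variant's conversion rate, its ~95% margin of error,
    and the confidence that the observed difference is not chance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Margin of error at 95% confidence: z * sqrt(p(1-p)/n), with z = 1.96
    moe_a = 1.96 * math.sqrt(p_a * (1 - p_a) / n_a)
    moe_b = 1.96 * math.sqrt(p_b * (1 - p_b) / n_b)
    # Pooled standard error for the difference between the two rates
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided confidence via the normal CDF
    confidence = math.erf(abs(z) / math.sqrt(2))
    return p_a, moe_a, p_b, moe_b, confidence

p_a, moe_a, p_b, moe_b, conf = ab_test_summary(80, 1000, 110, 1000)
print(f"A: {p_a:.1%} +/- {moe_a:.1%}")
print(f"B: {p_b:.1%} +/- {moe_b:.1%}")
print(f"Confidence the difference is real: {conf:.1%}")
```

With 80 vs. 110 conversions out of 1,000 sends each, the confidence clears the 95% bar; halve both samples and the very same conversion rates fall short of it, which is exactly why sample size matters.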
If you stop your test early or don't allow a long enough period of time, you risk the possibility that the results you are seeing are simply caused by random chance. This also means that you can't crown a winner until your test has reached statistical significance.
If you stop your test as soon as a winner has emerged from the pack, you probably aren’t getting an accurate picture of the results. The winner of the Kentucky Derby isn’t the horse that was in the lead for most of the race. No, it’s the four legged friend who gets over that finish line first. A/B testing is no different, except for the horses.
Mistakes to beware of when A/B testing
- Not reaching statistical significance – ending your tests too early.
As mentioned earlier, the winner isn’t always the first horse out of the gate. You need to allow enough time for the results to accurately reflect the big picture.
- Not paying attention to external factors
So July and August saw record setting rainfall in much of the country? Chances are you didn’t see the results you were hoping for when testing your summer campaign. The weather, current events, holidays, these can all have an effect on the success (or failure) of a given campaign.
- Testing a weak hypothesis
Are you testing CTA colors because you were told lime green is really in right now? Don't bother. Because tests require enough time to be significant, you must also test for something significant, like CTA copy. Rather than "a green button will result in more click-throughs" as your hypothesis, think "a more welcoming call to action will result in more click-throughs."
- Ignoring the small wins
Only one out of eight A/B tests will drive significant change… so don’t ignore the small wins. Rome wasn’t built in a day, so don’t think you can create a great campaign with one measly test.
- Not testing (like ever)
Get out there and test! Really, you’ve got nothing to lose and so much to gain. If your campaigns are sluggish and not giving you adequate or expected results, A/B testing is a tried and true way to get the info you need to start making positive changes.
Now that we've covered the whys behind A/B split testing, here's a range of things we've tested with our 350+ clients running abandoned cart email campaigns. Throughout the rest of the post, we'll also try to include as many real-world examples as possible:
The “From Name”
The From Name will likely be the first element of your email campaign that a customer interacts with. It’s your first opportunity to build trust, so don’t let it go to waste. We’ve had success testing From Names that include company name, customer name, job titles, and company departments. Whatever you do, please don’t send from No-Reply.
Here are some examples:
- Company – Acme Widgets
- Company newsletter – Acme Blog
- Company department – Acme Support
- Team member name – Suzanne Reynolds
- Team member name and title – Suzanne Reynolds, CEO
- Team member name and company – Suzanne Reynolds (Acme)
- Team member first name only – Suzanne
- Team member first name and company – Suzanne from Acme
Resource: HubSpot saw a significant increase in open rates when a team member’s name was used as the from name—”Maggie Georgieva, HubSpot”—versus their company name.
The “From Email”
The secondary actor to the From Name is the From Email, or the customer-facing email address that your campaign originates from. Most email clients don't show the From Email until a customer opens the message. Again, under no circumstances should you use a no-reply@ email address. Abandoned cart email campaigns are your opportunity to continue the conversation with an incredibly important group of customers. You want customers to respond, so make them feel as though their response will be received by an actual human being by using an intelligent From Email.
Here are some examples:
- Department – email@example.com
- Team member first name – firstname.lastname@example.org
- Team member first / last name – email@example.com
- Friendly generic – firstname.lastname@example.org
Since the psychology of subject line copywriting is another extensive topic in itself, we’re going to focus on high level ideas for running tests specific to abandoned cart emails:
- Localization – Create more context for customers by inserting their location in email subject lines.
- Include Incentives – If you're offering a discount or promotion, test including a reference to the incentive in your subject line vs. not including it.
- Urgency & Scarcity – Test adding the element of urgency or scarcity to your subject lines, especially if the discount you’re offering is time sensitive.
- Length – Don’t bother testing here. Despite the rationalized dogma and faulty statistical analysis out there, subject line length has no effect on open rates.
- Personalization – Test inserting your customer's first name into your subject lines. Specificity usually wins in email marketing, but in this case, proceed with caution. We've executed several tests on behalf of our clients where open rates actually decreased when personalized subject lines were used. This report from MailerMailer also indicates that this is not a guaranteed win. In the same vein, you can test including the name of one of the products the customer abandoned.
- Ask a Question – We’ve found that asking customers, “Was there a problem?” or “Did something go wrong?” is an effective way to generate qualitative customer feedback about friction points they experienced during checkout. Given how effective it is at eliciting responses from customers, try asking a question in your subject lines to see if it impacts open rates.
Resource: The team at Phrasee analyzed 700 million emails and compiled their findings into an awesome report called "Email Subject Lines that Sell".
Email preheaders provide an opportunity to communicate a secondary supporting benefit / message in the email preview pane. Think of preheaders as the supporting actors to your subject lines. You can test different facets of how your preheaders support the subject line, what benefits they espouse, and to what degree they use personalization.
Resource: Our team recently published an exhaustive piece covering coding and optimization ideas for email preheaders on our blog.
The styling of an email greatly impacts how customers perceive the message. Try testing a simple, plain-text variation of your abandoned cart campaign against a heavily designed HTML email. You might find that customers respond favorably to the creative that doesn’t appear as “produced”. This is particularly true for businesses that are primarily B2B.
Call to Action Copy
Try testing the copy of your call to action buttons. It's best to test one call to action change per test; this test will answer the question: does "View Cart" or "Complete Your Order" drive more clicks?
Resource: A recent test we ran on behalf of a Rejoiner client showed that the call to action copy “View My Cart” outperformed “Complete My Order” by 33.01% at a 99.36% confidence.
Product Image Variations
If you’re dynamically employing product imagery in your abandoned cart campaigns, you should consider testing how these products are laid out within the template. We’ve found that inserting one larger image of one item in the cart can perform better than a table of all items with smaller thumbnails. If you have great product photography, consider showcasing it with larger images in your emails instead of smaller thumbnails.
Closely related to the layout of the product photography are the business rules you use to decide which products are added to the template. Test with the most expensive, least expensive, or last product that was added to the cart to see which resonates more with your customers.
We tend to adhere to the mantra, “One click, one goal” when it comes to our campaigns, but many of our clients like to include some site-wide navigation in their email templates. Try testing a template with navigation against one without to see if it increases click-through or detracts from your main conversion goal.
Recommendations describe any products that are inserted into the email based on what you know about your customer. This could include recommendations for products that are complementary to the ones they abandoned, or their past purchase preferences. Upsells, cross-sells, similar products, or products that are frequently purchased together are all examples of different forms of recommendations.
Recently, we’ve seen some awesome examples of engaging, animated emails in the eCommerce world. Animation adds an additional interactive element to your campaign, which may lead to more click-through. Test an animated campaign vs. a static one.
Social proof can take many forms, but for eCommerce sites, it is most often user generated feedback/testimonials from customers who had a great experience with your business. Social proof can also focus on high level metrics, such as: how many customers you serve, the time you’ve been in business, or any facts about your company that would make a new customer feel more confident about giving you their business. Regardless of the type of social proof and how you include it in your email campaigns, try testing a campaign with and without social proof to see how your customers respond.
Customer Service Signature
We talked about testing the “From” name and email earlier. The closing signature of your email provides another opportunity to test infusing a more human element into your campaign. Take the opportunity to make your customer feel like they are getting extraordinary customer support from a real human being. You may also want to test including a photograph of someone on your support team or an email with a handwritten signature.
Personalization can include any tactic that is intended to create a more contextual experience for customers. This involves applying what you already know about their demographics, purchase preferences or previous history on your website. Some testing ideas around personalization include using a customer’s first name in the subject line, preheader, or salutation of the email. You can also personalize your campaigns based on geography, weather or buying affinities.
Offers & Discounts
Whether or not to include an offer/discount in an abandoned cart email program is a continuing dialogue in the eCommerce email marketing community. Our stance is to exercise caution when using offers. However, we see the value when they are employed in an intelligent fashion.
Here’s our philosophy: Offers shouldn’t be split tested until they have been hold out tested first.
Hold out tests seek to answer a very different question than split tests – they are primarily designed to compare the value of a test group (who receive a campaign) against the value of a control group of customers (who receive nothing). The comparison uses a "revenue per customer" metric and measures the true lift of sending a campaign vs. not sending it. A hold out test also tells you how much "room" you have to offer a discount based on that lift.
Hold out tests protect you from a very dangerous scenario: one where you are giving away a discount to customers who were going to purchase anyway. A campaign may look like it's performing amazingly well from a response perspective (opens, clicks, conversions), but you could actually be losing money on every discounted order that didn't need the incentive.
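Under the hood, a hold out test is just arithmetic on revenue per customer. Here's a sketch in Python; the revenue figures and group sizes are entirely hypothetical:

```python
def holdout_lift(test_revenue: float, test_customers: int,
                 control_revenue: float, control_customers: int):
    """Measure the true lift of sending a campaign vs. sending nothing.

    Compares revenue per customer in the test group (received the
    campaign) against the control group (received nothing). The lift
    is the extra revenue each mailed customer generated, which also
    bounds how deep a discount you can afford to offer.
    """
    rpc_test = test_revenue / test_customers
    rpc_control = control_revenue / control_customers
    lift_per_customer = rpc_test - rpc_control
    lift_pct = lift_per_customer / rpc_control
    return rpc_test, rpc_control, lift_per_customer, lift_pct

# Hypothetical quarter: 5,000 mailed customers vs. a 1,000-customer holdout
rpc_t, rpc_c, lift, pct = holdout_lift(61000, 5000, 10200, 1000)
print(f"Test: ${rpc_t:.2f}/customer, Control: ${rpc_c:.2f}/customer")
print(f"Lift: ${lift:.2f}/customer ({pct:.0%})")
```

If the lift per customer comes out smaller than the discount you're handing out, the campaign is destroying margin no matter how good its open and click rates look.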
Once you’ve verified that an offer is in fact generating real lift, you can split test other offers to see which drive a stronger response. We recommend running hold out tests once per quarter as a way to maintain accountability for the profitability of your email campaigns. This is a core feature of Rejoiner and we hold ourselves accountable by running these tests for our clients.
Offers & Discount Structure
If you choose to employ one, get creative with how you package your offers. Test offering a fixed dollar amount (Save $10) versus a percentage off (10% Off). We’ve also had success using those approaches while applying the discount only to shipping costs (Save $10 off shipping) or only to specific product categories ($10 Off All Camping Products).
As a follow-up to offers, we are often asked about best practices for structuring promo code syntax. Should the code be a string of letters and numbers that looks unique (XYZAZ2121)? Or something memorable that a customer can commit to memory (COMEBACK10) and use during their next purchase? Our answer, of course, is to test both approaches. Whichever route you take, we do recommend setting up a bank of one-time-use codes to ensure that special offers don't become public.
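Generating such a bank is straightforward. Here's a sketch in Python; the `COMEBACK` prefix and code format are illustrative choices, not a recommendation for any particular cart platform:

```python
import secrets
import string

def generate_code_bank(count: int, prefix: str = "COMEBACK",
                       length: int = 6) -> set:
    """Generate a bank of unique, single-use promo codes.

    A memorable prefix keeps the offer on-brand, while the random
    suffix makes every code unique, so one leaked code can't be
    redeemed publicly. Ambiguous characters (0/O, 1/I) are excluded.
    """
    alphabet = "".join(c for c in string.ascii_uppercase + string.digits
                       if c not in "0O1I")
    codes = set()
    while len(codes) < count:
        suffix = "".join(secrets.choice(alphabet) for _ in range(length))
        codes.add(f"{prefix}-{suffix}")
    return codes

bank = generate_code_bank(500)
print(len(bank))
```

Because codes are accumulated in a set, duplicates are silently regenerated, so the bank is guaranteed to contain exactly the number of distinct codes you asked for.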
Creating a sense of urgency is a tactic you can use to increase the likelihood that a customer will act on your email campaign. Time sensitive offers and low stock indicators are the two most effective uses of urgency we’ve seen for abandoned cart email programs. Test these in your campaigns by employing urgency in your subject lines and body copy.
We are often asked when each email in a follow-up sequence should be triggered. (For more in-depth information, check out our guide on this topic.) New customers are usually looking for an answer like, "Send an email 21 minutes after the cart is abandoned and you'll see the highest rate of conversion." Unfortunately, we've found that email timing doesn't really matter, as long as you send within the first 60 minutes of the cart being abandoned.
In the scenario I just described, we are triggering an email at a user-defined interval after an abandoned cart has been tracked. Subsequent emails fire in a similar fashion, all dependent on when the cart was first abandoned.
One could conceivably test other approaches to email timing, such as holding the first email until a specific time based on the customer's time zone (e.g., send Email #1 at 9 AM PST the following day). With this approach you'd be testing the performance of interval-based campaigns vs. choosing a specific time of day to send.
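Both timing strategies are easy to express in code. Here's a sketch in Python using the standard library's `zoneinfo`; the 30-minute interval and the 9 AM target are hypothetical defaults:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def interval_send_time(abandoned_at: datetime, minutes: int = 30) -> datetime:
    """Interval-based: fire a fixed delay after the cart is abandoned."""
    return abandoned_at + timedelta(minutes=minutes)

def local_morning_send_time(abandoned_at: datetime, tz: str) -> datetime:
    """Time-of-day based: hold the email until 9 AM the next day
    in the customer's own time zone."""
    local = abandoned_at.astimezone(ZoneInfo(tz))
    return (local + timedelta(days=1)).replace(
        hour=9, minute=0, second=0, microsecond=0)

# A cart abandoned late in the evening, UTC
abandoned = datetime(2024, 6, 1, 22, 15, tzinfo=ZoneInfo("UTC"))
print(interval_send_time(abandoned))                           # 30 minutes later
print(local_morning_send_time(abandoned, "America/Los_Angeles"))
```

The test here would be exactly the one described above: randomly assign abandoners to one scheduling function or the other and compare conversion rates once the sample reaches significance.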
Segmentation is another expansive topic and the discussion varies greatly depending on what business you’re in. For the purposes of developing A/B tests for your abandoned cart email campaigns, we see segmentation employed in a few different ways for testing:
- Testing different creative for high value vs. low value transactions
- Testing more aggressive offers for high margin product lines
- Testing offers specific to brands or product categories
Bonus: Gmail Quick Actions
If you’re a Gmail user, you’ve probably noticed quick actions being attached to some of your incoming emails in the preview pane. Though we haven’t seen these being used in an eCommerce setting all that often, we’d love to test using quick actions for abandoned cart emails. Stay tuned.
A/B testing can uncover what exactly makes your customers tick (or click, in this case). It's important to know what your objectives are. Do you want higher open rates? Do you want to increase click-through rates? Are you trying to find what makes your customers convert?
The options available for testing are many, so stick to testing one facet of your campaign at a time and you will be rewarded with a wealth of information about your customers: information that can be used to drive more successful campaigns and make intelligent marketing decisions based on cold, hard facts.
We’ve tested many of these ideas for our clients and have generated hundreds of thousands of dollars in incremental returns as a result. The evidence is plain and simple: A/B testing gets results.
What A/B tests have you tried to optimize your abandoned cart email campaigns? We’d love it if you shared your experiences in the comments below.