Is Google Analytics the right tool for measuring email attribution?
In this post we’re going to explain what email attribution is and why Google Analytics (GA) is the wrong tool for measuring the true impact of email.
Google Analytics is a great tool, but if it’s your primary way of measuring email data, it’s going to lead you in the wrong direction.
Helping us illustrate this point, I’d like to introduce you to Huckberry.com.
Huckberry, an e-commerce brand that sells men’s clothing and outdoor gear, sends some of the highest quality emails we’ve ever seen. Their customers open and read their emails religiously—even if the subscriber hasn’t purchased just yet.
Some people like to sit down and read the paper in the morning. Huckberry fans like to sit down and open their Huckberry emails.
Huckberry uses email marketing as a tool to showcase its awesome products and content, and as a way to grow its subscriber base.
It’s impossible to measure the effectiveness of each and every email they send, because the whole is greater than the sum of its parts. Their email program is the rising tide that lifts all boats.
I sometimes open their emails in search of cool new products. Most of the time I don’t buy anything, but the high quality of the emails they send keeps me coming back for more.
Huckberry doesn’t need amazing subject lines. Their “product” (the emails they send) is enough to keep me coming back.
And while Huckberry won’t see conversions from every subscriber on every single email, they’re building an audience of people that look forward to seeing Huckberry emails in their inbox every few days.
However, if the only data they looked at came from Google Analytics, they might be led to believe that their emails were failing.
(If you haven’t already seen this super-in-depth, critical deconstruction of Huckberry’s email marketing strategy, check it out).
What Is Email Attribution?
Email attribution is simply understanding if your email program is what caused a customer to buy, or if it was something else.
Here’s a simple example…
Imagine a customer on your email list decides to make a second purchase without any special discount. But before they buy, they decide to check their inbox and see a discount coupon offering 10% off.
“Awesome!”, they think to themselves. “I was going to spend $200, but now I get $20 off so I only have to spend $180.”
From their point of view, they saved some cash, which is all well and good. From the point of view of Google Analytics, it looks like that email did a great job of encouraging a sale. “We need to send more of these emails!” concludes the digital marketer.
Do you see what’s happened here?
We’re not saying offers don’t work, because they do. But sometimes they aren’t necessary.

And the way Google Analytics attributes sales to email doesn’t help you answer the key question: “Is our email program the reason a customer made a purchase, or was it something else?”
The only way to truly understand if your email marketing is the reason people purchase is to answer this question:
Is a cohort of customers who are consistently exposed to high quality email marketing over time more valuable than a cohort who was not exposed to any emails?
To measure that, you can’t use Google Analytics.
You need control groups.
Control groups let you compare the value of a group that received emails to a group that received none. That tells you whether your marketing efforts are actually driving incremental revenue lift, not just acting as a touchpoint on the way to a sale.
To understand where Google Analytics falls short, we first need to understand the different types of default attribution models GA provides.
Google Analytics Attribution Model Types
The Last Interaction Model
This model attributes 100% of the conversion value to the last channel with which the customer interacted before buying or converting.
The Last Non-Direct Click model
This model ignores direct traffic and attributes 100% of the conversion value to the last channel that the customer clicked through from before buying or converting. Analytics uses this model by default when attributing conversion value in non-Multi-Channel Funnels reports.
The First Interaction model
This attributes 100% of the conversion value to the first channel with which the customer interacted.
The Linear model
The Linear model gives equal credit to each channel interaction on the way to conversion.
The Time Decay model
This model is based on the concept of exponential decay and most heavily credits the touchpoints that occurred nearest to the time of conversion.
The Position Based model
This model allows you to create a hybrid of the Last Interaction and First Interaction models. Instead of giving all the credit to either the first or last interaction, you can split the credit between them.
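To make the differences between these models concrete, here’s a minimal sketch of how a single $100 conversion could be credited across a touchpoint path. This is an illustration only, not Google Analytics’ actual implementation; the 40/20/40 position-based split is one common convention:

```python
# Illustrative sketch (not GA's actual implementation) of splitting
# credit for one conversion across a path of marketing touchpoints.

def attribute(path, value, model):
    """Return {channel: credit} for a conversion worth `value`."""
    n = len(path)
    if model == "last_interaction":
        credits = [0.0] * n
        credits[-1] = value          # all credit to the final touch
    elif model == "first_interaction":
        credits = [0.0] * n
        credits[0] = value           # all credit to the first touch
    elif model == "linear":
        credits = [value / n] * n    # equal credit to every touch
    elif model == "position_based":
        # Common 40/20/40 convention: 40% first, 40% last,
        # remaining 20% spread evenly across the middle touches.
        credits = [0.0] * n
        credits[0] += value * 0.4
        credits[-1] += value * 0.4
        for i in range(1, n - 1):
            credits[i] += value * 0.2 / (n - 2)
    else:
        raise ValueError(f"unknown model: {model}")
    totals = {}
    for channel, credit in zip(path, credits):
        totals[channel] = totals.get(channel, 0.0) + credit
    return totals

path = ["email", "social", "email", "direct"]
print(attribute(path, 100.0, "linear"))
# email appears twice, so it collects two of the four equal shares
```

Notice that under every one of these models, the credit always sums to the conversion value: the models only decide how to slice it, never whether the sale would have happened without the touches at all.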
Google Analytics attribution models are useful for understanding how email contributed alongside our other marketing channels.

But they don’t tell us whether a cohort of customers who are consistently exposed to high quality email marketing over time is more valuable than a cohort that wasn’t exposed to any emails.
And this leads us to the discussion of how we attribute engagement within an email.
So What’s The Right Way To Measure For Email?
At the end of the day, Google Analytics is best at tracking behavior on a website. In email, there are engagement metrics that happen before the customer ever hits the site. While a click will be tracked by your email software and registered as a pageview by Google Analytics, the recipient has to open the email first.
Opens are an incredibly valuable metric because they mean people are being exposed to your brand and products, even if they don’t buy. However, Google Analytics’ lookback window doesn’t count opens as engagement.
A lookback window is the timeframe in which an order must occur for Google Analytics to give credit to the channels that were involved in the sale.
The “typical” way most email software works is to use a 15-30 day attribution window based on open or click engagement. This isn’t wrong, but it limits our ability to measure the “halo effect” of email: the additional ripple effects of exposing a group of customers to marketing vs. not exposing them.
The “right” way to attribute lift and to answer the key question—”Is your email marketing program the reason people are buying?”— is to run a holdout test.
Holdout Testing: The Simple Method for Measuring Your Email Program’s True Impact
Holdout testing involves “holding out” a campaign from a “control group”: a group of customers who are excluded from a marketing campaign.
You measure the purchase behavior of the holdout group (the customers who receive no emails) after 90 days, then compare their value (using the revenue-per-customer metric) to the value of the group that received marketing.
A holdout test is designed to answer the questions:
- How much revenue lift is being generated by sending this campaign?
- Would these customers purchase anyway even if we didn’t market to them?
Holdout Test Example
By holding out (not sending) a cart abandonment campaign from a portion of customers (the control group), you can compare revenue per customer between the two groups and see the lift the campaign generates.
It doesn’t matter whether these people open or click the emails, it simply shows the value of sending this campaign versus not sending it.
In Huckberry’s case, if they stopped sending emails, their business would likely collapse. That’s because their emails are more of a digital shopping experience and online magazine that people love to read and enjoy.
How To Apply Holdout Testing To Your Business
You don’t need fancy email marketing software to learn from control groups.
In fact, creating a control group is as easy as:
- Bringing your customer list into Excel
- Filtering based on a parameter that will capture a randomized group of customers
- Excluding those customers from the marketing channel being tested.
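The steps above can be sketched in a few lines of Python. The customer list, the 10% fraction, and the fixed seed are illustrative assumptions; any randomized split works as long as it’s truly random:

```python
# A minimal sketch of carving a 10% holdout (control) group out of a
# customer list before sending a campaign.
import random

def split_holdout(customers, holdout_fraction=0.10, seed=42):
    """Randomly assign customers to (control, treatment) groups.

    The fixed seed makes the split reproducible, so you can recover
    exactly who was held out when you measure results later.
    """
    rng = random.Random(seed)
    shuffled = customers[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * holdout_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]

customers = [f"customer{i}@example.com" for i in range(1000)]
control, treatment = split_holdout(customers)
print(len(control), len(treatment))  # 100 900
```

The control group gets excluded from the campaign; everyone else receives it as normal.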
We recommend capturing a control group that represents approximately 10 percent of the audience being marketed to.
You should do this for every specific campaign.
To make an accurate comparison, you’ll need to be able to track how much revenue has been generated by customers in each group. After 90 days, measure ‘revenue per customer’ in each group and compare.
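That comparison is simple arithmetic. Here’s a small sketch; the order values and group sizes below are made-up numbers for illustration only:

```python
# Sketch of the 90-day comparison: revenue per customer in each group,
# then the relative lift from sending the campaign.

def revenue_per_customer(orders, group_size):
    """orders: order values placed by customers in the group over 90 days.

    Divide by the whole group size, not just the buyers, so that
    customers who bought nothing still drag the average down.
    """
    return sum(orders) / group_size

# Hypothetical numbers: 90 customers received the campaign, 10 were held out.
treatment_rpc = revenue_per_customer(orders=[40, 60, 80], group_size=90)  # $2.00
control_rpc = revenue_per_customer(orders=[15], group_size=10)            # $1.50

lift = (treatment_rpc - control_rpc) / control_rpc
print(f"treatment: ${treatment_rpc:.2f}, control: ${control_rpc:.2f}, lift: {lift:.1%}")
```

If the lift is near zero (or negative), those customers would likely have purchased anyway, and the campaign is claiming credit it didn’t earn.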
If you’re using offers, run a holdout test to determine the true lift of giving the offer vs. not giving it.
You may find that you generate more transactions by giving a discount, but you need to compare that to the group that comes back and purchases without any offer.
The results may surprise you, but you won’t know until you test it.
It’s very easy to make offers and see conversions go up, but what if you’re actually just shrinking profits instead?
Run the test and let us know on Twitter how it went. We’re really excited to hear your results!
What to do now
Read another post on holdout testing.
Subscribe to our blog for future post alerts.