Reading & Using the Google Analytics Model Comparison Tool

An Overview of Attribution Models

When we were starting StatBid, I wanted to make the company motto "Stop guessing."  We decided it might be a little too aggressive, but it lends some insight into the way I approach data-driven marketing.  That's why I feel that topics like multi-touch attribution are badly under-represented in digital marketing conversations.  There are a few people out there evangelizing the nuances of the problem, but very few have the time or patience to really grok the whole thing.  And whenever I bring it up, I hear a lot of "aww, but Roy, there's no way I could build what you have with my development resources...".

In Google Analytics, under Conversions, there's something buried under Attribution called the Model Comparison Tool.  Sure, it's less flexible than some of the custom solutions I've had the pleasure of building, but when it comes down to it, it gets 85% of the job right, is free, and is way easier to use than the SQL-driven beasts I'm used to wrangling.  It's important that you get in there and get comfortable with it--it's trying to help you solve a major problem you have right now, whether you're paying attention or not.

[Image: Bob Ross]

First, let's assume you have a basic understanding of the differences between common models.  If not, jump over and read Avinash's masterpiece of an overview--it's even specific to Google Analytics!  If that's a bit TL;DR for you, I wrote a lighter version of it myself, which you can take a look at over here.  Essentially, you're splitting up the revenue of your orders among the touch points that led to the conversion, and rewarding the channels differently based on the model.
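If the splitting itself feels abstract, here's a minimal sketch in Python--using a made-up three-touch path and a hypothetical $100 order--of how Last Interaction and Linear would divvy up the same revenue:

```python
# A hypothetical order: $100 of revenue, driven by three touch points.
touches = ["Organic Search", "Email", "Direct"]
revenue = 100.00

# Last Interaction: the final touch gets all of the credit.
last_interaction = {channel: 0.0 for channel in touches}
last_interaction[touches[-1]] += revenue

# Linear: every touch gets an equal share.
linear = {channel: 0.0 for channel in touches}
for channel in touches:
    linear[channel] += revenue / len(touches)

print(last_interaction)  # {'Organic Search': 0.0, 'Email': 0.0, 'Direct': 100.0}
print(linear)            # roughly $33.33 to each channel
```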

That's where this tool comes in so handy!

Pull up Google Analytics in a new tab.  Go on, do it.  This is more fun if you follow along.  I'll not only show you how to use the basic functionality of the tool, but also walk you through the implications it should have for your channel marketing budgets!

When you first get started, it'll just pull up a "Last Interaction" chart for the date range you have selected.  This is also called "Last Touch Attribution" and is notorious for over-rewarding things like coupon affiliates and direct page loads.  But don't take my word for it--let's start comparing!

[Image: Select Linear]

This tells the tool that you want to understand the differences between Last Interaction and Linear, where the revenue is shared evenly by all contributing touches along the shopping path.  It should look something like this:

[Screenshot: Model Comparison Tool - Google Analytics]

This data is from a relatively small client, but it demonstrates the diversity of contributing touches that even they're seeing.  One thing that stands out right away is that Google Analytics includes Organic Search as a channel in these models.  This is actually pretty cool, and wasn't part of most people's original models a few years back.  After all, there were no per-interaction costs (cost per click, etc.) associated with Organic, so it was often overlooked when it came to reinvestment.

You can also see here that Direct is more frequently found at the very end of a shopping path--that's why it carries a higher number of conversions under the Last Interaction model.  When you move to the Linear model, that drops off, and the credit is spread back out over the other channels--especially Organic.  It's valuable to distinguish Organic from Direct, as one is the result of SEO and the other arguably isn't, but when you add them up, the two are in about the same ballpark under either model (only a 2.5% difference when aggregated).
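If you're wondering where a figure like that comes from, the aggregated difference is just the percent change in the combined totals between the two models.  Here's a toy calculation with made-up numbers (not this client's actual data):

```python
# Hypothetical conversion counts per channel under each model.
last_interaction = {"Organic Search": 120, "Direct": 200}
linear           = {"Organic Search": 175, "Direct": 150}

combined_last   = sum(last_interaction.values())  # 320
combined_linear = sum(linear.values())            # 325

# Percent change in the combined total when switching models.
pct_change = (combined_linear - combined_last) / combined_last * 100
print(f"{pct_change:.1f}% difference when aggregated")  # 1.6% in this toy example
```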

The interesting bits kick in below those two--Paid Search, Email, and Referral.  Both Paid Search and Email pick up credit under Linear, which means they're contributing a bit higher in the funnel than just the last touch.  If you were allocating marketing spend based on Last Interaction alone, you'd be missing opportunities in those two channels, as they aren't only driving value at the end of the shopping path.

The Referral channel in this case, were I to guess (and I am), is probably guilty of having some coupon affiliates.  When a shopper is in your cart, sees the little coupon box, and goes back to Google to look for a coupon for your site... does the site hosting that coupon really contribute much to the sale?  That interaction captures the coveted Last Interaction, and whatever drove the shopper to your site before that point is ignored in the first model.  But not in the Linear model!

[Image: Shenanigans]

The fact that the Linear model would credit the Referral channel with 22.5% less revenue suggests that the channel has a strong bias toward the Last Interaction--this is behavior that we trained into the market for a lot of years, and we're only just now waking up to the shenanigans it causes.

But where it gets really interesting is comparing two not-terrible models--because, to be clear, Last Interaction is a terrible model.  Let's try something better!

[Screenshot: Model Comparison Tool - Google Analytics]

In this example, we've graduated Linear to the defender, and Position Based to the challenger.  Position Based is basically a model that still rewards closers (the siren's song of Last Interaction), but allocates another big chunk to lead generation (First Interaction), and spreads the remaining share out over the rest of the touches.  It's something I used for the better part of three years, when the product catalog I was working with ranged from $2 to $20,000, as it casts a pretty broad net across marketing opportunities.
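As a sketch of the mechanics--assuming Google Analytics' default weighting of 40% to the first touch, 40% to the last, and the remaining 20% spread across the middle--here's roughly how a Position Based split might be computed:

```python
def position_based(touches, revenue, first=0.4, last=0.4):
    """Split revenue: a big chunk to the first and last touches,
    the remainder spread evenly over the middle touches."""
    credit = {channel: 0.0 for channel in touches}
    if len(touches) == 1:
        credit[touches[0]] += revenue      # a single touch gets everything
        return credit
    if len(touches) == 2:
        credit[touches[0]] += revenue / 2  # no middle touches: split evenly (my assumption)
        credit[touches[-1]] += revenue / 2
        return credit
    credit[touches[0]]  += revenue * first
    credit[touches[-1]] += revenue * last
    middle_share = revenue * (1 - first - last) / (len(touches) - 2)
    for channel in touches[1:-1]:
        credit[channel] += middle_share
    return credit

# A hypothetical four-touch path for a $100 order:
print(position_based(["Paid Search", "Display", "Email", "Direct"], 100.00))
# {'Paid Search': 40.0, 'Display': 10.0, 'Email': 10.0, 'Direct': 40.0}
```

The edge-case handling for one- and two-touch paths is my own assumption; the point is just to show where the weight lands.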

But how does it compare to Linear?  Well, Linear is already a not-terrible model, at least compared to Last Interaction, so the differences aren't as drastic.  But what happened, Display Channel?

[Image: Display is not for Cats]

Well, if you recall, Display was about flat between Linear and Last Interaction, so what's going on here?  If Display were mostly showing up at the last touch, it'd have an advantage under Last Interaction.  If it were often the first touch, it'd have an advantage under the Position Based model.  That means Display's advantage is somewhere in the middle interactions--those get more credit with Linear than with Position Based, after all!
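To make that concrete, here's a toy comparison (again assuming the default 40/20/40 position weights and a made-up $100 order) of how much credit a middle-of-the-path Display touch would get under each model:

```python
# Hypothetical four-touch path where Display is a middle touch on a $100 order.
path = ["Paid Search", "Display", "Email", "Direct"]

linear_credit         = 100.00 / len(path)  # $25.00 to Display under Linear
position_based_credit = 100.00 * 0.20 / 2   # $10.00 to Display under Position Based
                                            # (20% middle share split across 2 middle touches)

print(linear_credit, position_based_credit)  # 25.0 10.0 -- middle touches do better under Linear
```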

So, why is that?  My money would be on this being a Display campaign focused on retargeting traffic that'd already visited the site, not prospecting.  Further, I'm betting it's a broad retargeting campaign, and not just a cart abandonment program.  A cart abandonment program would have more action near the end of the shopping cycle, and would do better when there's more weight on the last touch.  If this is a category abandonment program, however, then the ads are being shown to people who are still researching or considering their purchase, and who will revisit the site some of the time before they actually convert.

From here, you should play around with comparing all of the models until you get a sense of the types of differences I've touched on here.  Once you've done so, it's time to customize your own models.  You'll want to take into account the eccentricities of your audience and business, and pick a model that makes sense for how you want to grow your business.  After all, once you settle on a model, it will become a sort of self-fulfilling prophecy: reinvesting where you see strengths will make those strengths stronger.  You can then come back from time to time to compare models again and verify that your model is still behaving the way you expected.

It's an iterative process, but if you start to use this data to inform how you're reinvesting your revenue, you'll be more certain that you're putting your money where it can do the most good.

Have questions?  Drop me a line via the Contact form--I love to talk shop.

Up and to the right!