E-commerce

StatCave 9: Moosejaw UX Review

It's 2018, and while eCommerce websites haven't really needed to change much in the past decade (compared to the previous decade, certainly), it continues to amaze me how rough so many shopping experiences actually are.  Why do we all shop on Amazon more and more (most of you do, and so do I), even as their price advantage is slowly traded in for profitability?  Because we trust that we'll be able to accomplish our goal (find and buy a thing) with as little pain as possible.

So, if we want to compete with Amazon in the slightest, we will need to do something remarkable to earn those shoppers.  But before we worry about "crafting a brand story that includes the customer", or "engaging with our audience in the way they want to be engaged," I propose we get some fundamentals right, first.

To that end, I did another secret shop, identifying opportunities in one shopping experience that are almost certainly relevant to your site, as well.  This episode's victim... um, I mean recipient of this free site audit is outdoor retailer, Moosejaw!  They were acquired by Walmart in early 2017, and we'll see whether that combo works for the shopper!

Today's exercise was shopping for a new pair of hiking boots for myself.  My current pair of Keens has had a long and illustrious career, carrying me through countless adventures over the past few years.

And, so, these boots have more than earned their retirement, but before I send them off to the boot farm, I'd better recruit their replacements!

Quick, to the Googlemobile!


The Field of Competition: Distinctly D2C?


Notice something about the Shopping results?  No retailers, just Direct-to-Consumer brands.

Direct to Consumer is a massive trend, for obvious reasons.  Retail, as a concept, has been primarily a geographic arbitrage game for the past few centuries.  You find a product that's less common somewhere else (or otherwise in greater demand), and you make a profit on the difference in perceived local value.  It's a business model that is as old as currency, yet erodes in the face of online shopping and almost unbelievably fast home delivery.  

DtC companies are the product, so their value is in the product creation itself, not just the markup on the materials.  Further, they're insulated from Amazon's predation, as there's no middleman for Amazon to cut out by buying direct.

Still, this was more severe than I expected.

I'm a huge fan of DtC (sometimes labeled "D2C", although I'm going to stop doing that), but retail does provide one significant feature that DtC can't--cross-brand comparison shopping!  And so, I wanted to shop at a retailer, as I'm not sure what brand of boots I need, at least not yet.

And, so, I clicked on the Moosejaw text ad, rather than a Shopping result.  This should also drop me on a category page, which is more appropriate, given my broad search.


Eventually... a Landing Page!


After an absurdly long load time, this is the landing page I'm presented with.

Load time was badly weighed down by the sheer volume of asynchronous JavaScript bringing in page elements long after the initial render.  But once the sun started to expand into its red giant phase, the page finally finished, presenting me with nearly half the screen informing me of what I had assumed was a given...  FREE EXCHANGES ON FOOTWEAR.

This is the first instance of Moosejaw's rough, awkward, and kind of forced sense of humor, with the link to "read all the dumb rules and details."  I get it: you have a job, there are things that your legal team makes you do, and you're sticking it to the man.  The shopper's going to love that, right?

Not really.  What you've done is broken the fourth wall, and shown something uninviting behind the curtain.  

 
[Screenshot]
 

If you want to humanize the brand, be sure that the culture you're revealing is a good thing.  Like Wistia's Annual Rap-Up video, or on the more serious side, Patagonia's activism section.

But, to be fair, their eye-rolling humor did catch my attention!  That's good, right?  Well, it might be, if it were on a call to action, but it was on a distraction.  If you click on it (as I couldn't prevent myself from doing), you're tossed to the very bottom of the Terms and Conditions page of the site, where it's not really clear what's going on, till you realize that you're looking at a clause on footwear exchanges.


This is where the path they've laid out for you leads... exciting...

To make matters... (sigh) so very much worse, that banner click failed to open in a new tab, utterly derailing my attempt to shop for boots and kicking me entirely out of the funnel.

What's the point of this site again?  To get me to buy something, right?  

Fine, I'll click "back" (itself a symptom, not a solution), and find that Moosejaw missed something else obvious.  Remember how I searched for men's hiking boots, and the Ad itself specifically called out that it was for men's boots?  Well, the landing page isn't narrowed to men's boots.  

That may seem minor, but it's also ridiculous to get wrong.  Even if the Ad itself was using Dynamic Keyword Insertion, there's nothing stopping them from attaching the appropriate landing page URL to the keywords that would be best served by a filtered landing page.
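To illustrate what I mean (this mapping is hypothetical, not Moosejaw's actual account structure), Google Ads supports final URLs at the keyword level, so DKI ad copy and pre-filtered landing pages can coexist:

```typescript
// Hypothetical keyword -> final URL pairing: the ad copy can stay dynamic
// while each purchase-intent query lands on an appropriately filtered page.
const keywordFinalUrls: Record<string, string> = {
  "mens hiking boots": "https://www.example.com/hiking-boots?gender=men",
  "womens hiking boots": "https://www.example.com/hiking-boots?gender=women",
};
```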

The Ajax garbage ran amok again, loading the page in a little over eight agonizing seconds, only to spend another four loading what appeared to be a floating footer bar, which was covered up by the browser's status display (at least in Chrome).  So the bar covers itself up by trying to load more crap into the bar?

[Screenshot]

While trying to figure out where the links in this bar go, the mouse-over status bar did exactly the same thing, making it about as useful as a poke in the eye.

And then... in the middle of the bar (in the right of the screenshot above), what is MADNESS?

Madness (n): Extremely foolish behavior.  For example, linking away from your shopping experience on every page, giving your shoppers more opportunities to get distracted.

[Screenshot]

I get it, they're still just trying to have fun with their brand.  But it feels really incoherent.  It feels like the "personality" of the brand is just whatever gets squeezed between the bars of their corporate overlords.  It's almost as if this was once a cool brand, and it was shredded into absurdity by the Walmart acquisition.


Finally back on topic...

Despite that MADNESS link also failing to open to a new tab, I managed to stumble my way back to the category page.  

At this point, weak product data started to become really obvious...

The data was inconsistent, and it was immediately apparent when you looked at the filters on the category page:

[Screenshot]

The fact that there was only a single product carrying a "Size" value, and a ton carrying "Footwear Size", suggests that the "Size" value was an artifact of some other product data schema.  This won't be the last time it seems like Moosejaw's catalog is badly cobbled together from manufacturer-provided data.

When I was a retailer, we had a rule: never trust data from outside our own building.  This meant we had to do a lot of product content work before we could add products to the site, writing original titles and descriptions, and ensuring complete data across the relevant fields.  Lots of work, but the result was a site that positively exuded expertise and confidence.


But their category filters weren't all bad.  They did at least have a custom price range capability, which is more than I could say for Home Depot and others in my last UX review.

[Screenshot]

Another filter was what I assume represented Customer Ratings, but they decided to call these "Custy" ratings.  Was I able to determine that they were talking about Customers?  Sure.  But it was really awkward.  It's an in-group joke, which might seem like a great idea for returning shoppers, but as a new shopper, it says "you're not one of us, and you aren't welcome here."  Savvy?

Oh, and on the ratings: they were all "and up" values, which is normal.  Except they were displayed as checkboxes, rather than radio buttons, despite being a select-one interface element.  Is it that hard to use the right interface element for the job, Moosejaw?
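For reference, here's a minimal sketch of a select-one "and up" rating filter built with the right element (the container ID and labels are made up):

```typescript
// Build a select-one "rating and up" filter from radio inputs, so the
// browser enforces single selection natively (unlike checkboxes).
const container = document.querySelector<HTMLElement>("#rating-filter");
[4, 3, 2, 1].forEach((stars) => {
  const label = document.createElement("label");
  const input = document.createElement("input");
  input.type = "radio";   // radio = pick exactly one
  input.name = "rating";  // a shared name groups the radios together
  input.value = String(stars);
  label.append(input, ` ${stars} stars & up`);
  container?.append(label);
});
```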


My ongoing struggle for relevant products...

[Screenshots]

Sorels are amazing, but they aren't hiking boots.  We have to select "hiking" in the "best use" filter in order to turn the Hiking Boots page into a ... hiking boots page?

Once properly filtered to... try to stay with me, now... men's hiking boots, I found something pretty disappointing.  They simply don't have anything resembling a good selection of these products.  Most of the few boots they did carry were designed for cold and wet hiking, which is great, but not what I need.  I live in Phoenix, "waterproof" is a negative for me, as I'd rather have the additional breathability for hiking in the desert.  


Oh, they have a Compare-To feature!

On the downside, it looks like I designed it.

First, how about that red notification bar that encroaches on the "Compare Products" header?  Classy, right?  How about the awkward amount of whitespace, paired with all of the other elements claustrophobically crammed together?  Oh, and then there's our favorite, the usually-unreadable floating bottom bar, obscured by the browser status, as usual.

But as soon as you scroll down, it gets worse.

The product data was horribly hit-and-miss.  Many of the fields were only populated in one or two of the columns, resulting in a complete lack of confidence that I was even looking at valid data, let alone anything complete enough to drive a buying decision.  Even the data that was there was anxiety-inducing, as one of these pairs of boots claimed to be more than twice the weight of the others.  Possible?  Sure!  But unlikely enough to chop another ten points off of our shopper's confidence score.

Further, when you clicked the "...more" link in the product description previews, rather than expanding to show the remaining text, it took you to that product page!  Would you be surprised at this point to hear that it wasn't in a new tab, either?

I think Moosejaw may be entirely unaware of target="_blank" as an option for links.

So, you want to really compare these products?  You're on your own to open new tabs, like a neanderthal.  
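If any Moosejaw devs are reading, here's a sketch of the fix, assuming some way to tag the links that lead away from the funnel (the class name here is invented):

```typescript
// Make funnel-exiting links (T&C, MADNESS, etc.) open in a new tab,
// with rel set so the new page can't reach back via window.opener.
document
  .querySelectorAll<HTMLAnchorElement>("a.leaves-funnel")
  .forEach((link) => {
    link.target = "_blank";
    link.rel = "noopener noreferrer";
  });
```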


Product Page: Pain Points Present

Once I accepted my fate, and found myself on the Product Detail Page (PDP) for one of the boots I was investigating, it was apparent that little thought went into it beyond implementing the template provided by the eCommerce platform.

First, it was horrifyingly slow, as most of the pages had been.  Second, there was only one color available, and it wasn't preselected.  

[Screenshot]

Just in case we were getting far too close to a buying decision, Moosejaw roars back with another ambiguity to throw us off the scent.  There's only one color available, "Ebony / Gargoyle", but there are photos of at least two different boots--one olive-gray (shown above), and another that was all-gray.  Ebony is allegedly a black color, which neither boot actually is, and Gargoyles are generally made of stone, so... is that the all-gray one?

So, I resorted to clicking the "Need Help" bug on the left of the product page, and got this:

[Screenshot]

I guess it's supposed to be cute.  It seems badly forced, and only slightly less awkward than karaoke at an office party.  And what happens when we open this chat?

[Screenshot]

I get that you might want an email address to follow up with someone who gets disconnected mid-chat, but I started to suspect that wouldn't be the end of it.

Spoiler Alert: They f***ing spammed me later that same day.

So, without granting them permission to mail me at all (right?  RIGHT?), one of their support reps helped me.  That part of the experience was pretty solid.  The rep couldn't seem to tell which tab I was on, so it took a moment to get on the same page, but he did confirm for me that the olive-gray photos represented the mysterious "Ebony / Gargoyle" color designation.

With that information in hand, I started to dig into the product photography.  The good news is that the actual photographs were really quite good!  This is also true of their 360-degree view, which requires a special rig to shoot effectively.  

The product page's gallery, on the other hand, failed to take advantage of that one shining beacon of competency, relying on a buggy, twitchy zoom-scroll interface, rather than letting me see the entire, full-resolution photo at once.  

I guess they're afraid that I'll steal their product photography?  I mean, that happens a lot, and I've had to defend my site from such hooligans, but never at the cost of the experience for the actual shoppers.

I spent so much time trying to get that photograph open in a new tab that I accidentally confused Chrome, and found myself stuck with an open Console pane, which locked me into the mobile version of their responsive template.

It wasn't great, either.

[Screenshot]

Once I closed this tab, and started over, I encountered several more prolonged periods of "...waiting for cache...".  It was almost like Moosejaw didn't know that caching was supposed to speed things up.  If the browser was referring to local cache, something was misconfigured.  If the browser was referring to remote cache (image servers, in-memory object servers, etc.), then they weren't doing a very good job of saving load time.
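For contrast, here's roughly what sane cache headers look like; a minimal sketch assuming a Node/Express server, which is purely illustrative and certainly not Moosejaw's actual stack:

```typescript
import express from "express";

const app = express();

// Fingerprinted static assets (images, JS bundles) can be cached for a year
// and marked immutable, so the browser never stalls on "waiting for cache".
app.use(
  "/assets",
  express.static("public/assets", {
    maxAge: "365d",
    immutable: true, // adds the "immutable" directive to Cache-Control
  })
);

// HTML stays fresh: "no-cache" means revalidate with the server each time.
app.get("/", (_req, res) => {
  res.set("Cache-Control", "no-cache");
  res.send("<!doctype html><title>demo</title>");
});

app.listen(3000);
```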

Once the page was loaded again, in the right device view, I decided to try to watch the embedded product video for the Asolo boots I was viewing.  I expected it to describe the unique features of that product line.

Instead, it was a painfully boring video narrated by an anthropomorphized barbiturate.

[Screenshot]

When I tried to go back from a product page to the Category page, I got hit by this:

No, Moosejaw, I'm not seeing what I want.  But more importantly, you threw a browse-abandonment modal window at me when I wasn't abandoning, but rather just navigating around your site.

Your modal manager (be it a third party app, or developed in-house) should be able to tell what browser I'm using, and therefore the difference between a mouse movement toward the URL bar (an upcoming exit), and the back button (a navigational click).  Just a mouse moving toward the top of the screen isn't enough to verify intent, and this use case was a hard miss.
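As a sketch of how a modal manager might tell those apart (the event choices and the showBrowseAbandonmentModal helper are all hypothetical):

```typescript
// Distinguish "mouse heading for the browser chrome" (possible exit intent)
// from back/forward navigation (ordinary browsing).
let suppressModal = false;

// Same-document back/forward navigation fires popstate; that's navigation,
// not abandonment, so don't nag the shopper.
window.addEventListener("popstate", () => {
  suppressModal = true;
});

// The pointer leaving through the top edge *might* mean the URL bar or tabs.
document.addEventListener("mouseout", (event) => {
  const leftThroughTop = event.relatedTarget === null && event.clientY <= 0;
  if (leftThroughTop && !suppressModal) {
    suppressModal = true; // show it once, at most
    showBrowseAbandonmentModal();
  }
});

function showBrowseAbandonmentModal(): void {
  console.log("exit-intent modal"); // stand-in for the real dialog
}
```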

Upon returning to the category page through the site navigation, my Compare-To selections were not preserved, so I had to recreate them in order to continue my shopping.

At this point, I'd already decided not to buy from this site, but in for a penny, in for a pound: I added the closest option to the cart and tried to charge ahead.


Cartastrophe

The cart was busy, ugly, and unremarkable.  It fell victim to one of the classic blunders!

[GIF: Vizzini's "classic blunder" scene from The Princess Bride]

And yet...

[Screenshot]

Do you need to make it easy for someone with a coupon to enter it?  Yes.  But you can do that with a little collapsed entry field.  "Have a discount code?" is a prompt that I've seen deployed successfully, and hiding the code field behind it was effective enough to win an A/B test by a handsome margin.

They do this again in the checkout, just in case you missed your opportunity to leave the site and never come back.

The rest of the checkout process was pretty mediocre.  It was obviously driven by their platform's limitations, not the user's best interest.  The billing/shipping address controls were inconsistent and buggy.  The validation included notifications that would auto-dismiss before you could read them.  There were pop-over messages that showed up multiple times, uninvited.

Worst of all, it expects YOU to do work that they should be doing for you.

[Screenshot]

Throughout this wreck, it kept making reference to ship dates.  I don't care about ship dates.  Not in the slightest.  I care about delivery dates.  

On the topic of delivery...

[Screenshot]
[GIF]

I gave up.  It was too much of a struggle to deal with this site.  They expect me to do all of the work of product comparison, fight through their nearly-non-existent product data, do basic math for them, and then they try to trick me into the cheaper shipping option... for them.

No thanks.

The entire site was so sluggish, and so prone to extended Ajax loading pains, that I had to see what the deal was.  BuiltWith reported that they were built on IBM WebSphere, which may indicate that their ERP was wagging the dog, resulting in a mediocre web frontend.  They also had a laughable list of installed vendor tags.  No tag manager in the world can save you if you rely almost entirely on third-party plugins for your site's functionality.  I bet they even tested each feature individually, each with a positive A/B test, but failed to realize that if you pile enough straw on a camel's back, eventually that poor critter is going to be crushed into a fine, pink mist.

Oh, and just to top everything off, guess what?  

[Screenshot]

Final Grade: D

I've seen worse, so I need to reserve the F for those truly remarkable disasters.  However, this was still a failure, so I can't give them a C-, either.  They'll have to take this class over again.

[GIF: Willy Wonka, "You get nothing"]

Their site fought me at every step, their load times were inexcusable, and then they violated CAN-SPAM in order to send even more annoying garbage at me, even after I'd fled for the hills.  And this isn't user error--I have a video recording of my entire shopping experience, so it's unambiguous that no consent was given along the way, even by accident.


If you'd like a UX review of your site, let me know at roy@statbid.com!  If I select your submission, and you'll let me share the results (brave enough?), they're free.  If you'd like it just for internal consumption, then we can discuss other arrangements, as well.  

I love talking shop, so let me know what you think!  Was I unfair?  Were there other ways to improve Moosejaw that I overlooked?  

StatCave Episode 2: Low-tech Inventory and Margin Planning

 

I pack my favorite tricks for maximizing the overall margin of your inventory into less than nine minutes!

If you're managing a catalog of more than a couple dozen items, then I'm certain you've struggled with how to balance purchasing, pricing, and profit. These are some of the simplest ways I've seen to tackle that, no data scientists or ERP required.

Video Transcript:
Big companies have the advantage of being able to look at an ERP that can do all kinds of inventory forecasting, but most of us don't have that luxury. In fact, even if you're doing up to $100 million a year in revenue, you still might not be sitting on top of a fully automated stack for inventory management. I'm going to show you some tricks--this is all stuff you can do with spreadsheets--that should get you most of the way there and help you take advantage of a few opportunities as they come along.

In order to figure out how many units you need to have in inventory to meet demand for, say, the next 90 days, you can use three pieces of data to come to a reasonably good estimate. The first two are from last year: the 90 days leading up to the point that we're at, and the 90 days that follow it. You compare these two to get a sense of the seasonality on the calendar for this item. If we know that, at least in terms of last year, the next 90 days did double what the previous 90 days did, then we can reasonably estimate that this year's next 90 days should be about double our most recent 90 days. Using exactly that method, we can arrive at a pretty good ballpark that should be strategically valuable for us.
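In code (or a single spreadsheet cell), that ratio method is tiny; a minimal sketch with made-up numbers:

```typescript
// Seasonality-ratio forecast, as described above. Numbers are illustrative.
function forecastNext90Days(
  recent90: number,       // units sold in the last 90 days (this year)
  lastYearPrev90: number, // units sold in the same trailing window last year
  lastYearNext90: number  // units sold in the upcoming window last year
): number {
  const seasonalityRatio = lastYearNext90 / lastYearPrev90;
  return recent90 * seasonalityRatio;
}

// e.g. 120 units recently; last year the upcoming window did double the
// trailing window (400 vs 200), so plan for roughly 240 units.
console.log(forecastNext90Days(120, 200, 400)); // 240
```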

What if it's a new item? This is something in the catalog that was added in the past year, so we don't have year-over-year comps for it. That's not a disaster. You can still use other items in the same category, or items from the same brand, or preferably both, to get a sense of how the shopping around those types of products is moving. That gives you at least pretty good guardrails for getting to the right number.

Let's say that you now have the units in inventory and you're looking from here out to the end of your season. You want to make sure you sell through all of the current inventory before a certain date. What you can do is determine your current estimated depletion date. That's the date where you expect to run out of units. If that date is ahead of or behind your end-of-season date, you can make decisions from there. In order to figure out what your depletion date is, it's pretty simple to just do a linear extrapolation. All that means is you're taking the average change over time and carrying it out in a straight line. That'll get you in the right ballpark.

Obviously, the volume is going to change from that to some degree and you end up with charts that are a little less linear, but if you just take the first week and the last week, I'm using weeks in this example, and you just draw a straight line through those, that ends up being the same as taking the average change. The fact that there's noise in between those doesn't matter too much. This'll obviously change as you look at it week to week, but they all should basically point in the same direction, and you end up with a fairly predictable outcome.
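A minimal sketch of that straight-line depletion estimate, with illustrative numbers (it assumes units are actually selling, i.e. a nonzero weekly rate):

```typescript
// Estimate the depletion date by drawing a straight line through two
// weekly inventory counts, per the extrapolation described above.
function estimateDepletionDate(
  unitsAtStart: number, // on-hand units at the first observation
  unitsNow: number,     // on-hand units today
  weeksElapsed: number  // weeks between the two observations
): Date {
  const unitsPerWeek = (unitsAtStart - unitsNow) / weeksElapsed;
  const weeksRemaining = unitsNow / unitsPerWeek;
  const depletion = new Date();
  depletion.setDate(depletion.getDate() + Math.round(weeksRemaining * 7));
  return depletion;
}

// e.g. started with 100 units, 60 left after 4 weeks -> ~10 units/week,
// so roughly 6 weeks of inventory remain.
console.log(estimateDepletionDate(100, 60, 4));
```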

Once again, if this is a brand new product and these are, say, the first 100 units you've ever had on the site, and you don't have any way to estimate the sell-through rate, you're going to have to do a couple of things. One, you can obviously wait and see how fast they go. Even with just a couple of weeks of data, you can very quickly come up with a reasonable ballpark of how many months that inventory is going to last. The other thing you can do is, like we did with the buying, you can compare it to siblings in the same category or from the same brand. That'll get you in the right ballpark. Now let's say you've done that already, so you have your depletion date. If your depletion date is earlier than your anticipated and targeted end of season, then that means you are probably going to run out of inventory and either miss sales toward the end of the season, which is bad, or you might be leaving margin on the table.

You have two variables to play with when you're selling through your inventory faster than you anticipate, and you're not in a position to replenish. Your big lever is price. Obviously, you reduce your price, you eat into your margins partially, but you generally increase conversion rates, and so you can start to move more units. Doing so in small increments as you go and watching the effect it has on your depletion date relative to your end of season target, you'll get a sense of exactly how sensitive that item's pricing is within your competitive landscape, for example.

The other lever is your marketing spend. A lot of times, people will tend to go to this first, but it's not nearly as influential as changes to price, but it's still worth considering. That is because you have the ability to spend more as a percentage of revenue in order to try and capture more sales. Those sales, if they're not discounted, might be higher margin, but don't forget, you're still paying for those ads. Dollar for dollar, I tend to think that price changes are more influential. There's diminishing returns on both variables, and so there's always going to be just the right balance. I just recommend starting by doing some competitive price analysis. If you're already in a beautiful place for price, then start pushing more heavily into more aggressive ad spend.

If you are a financially-minded person, you might have realized a bit of a logical flaw in all of this. As inventory sits in the warehouse longer, it accumulates more total cost. Depending on how you're accounting for your overhead, you may actually perceive the COGS, your cost of goods sold, as going up over time in your warehousing system, for example. This is a reality. It's one way to keep track of the growth of that overhead cost. What it does is it produces tightening margins. With tighter margins, you are either less likely to want to discount the product, or you're less likely to want to increase your marketing spend, which cuts off your access to the main two levers that you have to affect your volume.

For most companies, what'll happen is as those items become stale, especially as they start to have an average shelf life or an individual shelf life of more than a year or something like that, you'll see people throw things onto clearance. You go from maybe it's full price, and then it goes to a deep discount, and you try and blow it out, turn it into liquid assets that you can then reinvest into other inventory. Those motives are all perfectly sound, but going straight from some sort of mainline pricing down to a clearance level and trying to blow it out quickly is not as margin-advantageous as an alternative I'd like to propose.

You could do a little sleight of hand that makes your marketing a lot more effective. Instead of increasing your perceived COGS over time, reduce it. Obviously, this is not an accounting strategy. This is entirely a marketing strategy. Our ability to change prices improves because we have more margin to play with. It also means that you might be able to allocate more marketing dollars to it. If your marketing dollars come out of a percent of your margins, which is a perfectly reasonable way to do it, then as the margins increase, so does your budget. Either way, you have access to both of your levers, and those levers become more powerful over time.

A common way to do this is to leave the COGS alone for the first six months or a year. Wherever you see a cutoff where you say, "This is now stale inventory," is a great place to start this accumulation of additional perceived margin. The result is that instead of a drop-off to clearance, you have a gradual reduction in your price, or a gradual increase in marketing cost, or a combination of both. You're capturing as much margin as you can at each step along the way. If you just drop a price all the way down, you're giving up all of the area under the curve that might have been captured by holding a higher-than-clearance price for a little while and then slowly working your way down until your depletion date is exactly where you want it.

While you might want to let it just continue all the way down to zero because at that point you just need to move the item, you can also do the other thing where you have it go down to some sort of floor, say your break even point, so that no matter what happens, you never discount an item to the point where it turns into red ink. That's it. These are basically the steps that you can take on a per item basis when you're running a relatively small or medium-sized shop and maximize your margin, and therefore maximize the impact of the money that you're investing into your inventory in your catalog.
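A sketch of that stepped markdown with a break-even floor (all parameters are illustrative, not a recommendation for any particular catalog):

```typescript
// Step the price down a little each week past the staleness cutoff, instead
// of cliff-dropping to clearance, and never price below break-even.
function markdownPrice(
  fullPrice: number,
  breakEven: number,  // the floor: never price into red ink
  weeksStale: number, // weeks past the "stale inventory" cutoff
  stepPerWeek = 0.03  // 3% of full price shaved off per stale week
): number {
  const discounted = fullPrice * (1 - stepPerWeek * Math.max(0, weeksStale));
  return Math.max(breakEven, discounted);
}

// A $100 item with a $60 break-even, 8 weeks past the staleness cutoff:
console.log(markdownPrice(100, 60, 8)); // 76
```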

Bronto Summit 2017: Let Your Data Be Your Guide

2017 marked my fifth presentation at Bronto Summit, and this year, I had all kinds of interesting new data to share.  Hopefully, I'm able to challenge some assumptions, and help you get a clearer vision of your own shoppers!

Full Transcript:

Roy Steves:

[00:00:30]

Thanks for coming, everybody. This is actually my fifth year presenting at Bronto Summit, and this year is going to be a little bit different. I need to sort of give you some background context so that you can understand what I mean by that. If you run across anything in this presentation that you can prove me wrong on, I'll buy you a drink, so that's the addition to the normal Oracle legalese. The presentation was difficult to title. I appreciate you taking a risk on it, but I'm a little bit all over the map with what I was putting together, so I think you'll see why that was a challenge. Back story, in 2010 I started working with Pool Supply World. I was an engineer. They were doing quite a bit of business but it was mostly organic and comparison shopping engines, and so I told them, I'm like, "I'm pretty sure AdWords is just applied algebra. Why don't you give me a shot at that?"

[00:01:00]

[00:01:30]

We added 30% to the business in the first year, and then I'd teach myself another marketing channel, find somebody better at it than me, which usually wasn't very hard, and then hire them and add them to the team, so over the course of the first three years I went from an engineer to CMO. That was quite the roller coaster. Then in 2013 was my first Bronto presentation, and I did a talk on multi-touch attribution because I was collecting all of this browsing behavior directly off of my site into a database and I was showing off what I could do with that. 2014, I did data visualization and storytelling, so taking all this data that was in my warehouse and trying to make it actionable and useful, something that the rest of my team could understand and use. Then the next year was after we had been acquired by Leslie's, so in 2015 I had access to more data than God.

[00:02:00]

[00:02:30]

I mean, it was just ridiculous. I had billions of rows of data in a bunch of different slices, and so I presented on some segmentation strategies that I'd been using with that larger brand. For those of you that don't live in the 35 states they're in, they have over 900 stores and do just a colossal amount of business as the only national brick-and-mortar chain in pool supplies, so I had a huge amount of data to play with, which was great. Then, I left that in September of 2015 to start my own thing. Then I was thinking last year, presentation-wise, I'm no longer a retailer. What can I tell retailers that they're going to find interesting? What I did was I took basically the greatest hits of the first two chapters of my career, and did a presentation on A/B testing and site optimization, so some of you I know were there, and I appreciate that.

[00:03:00]

The big difference was that I had an incredible depth of data, so obviously I had platform data like Google Analytics and Bronto and AdWords and everything like that, but then I also had a ton of data layers ... Oh, yes, SkyNet was our order management system, so it had a ton of data. Grinder was the session tracker and things like that, so I had just an absolutely colossal amount of data to work with. The depth of data was unsurpassed. Now I'm out here on my own doing something else and I don't have that depth of data, but now what I have is access to an incredible breadth of data. The things I'll be sharing from here are an aggregate from a set of sites that total almost a billion dollars in annual revenue. There are people with more data than that, but it's enough that I have quite a bit of fun, and we can find some interesting things out from sifting through it, so what I traded for depth, I now make up in breadth.

[00:03:30]

[00:04:00]

What we're going to do is we're going to take a look at some of these types of things. When I talk about best practices and industry-standard common knowledge, I'm talking about stuff like this. StatCounter's not a bad source of information, but there's a lot of information that you don't have in terms of how you need to interpret this. From a statistics standpoint, you don't know what the sample size is, you don't know what the error bars should be, you don't know what the standard deviation is. There's just a lot of missing pieces, and that ends up being sort of like feeling around the outside of the data. I mean, it's analytical phrenology, basically. You're not really getting down into the guts of it. In order to understand the data, we're going to have to peel it back, so instead of phrenology we're going to go with X-rays and MRI style.

[00:04:30]

[00:05:00]

We're going to look at aggregate data kind of like that StatCounter one, but then we're going to peel it back and I'm going to show you how different anonymized sites make up those totals, so you can get an idea of whether or not you should take anything you see on the Internet seriously when it comes to e-commerce. We're going to be looking at a bunch of dimensions. These are just some examples. Sometimes we'll use more than one, just so you can get a flavor for what we're getting into. The first thing I'm going to be doing, and most of this is from Google Analytics with a little bit of flavor text pulled in from AdWords data as well, but we're going to start with the channels, and we're going to focus on three channels that I selected both for their size, their audience, and how much influence you can have over them. We'll start with organic, PPC, and email, obviously, and this is in Google Analytics under Conversions, Multi-Channel Funnels, Overview.

[00:05:30]

This Venn diagram does a pretty good job of giving you a rough idea of the overlap of your channels. Overlap is a different conversation, but we're just looking at the relative sizes. The reason that I wanted to do that was when you are a retailer you're in your own echo chamber, and all you hear for most of the year is how you're doing, and you don't really know what is normal. You don't know if you're weak or strong in any given channel. That's part of the reason we all love events like this, because it's one of the few opportunities where we get to compare notes with people. Well, imagine this as being able to compare notes with a couple of dozen people all at once. We're going to start with typical. This is fairly normal. This is a relatively large site, it's fairly mature, and organic is the largest, followed by paid, followed by email. I'm not saying that this is necessarily virtuous. I'm just saying it's normal.

[00:06:00]

There are other variations on this. This one is also very typical. This one is actually a little bit different. You'll notice email and paid are the same size. Paid is a little bit smaller relative to organic, so they might be under-investing in paid relative to the size of the organic audience, but that might be a function of operating on a narrow margin, for example where the organic ends up being more critical to them because they can't afford to participate in PPC in the same way, and then email relative to paid is doing pretty well, and relative to organic is fairly typical. You'll see this pattern again and again, so these are three examples of fairly normal looking channel contributions. What does it look like when it's not normal?

[00:06:30]

[00:07:00]

This is the first example. It looks similar to the ones in the previous slide with one notable difference. Paid and organic have switched spots. This is actually a bit more expensive. This happens sometimes when a site does a re-platforming or something like that, or has been penalized by Google and is otherwise trying to fill in a gap using paid. It's a perfectly reasonable strategy but it does take some monitoring of your overall profitability to make it work. That's a little bit concerning even for somebody who's in the paid search space now. That makes me a little bit uncomfortable. Then there's this one. Not only is paid much smaller than organic, which is a problem, email is practically non-existent, and anyone here can recognize how that doesn't quite make sense. In this case, this is very likely a one-person shop. They're responsible for all of the channels and they've resorted to just batch and blast emails out of necessity, right? Some of us have been there before.

[00:07:30]

[00:08:00]

That's something that would be at least an opportunity in email, if not an opportunity in PPC as well, but that's not even the worst we've seen. This is pretty ugly. Now, there's two possibilities here, and these are ... I made them smaller. They're smaller sites, so not to scale, but you get an idea. This is one of two possibly catastrophic situations. Either they're not doing email at all, which is disastrously bad, or they're not tracking email, which is also disastrously bad. Nothing about that makes me happy with how they're doing on email. It's not the only one. I've seen that again and again, and then this is actually the worst one yet. This one makes me cringe down to my bones because not only are they not doing or tracking email, then paid is also larger than organic. That is a very expensive way to scale a business.

[00:08:30]

[00:09:00]

If we line up all of the different channel types with direct on the far left for those at the back, all the way down to social on the right, this is the kind of data that I could have generated if I were trying to do the same kind of white-paper nonsense that you see out on the Internet everywhere. It's very easy to produce these kinds of charts, but they're not that insightful. The reason they're not that insightful is if we then kick on the X-ray machine, we see that there's a huge amount of variety of distribution between the sites. Each of these sub-bars is then a different site that's adding up to that total, and as you can see the blue, the largest site at the bottom, makes up almost all of that display. If you had been on this slide and you saw that your display channel is smaller relative to this chart, then you'd overestimate the value of that channel if you weren't that blue site. I'm going to refactor this data a little bit. We're going to show you another way of visualizing the same thing.

[00:09:30]

[00:10:00]

Now, each of the channels is broken into three types of sites: brands, which sell something with their own name on the label, so they do the manufacturing and the direct-to-consumer sales; retailers, who sell things manufactured by other people; and then a few sites in my sample that do enough of both that they defied that categorization. Each site type's columns add up to 100% across the channels, so that gives you a relative revenue share for each channel by site type. The reason I was doing that is I wanted to validate some assumptions about how brands versus retailers work in this space, and just so you know how you're reading this chart, remember when I pointed out display is almost all blue? Well, that blue site over there is a brand site over here, and you can see that it then dominates the relative charts, so that's how you interpret this.

[00:10:30]

The first thing I wanted to look at was the brands. I thought that brands would have a more dominant direct than organic, because if you know Levi's, for example, it seems to me you'd be more likely to type Levi's into Google or go to levis.com, but if you look at it, the retailers are actually holding their own on direct and actually have a little bit of a lead on organic, so I was completely wrong. Then that actually ends up flowing into the paid section as well, because I assumed that retailers were going to be trying to make up for that brand identification gap by investing more in PPC. I'm wrong. The retailers have a lower average share than the direct-to-consumer brands in my sample. That was interesting to me. The next thing I looked at was email, and this did follow that pattern a little bit.

[00:11:00]

[00:11:30]

If you know the brand that's selling you the product directly and it's a brand that that name is on the item that you have, that physical object, then I figured you're going to have more loyalty because you're going to be more familiar with the brand of the manufacturer than the brand of whoever sold it to you. If it happens to be the same brand, then you're naturally more likely to repeat purchase from them. Retailers on the other hand, don't have that advantage, so when I was with Pool Supply World, we'd send you a pump. It said Hayward on it. It didn't say Pool Supply World on it, and we would have to make up that gap in customer loyalty through the email program. You can see that we weren't the only ones using that pattern by how the retail section was dominant in email. Then the other three are much, much smaller samples, and pretty noisy, although it's interesting with the amount of bluster in the social space, that nobody in this sample was really generating a ton of revenue off of that.

[00:12:00]

[00:12:30]

The summary is basically: if your Venn diagram in Google Analytics doesn't look vaguely like this, either you're handling a special use case, which might be fine, or it might mean you have an opportunity. I'll be sharing all the slides with you so you can use these benchmarks however you like, and then hit me up with any questions thereafter, but if it doesn't look like this, maybe it should. Then Google Analytics obviously gives us a ton of data on device types, and there's this fetish for mobile first, mobile optimization, mobile site speed, and all those kinds of factors, and it has been around since 2008, and it's not wrong, but some of the ways that we've been brainwashed into thinking about mobile didn't quite line up, so I'll use my own assumptions as examples in this. We're going back to the brands versus retailers first, and my assumption was that brands would have a stronger performance on mobile because that trust is there.

[00:13:00]

The product is the brand, so there's not a barrier to get over to say, "I need a Hayward pump, but I don't know about this Pool Supply World company." If it's a Hayward pump and you're buying it from hayward.com, then that trust is already sort of implicit if you're already going to buy the product. Before we go through the rest of these slides, I do want to point out that I'll be using this metric quite a lot: value per session. The reason I'm using this is it takes into account both average order value and conversion rate, and both of those will vary depending on which segment we're looking at. You may not sit in front of this every day, but it's a very powerful metric, and in the AdWords space, obviously, value per click is sort of an analog. This is where we'll start, and then I'll break out differences in AOV and conversion rate where appropriate.
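Value per session works out to conversion rate times average order value (equivalently, revenue divided by sessions); a quick illustration with made-up numbers:

```typescript
// Value per session = conversion rate x average order value.
// All numbers here are illustrative.
const sessions = 10_000;
const orders = 200;     // 2% conversion rate
const revenue = 30_000; // $150 average order value

const valuePerSession =
  (orders / sessions) * (revenue / orders); // = revenue / sessions
console.log(valuePerSession); // 3 (i.e. $3.00 per session)
```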

[00:13:30]

[00:14:00]

I'll be doing a lot of scatter plots like this. What I'm plotting here is mobile value per session on the vertical axis and desktop value per session on the horizontal axis, and then I've got brands and retailers and both split out once again. The size of the dot is roughly descriptive of the size of the site, so you can get an idea of which ones are big and which ones are small. For most of these, the small ones are also younger sites. They're not sites that have been doing a million dollars a year for 10 years, at least in my sample, so those might behave differently than this group, but you can see there's not really a correlation. It's just sort of all over the map, which I didn't really get. What I did was I broke this down into conversion rate. If I can't make sense of the conglomerate, then I'm going to go down to the atomic level, and this is what I saw.

[00:14:30]

[00:15:00]

This obviously does tend to have some sort of pattern to it. You can see it goes along this diagonal, which is three to one, so a lot of these sites down here in the lower corner are converting on mobile at about a third to about a half of whatever they are on desktop. This is where I'm getting these rules of thumb: if your mobile conversion rate is a tenth that of desktop, then you're out of the band of normal behavior, and that might be a problem, but there are some exceptions. This guy way up here, that mobile conversion rate, and this is over 12 months of data, is higher than the desktop conversion rate. It's not a huge site, but that is a very unusual behavior indeed. Then obviously these two down here have a very rudimentary mobile experience and a miserable mobile checkout, and you can see that reflected there.

[00:15:30]

[00:16:00]

This is another way to visualize this. I broke this one up because I wanted to get a look at tablets, but obviously three dimensions in a two-dimensional graph are kind of tricky, so I couldn't quite do it in that last graph, but what I've done here is I've just flat-out plotted revenue versus sessions. The reason I've done that is that depending on where you're getting your data, tablets will get lumped in with desktop sometimes because they're a large form factor screen. The browsing behavior is very similar to desktop in a lot of ways, but they also get lumped in with mobile a lot because it's a touch screen and the conversion rates are lower than desktop. Why is it that there's no consistency in our industry about what to do with this tablet segment? It's smaller than the other two, sure, but it seems to vary day by day whether it's treated as a mobile device or a desktop equivalent, and that's because those yellow dots literally are just sort of mashed in between. The reason that it's not consistent is because it ends up being muddled right in there.

[00:16:30]

[00:17:00]

It's a complicated topic, so we're going to dive into some more tablet stuff, but here's an example of what I'm talking about. This is mobile retail commerce sales as a percentage of retail, et cetera, and this is not telling us whether tablets are included, so with or without that segment that's relatively high-converting compared to mobile phones, I can't really use this data. This is that phrenology problem all over again. Also, while it's difficult to read at this size, 2016 through 2020 are all asterisks; I think this is a single year's worth of data from 2015 that they just extrapolated with a suspiciously linear curve, so there are a few reasons I don't really buy this chart. What we can do is find other examples like that. I already showed you this once. This one at least did us the favor of telling us that it lumped mobile and tablet together. Now, if you came across this as I did, and were looking for this to inform how important your mobile site optimization projects might be, you'd be forgiven for thinking that it's about as important as your desktop optimization projects.

[00:17:30]

However, there's a couple of reasons that that would be an incorrect interpretation of this chart. One, this is global data, so outside of the US market there is a higher preponderance of using mobile as a primary device, and that exaggerates the effect if you're a domestic company. If you're global, that's fine. The other thing is I don't know where traffic is on your P&L, but it's not on mine, and you can't buy a beer with traffic. Therefore it doesn't actually count for anything, and this is guilty of that. It's looking at traffic only. The other thing that we're going to do is this kind of breakdown, and this is also from StatCounter. The odd thing is that it doesn't add up to the same thing as what the other chart was showing us. There's a difference in their methodology and they don't really tell you what the difference is.

[00:18:00]

[00:18:30]

This is once again global. It's still traffic share, but once again, mobile looks like it's about as important as desktop and then tablet is almost nothing. Now, global data, just like I said, it emphasizes mobile, it deemphasizes tablet, and that's why you see such a small slice here. If I take my data and try and reproduce this chart, what does it look like? It actually looks pretty similar. This is still on traffic, but you can see the tablets are obviously a much larger slice and then the relative performance of desktop and mobile is still there, but this is still the phrenology version. Let's kick on the X-ray. I've put each of the contributing sites that fit this type of research in here as a horizontal bar, but if you just sort of blur your eyes a little bit, you can see that that's about the same kind of average spread, but there's a huge amount of variety within there. In fact, I'd like to highlight a few of these.

[00:19:00]

[00:19:30]

The blue line and then the purple line above it as well, to a different degree, are fairly typical on desktop, but they have a relatively small mobile share of traffic, and a relatively high tablet share of traffic. That's an interesting thing. We'll dive into some possible explanations for that, but the first thing I want to do is once again point out that these ... The larger sites are on the top and the smaller sites are on the bottom. These bottom three are all kind of interesting. The orange one is almost exclusively a mobile site. I almost want to go back and look at that account, and see what are they doing inbound wise that's leading to such a preponderance of mobile users on that site, and then almost no tablet users. There's something almost suspicious about that one, and then the other two, despite being even smaller still, have something that's more typical in terms of performance, so even the smallest sites can have a fairly normal distribution and then sometimes you get these weird outliers.

[00:20:00]

Once again, traffic isn't revenue, so when we kick it over to revenue, all of a sudden mobile doesn't seem nearly as dominant in terms of where we should be putting our attention. The interesting thing is still that that orange one continues to have a huge amount of revenue, but that might not be because they're exceptionally good at mobile. It's just because they don't have that much other traffic to measure. There's different ways to read that same data there, and we're looking at the smallest sites definitely have the most mobile revenue and the most mobile traffic, so that's interesting that in the middle section over here, we have more share to the mobile side. My hypothesis is that because those are smaller sites, and they're newer sites, they're probably running templates that were responsive from the get-go, but there's an alternative interpretation.

[00:20:30]

[00:21:00]

Larger sites with more resources might have a more developed desktop experience, so even that conclusion might not be sound because the majority of the money is still coming from desktop, so even though you see all of these trends that say mobile first, mobile first, mobile first, yes, mobile's growing. Mobile's critical, especially multi-device shoppers and such, but right now don't forget that desktop is still paying the bills, so as you're working on your responsive templates, be sure to spend a good chunk of your time on the full-size view and not just myopically focus on the mobile version. I'm not saying don't focus on mobile, just remember where the money is coming from that keeps the lights on. Another kind of behavior that I've seen across a lot of site owners is an excessive level of attention paid to new to file customers.

[00:21:30]

[00:22:00]

A lot of sites are overly optimistic about their retention rates, and so they think that they need to spend more money where there are more new customers versus returning because in their minds that customer's going to come back anyway. They're more likely to come back, but by no means guaranteed to, so I was always sort of skeptical of this. The other thing that happens is using the new versus returning stats in Google Analytics, as if you were comparing new shoppers to people who have purchased before, and that is actually wildly incorrect. What it's actually telling you is whether or not that person bought in their first session, if you're looking at the new customer segment in Google Analytics, or whether they are a multi-session shopper. That could be multiple ... You know, just two days back to back. They might have placed two orders. They might have placed one order. They might have not placed any orders, so just be sure that when you're looking at this report, you're not thinking in terms of repeat customers. These are repeat visitors.

[00:22:30]

[00:23:00]

But there still is a difference in the behavior of those two segments, so this is fairly typical. It actually is a little bit more biased toward returning than the sites that I personally managed, but most of the time I see a lot of this 50/50 split. Once again, just like we saw with the mobile versus desktop, one of these has a much larger revenue contribution and that's either higher average order value or higher conversion rate coming from those multi-session shoppers. What this tells me is that we as a group are probably not spending enough time thinking about multi-touch attribution, but the bottom line is don't think that the new customers are ultimately more important than the returning customers, just because they're new to file. Realistically, most of those returning customers could just as easily have gone to Amazon this time around even if they've been to your site before. Yes, you need new customers, but the money that's keeping the lights on is mostly coming from multi-session shoppers and returning customers.

[00:23:30]

[00:24:00]

Even that 50/50 split is not really a great rule of thumb. Here are all the sites that I had split up, and obviously I didn't sort it from largest to smallest because it was way too hard to read, but you can see that there's a lot of variety in there. If you were one of the top several sites or one of the bottom several sites, you'd be forgiven for looking at that 50/50 split and thinking, "Well, that's what I should have too." Well, not necessarily. The other thing is that while I didn't sort this by site size, there was a correlation between site age and this, and that's because older sites simply have more returning customers to possibly draw in. Not only that, smaller sites are adding fewer new members to their returning customer segment per day, so there's a couple of factors where larger, older sites end up biased more toward returning than the smaller sites, which tend to bias far toward new.

[00:24:30]

[00:25:00]

The way you can think about it is if you spin up a Shopify Plus site tomorrow, all of your customers will be new visitors, right? This is the same kind of pattern, just taken out over years instead of minutes, but once again, what about revenue? Now, this is far noisier, so even though the bottom ... This is, you know, if it's bottom here, it's bottom here, so these are the same order of sites. If you're the smaller sites, even though you have a ton of new shoppers, most of the revenue's still coming from multi-session and returning shoppers, so that ends up being sort of thrown back in the face of that assumption that new to file is somehow more valuable out of the gates. Now, if you want to do some LTV calculations, that's a different story, but just using the Google Analytics data like this, you can see that maybe that's not the best approach. Maybe there should be some sort of hybrid approach that takes into account the fact that those multi-session and returning shoppers are more likely to convert.

[00:25:30]

[00:26:00]

Like I said, this is tied to the value of multi-touch attribution, because they came from somewhere if they're a multi-session shopper, so if they came to my site yesterday and then they came to my site today, I should know where they came from yesterday if I want to get a full picture of the story. There are other ways to approach this data. Here I've plotted the percentage of revenue coming from new visitors, so if all of your revenue came from new visitors, it would be very high on the chart, versus the percentage of new sessions. If you have a lot of new sessions coming to your site, single-session shoppers, then it would be far to the right. The brand versus retail split is a little bit interesting here. The big brands tend to be clustered down into one corner, but the smaller dots up and out to the side show that preponderance of more new traffic, but they also have a higher share of revenue coming from those new visitors, relative to their numbers.

[00:26:30]

[00:27:00]

What I found interesting about this actually is those big dots, especially the gold dot and the big blue dots down in the bottom. You'll notice that the larger, more established sites do have a smaller percentage of new sessions in traffic. That's because they have a lot of returning sessions. They were the ones at the top of that session stack slide, but you'll notice they're not much farther down on the chart, so they're not getting a larger share of their revenue necessarily coming from that larger percentage of returning customers. Oh, I had two of the same slide. No problem. All right, so here's another way to look at this. I'm now plotting returning value per session versus new value per session, and we can see a pattern here. Everything is pretty much above this one-to-two line, so returning shoppers tend to be worth about twice as much compared to new shoppers.

[00:27:30]

Look at the big dot in the middle: that's four dollars per session for a returning visitor versus two dollars per session for a new visitor. Remember the pie chart of new versus returning revenue? This is why, and this is the distribution behind it. Here's another approach: now we're just looking at the conversion rate itself, and there's an interesting trend here too. We're looking at roughly two-to-one again, so that two-to-one conversion pattern is producing the pattern in value per session. But I want to highlight one of these sites specifically. While its value per session is pretty much on that two-to-one trend, it's under the average trend on conversion rate. Let me say that again: the value per session is on trend, but it has a lower-than-average conversion rate on returning customers.

[00:28:00]

[00:28:30]

If that's the case, if the conversion rate is lower but the value per session is the same, then the AOV must be higher to compensate, and if we swap in this other chart, we can see that that's the case. Here is returning average order value versus new average order value: the new average order value is $150 for that dot, but the returning average order value is only $100. This was very interesting to me. Maybe this was just me and my company, but we assumed that if somebody was going to buy a $1,500 pool pump, they might buy a $50 bucket of chlorine first to get a sense of whether they could trust the site, then come back and place the large order. We don't see any behavior suggesting that's the case, because if it were, you'd see a much higher average order value on the left chart relative to the first purchase. The first purchase generally ends up being equal to or higher than subsequent purchases.
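The arithmetic behind that inference is the identity: value per session = conversion rate x average order value. So if value per session holds steady while conversion rate drops, AOV has to rise. A quick sketch with illustrative numbers:

# value per session = conversion rate * AOV (illustrative numbers only)
def value_per_session(conversion_rate: float, aov: float) -> float:
    return conversion_rate * aov

print(value_per_session(0.04, 100.0))  # 4% CR at a $100 AOV -> $4.00/session
print(value_per_session(0.02, 200.0))  # half the CR needs 2x the AOV -> $4.00/session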

[00:29:00]

[00:29:30]

Bottom line: yes, new-to-file customers are important, but multi-session and returning customers are producing two to three times as much revenue per shopper and per session. Once again, think about multi-touch attribution, and do a little research if you're not familiar with it. Google Analytics makes it easy to get a sense of it without being as technical as you needed to be five years ago. That's new versus returning. The next rant is demographics. This is the most abused and misused type of segmentation I've seen in our entire industry. It's effectively, in my opinion, stereotypes run rampant, and I couldn't believe the data would support it. Now let's see if I'm right. There are a couple of different stereotypes I'm going to attempt to blow up. We'll see what my hit rate is.

[00:30:00]

[00:30:30]

Here's the first one, and we were definitely guilty of this: Millennials are on their phones more than anyone, Gen X tends to be on laptops and desktops, usually shopping from work, and the Boomer segment tends to use tablets, for a variety of reasons that were mostly driven by stereotypes. This is something we can measure. We have enough data to compare the demographics data in Analytics against device types and see if there's anything to it. Here's traffic, and there's definitely a difference between the three. Mobile and desktop share the same scale, while tablet is obviously a smaller sample, so I used a different Y scale there. Relatively speaking, tablet does skew 55-plus. Desktop goes down to 25-plus, but it's definitely concentrated in that 25-to-45-plus range, and mobile has the largest percentage of traffic coming from 25 to 45.
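To reproduce this kind of breakdown, here's a hedged pandas sketch. It assumes a hypothetical export with ageBracket, deviceCategory, and sessions columns, which you'd adapt to your own report's names.

import pandas as pd

# Hypothetical GA export: one row per (ageBracket, deviceCategory).
df = pd.read_csv("ga_age_by_device.csv")

pivot = df.pivot_table(index="ageBracket", columns="deviceCategory",
                       values="sessions", aggfunc="sum")
print(pivot / pivot.sum())  # normalize each device column to traffic shares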

[00:31:00]

[00:31:30]

The mobile segment ends up being a little broader than we expected, but this is an interesting pattern. What happens when we switch from traffic to revenue? It exaggerates, because the oldest and youngest segments are producing the least revenue relative to their share of each device type. So the possibility here is that this might not be a behavioral pattern so much as an economic one: it might not be because shoppers are old or young that the revenue per device type differs, it might just be income. If we then go back to the individual sites, we start to see the variety. Each color represents one of my age segments, younger to the far left and older to the far right, and I've switched back to stacking the sites from largest to smallest.

[00:32:00]

[00:32:30]

This is noisy. There's not much of a pattern here, but a few sites are particularly interesting. The top one of these two has 75-plus percent of its revenue coming from users 55 and older. That's unusual, and it's not a small site. The one at the bottom, which is smaller by traffic but not necessarily by revenue, has almost 75% of its revenue coming from, what is that, 35 and under. If either of these two sites tried to apply general, broad-spectrum national averages, their site wouldn't fit that model at all. They couldn't use that data, and if they did, they'd be led astray by it. But if we switch from revenue to value per session, the pattern starts to normalize pretty nicely. Even if a random mix of ages comes to your site, most of them are producing about the same value per session.

[00:33:00]

[00:33:30]

There's a difference between quality and quantity in your shoppers. The standard assumption is that if 75% of my shoppers are from this demographic, then I should focus on it, but it might be the case that all of your shoppers are roughly the same value. I do want to point out one of these: this site has a higher value per session in the 25-to-35 range, so it's the exception to my rule, but as you can see, this is far more predictable than it was by revenue. The value per session is simply more consistent. So that stereotype was true by volume but not by quality; I'll give myself half credit for that one. The next one has been a part of online shopping since Amazon first came on board: gender stereotypes. I intentionally chose the most deplorably stereotypical female-shopper stock image. I mean, she's got a couch, yet this poor woman is lying on the ground with a credit card that has no numbers. She's obviously disturbed, and fictional.

[00:34:00]

My assertion is that this person doesn't exist, and that we all need to stop thinking in terms of this shopper who's been crammed down our throats for the past decade. Am I right? Well, here's the revenue share by gender. I'm going to skip traffic entirely and get to brass tacks, and as you can see, there's a wide variety of gender distribution here. So if I'm selling product X and assuming my shoppers are all female, not only am I potentially wildly incorrect, what am I going to do with that assumption? Am I going to be one of those schmucks who just makes the product pink and calls it targeted at women? It drives me crazy. The next thing we do is check the value per session, and this is very interesting for a couple of reasons. One, the value per session is fairly evenly distributed, with the one exception of the site second from the bottom. Two, the correlation is actually inverse.

[00:34:30]

[00:35:00]

What you can learn from this: if, for example, I'm on a skin-care site, I might be in the smaller demographic there, but I might be really into skin care. And if a woman is on an after-market Harley parts site, she might convert at a higher rate because she's really into after-market Harley parts. Once again, quality and quantity are not the same, and I hear too many instances of people trying to use one to explain the other. Differences in volume are not the same as differences in value. I see this all the time in AdWords, where people say, "Well, there's more business during the hours of 10:00 to 2:00, so I'm going to increase my bids." That's not reasonable; it's not what that tool is for. If this all sounds a little vague, here's an analogy: I'd rather have one good craft beer than an entire case of Natty Ice, and that's what we're talking about here. Volume and quality are completely different variables.

[00:35:30]

[00:36:00]

The next thing I'm going to go after is page speed. Especially in the past three to five years, this has become dogmatic: you have to make your site faster. This one seemed more reasonable to me; I went in thinking, "This is probably correct," right? Google even uses it as a ranking factor, and that alone means you should probably put some weight on it, but let's see how it correlates with other metrics. You'll see it summarized in a bunch of blog articles and infographics, and I love the Kissmetrics blog, they're one of the best out there in my opinion, but they're guilty of this kind of nonsense: publishing a graph of what users said they thought about speed. What people say and what people do are not at all connected. If you look at any of the research on the correlation between survey responses and actual behavior, a chart like that is not going to tell you anything useful. Love Kissmetrics though I do, that's not really great.

[00:36:30]

[00:37:00]

Here's average page load versus conversion rate for my sample of sites. There is a correlation; in fact the R-squared is 0.41, which in very rough terms means I can explain about 40% of the differences in conversion rate across sites using page speed alone. That's a massively powerful correlation: slow, low-converting sites on the left, and a fast, very high-converting site on the right. Page speed confirmed; it does matter. But there's a sub-assumption people have been harping on, especially in the past 12 to 24 months, that this is even more true for mobile. It's just not. There's no correlation there; page speed only explains 7% of the variation in conversion rate when I narrow the data to the mobile segment. What may be going on is that expectations are lower: people know the phone experience is slower, so they're more patient. I'm not saying they'll be patient tomorrow, but today they still are.
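For anyone who wants to reproduce the R-squared figure on their own data, here's a sketch using scipy; the load times and conversion rates below are placeholders, not my sample.

from scipy.stats import linregress

load_times = [2.1, 3.4, 4.8, 5.9, 7.2, 8.5]  # average page load, seconds
conv_rates = [4.2, 3.6, 2.9, 2.7, 1.8, 1.5]  # conversion rate, percent

fit = linregress(load_times, conv_rates)
print(f"R^2 = {fit.rvalue ** 2:.2f}")  # share of CR variance explained by speed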

[00:37:30]

[00:38:00]

You should definitely work on your page speed, and you should definitely work on mobile, because it's a growing segment, but just because you see it on the Internet doesn't make it true. That concludes my survey of all this data. It's somewhat stream-of-consciousness, because that's how I do analysis, but I find these little gems along the way. Check your relative channel contributions: if your Venn diagram doesn't look vaguely like the standard ones I showed you, understand why. It might not be a bad thing, but you should think about it and understand why it's different. New visitors are important, but multi-session shoppers pay the bills. Regardless of age, device, or gender, quantity does not equate to quality, so volume and value are not necessarily correlated. And mobile shoppers are more patient than you think, caveat: for now. Remember, when you're reading things, including on my beloved Kissmetrics blog, that there are three kinds of lies: lies, damned lies, and statistics. Thank you.

What Is a Good Feed Structure in Google Merchant Center?

Hey Roy, love your posts and had a question about Google Merchant Center feed structures.  What makes a good feed structure?  Is there a preferred feed structure that you think is best that provides the most useful information for success in AdWords?

There are three major components that really affect how your feed performs, structurally (that is, aside from the content of the titles and descriptions themselves, which I'll cover in a future blog post).

First, where you have variants of a single product (by size, color, etc.), use product groups, with parent and child products. This expands the ways that Google can match your products against specific semantic searches ("XL gray waterproof jacket").

Second, the Product Type field is completely free-form, so use it however you need for the specific attributes of your catalog.  One of the more common and effective uses I've seen for the Product Type field is to mirror the merchandising decisions that went into your site's navigation.

Third, Custom Labels are a huge advantage. Price tiers are the first thing I use them for, but they're also handy for color, size, and gender information, where applicable, as those dimensions can often uncover distinct behavior above the individual SKU level. I'll sketch all three of these components together below.

The advantage of using product groups is that they increase your relevance for searches matching the child products--such as "size 9 boating shoes" or "gray sailing vest". Further, they give you the opportunity to split those variants out within the bidding taxonomy (provided you also pipe that data into custom_labels, or additional values in the product_type chain), which can be helpful if some versions carry a different conversion rate.

For example, a client who sells t-shirts split out both color and size, as some colors are vastly more popular, and sometimes the less common sizes will convert better (say, if you're trying to find an XS or XXXL), given the relatively weak competition for those segments.
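To make those three components concrete, here's a small Python sketch of how you might derive the structural fields for one variant. The product data and tier boundaries are hypothetical, though item_group_id, product_type, and the custom_label attributes are the real Merchant Center field names.

def price_tier(price: float) -> str:
    # Custom label: bucket products into price tiers for bidding.
    if price < 25:
        return "tier-under-25"
    if price < 100:
        return "tier-25-100"
    return "tier-100-plus"

# One variant row in a hypothetical t-shirt feed.
variant = {
    "id": "TSHIRT-RED-XL",
    "item_group_id": "TSHIRT",                   # ties all variants together
    "title": "Classic Tee - Red, XL",
    "price": 19.99,
    "product_type": "Apparel > Men > T-Shirts",  # mirrors site navigation
    "custom_label_1": "red",                     # color, for bid splitting
    "custom_label_2": "XL",                      # size, for bid splitting
}
variant["custom_label_0"] = price_tier(variant["price"])
print(variant)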

Thanks for the question.  If you have any additional questions about feed structures, Google Merchant Center, AdWords, or really anything e-commerce, feel free to drop me a line.

Up and to the right,

Roy