Data-driven decision making has been the mantra of most good CEOs and CMOs over the better part of the last decade. They want all marketing decisions to be based on solid data that was previously unavailable but is now abundant. But data can be deceiving: it may lead you in one direction when the right answer lies in the opposite one. Allow me to explain with this marketing case study from my Restaurant Furniture Plus business.
The marketing strategy when we acquired the company
We acquired Restaurant Furniture Plus in 2018. Up until that point, the founder had relied largely on advertising in the Google Shopping section, with product listings for all of the company's SKUs. I was curious why they were not advertising in the Google Search section with keywords, and her response was, “We tried for a few months, but the data could not prove it was actually working, so we pulled the plug.” I was hopeful that was an upside opportunity for us…if we could figure it out.
Our marketing strategy soon after we acquired the company
One of the first things we did when we started our own marketing efforts was to build out our list of keywords and begin advertising in the Google Search section (while keeping our Google Shopping campaign live). We thought of all the possible keywords around our products, including chairs, tables, stools, etc., and all variations of those words, including extensions for restaurant, hospitality, wholesale, commercial, food service, etc.
Our initial results were not great
We were perplexed. Our initial results were exactly the same as the founder’s results when she had tested Google Search. The conversion data in Google was telling us it wasn’t working, and our agency recommended we shut it off. But that made no sense to me. I knew we had tripled our overall marketing spend, and I could see our revenues growing rapidly with that spend. So, I decided to dig a little deeper into the data.
What we learned from the original data
When I started to “peel back the layers of the onion,” I found some interesting insights. First, the campaign as a whole was not working, but pieces of it were. For example, generic keywords like “dining chairs” were not working: they mostly attracted consumers shopping for their homes, and the competitive bidders in that space, like Wayfair and Pottery Barn, were bidding advertising costs up to unprofitable levels. But specific keywords like “restaurant booths” were doing much better at reaching our desired restaurant targets. So, we decided to focus all our efforts on those more precisely targeted keywords and shut off everything else.
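The keep-or-pause decision above boils down to simple per-keyword arithmetic. Here is a minimal sketch of that analysis; all keywords, spend figures, revenue figures, and the profitability threshold are hypothetical, invented for illustration only.

```python
# Hypothetical keyword-level performance data: keyword -> (ad_spend, attributed_revenue).
campaign_data = {
    "dining chairs":         (4000, 3000),   # broad consumer term, bid up by big retailers
    "restaurant booths":     (1500, 9000),   # specific commercial term
    "commercial bar stools": (1200, 6600),   # another specific commercial term
}

MIN_ROAS = 3.0  # assumed profitability threshold, not from the article

# Compute ROAS per keyword and decide whether to keep or pause it.
for keyword, (spend, revenue) in campaign_data.items():
    roas = revenue / spend
    decision = "keep" if roas >= MIN_ROAS else "pause"
    print(f"{keyword}: ROAS {roas:.1f}x -> {decision}")
```

The point is simply that a campaign-level average can hide profitable segments: the blended number may look bad even while individual keywords clear the threshold comfortably.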
Secondly, we uncovered a major attribution problem. Our customers were using multiple devices: starting with a Google search on their mobile phones, but buying from their work computers once back at the office, so we were losing track of where the lead really originated. We immediately turned on Google’s attribution modeling tools, and with proper marketing attribution tracking in place, we learned that our return on ad spend (ROAS) was closer to a profitable 6x than the unprofitable 2x the original reports were showing.
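The attribution gap above is easy to see in a toy calculation. The figures below are invented, chosen only to mirror the 2x-versus-6x gap the article describes: same-device reporting misses revenue from orders that started on mobile but closed on a desktop.

```python
# Hypothetical figures illustrating the cross-device attribution gap.
ad_spend = 10_000

# Same-device reporting only credits purchases completed on the device
# that clicked the ad; cross-device orders look like untracked traffic.
same_device_revenue = 20_000
cross_device_revenue = 40_000  # searched on mobile, bought from the office desktop

naive_roas = same_device_revenue / ad_spend
attributed_roas = (same_device_revenue + cross_device_revenue) / ad_spend

print(f"Same-device ROAS:  {naive_roas:.0f}x")       # looks unprofitable
print(f"Cross-device ROAS: {attributed_roas:.0f}x")  # actually profitable
```

The campaign itself never changed between the two numbers; only the measurement did.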
And lastly, we were managing our agency to optimize the wrong metric. We were pushing them to drive an immediate ROAS. The problem was that the only transactions that happened immediately were the small-ticket online ecommerce orders worth $500 each, not the big $5,000 offline orders we wanted to be closing, which had a longer 2-3 month sales cycle. We immediately shifted gears and told our agency not to worry about immediate ROAS (we would track that in 3-4 months). Instead, the only data point we cared about was driving big-ticket leads into our sales pipeline, knowing those leads wouldn’t close for 2-3 months. In this case, patience in proving ROAS would be a virtue.