Glen: Scott, one of the things that I really enjoyed about this book was the concept of ‘pace layering.’ This is an idea that you borrowed from architecture and brought over into the world of branding. I wonder if you could unpack that a little bit more for us.
Scott: Yes, and it’s a great topic. I’m not sure I’ll be able to do it justice in a few minutes, but all right. First, just to level set for those listening in: this idea of ‘pace layers’ was originally introduced by Stewart Brand with regard to architecture and buildings. The idea is that buildings, which we think of as these very permanent things, actually evolve over time, and what Brand really noticed was that buildings have different layers—the foundation is one layer, the interior services such as electrical and plumbing are another layer, the interior walls are another, the furniture and such another.
So, it has all these different layers and they are all changing over time, but what’s really interesting is that those different layers change at different rates, so things like the furniture or even the interior walls change much more rapidly than, hopefully, the foundation of your building and the site on which it’s built. This idea of pace layering has had some influence in architecture, but over the past 15 years, it has also been adopted for things like IT.
Gartner adapted it to the notion of pace layering IT systems, and what I talk about in the book is adapting it to marketing. I think you’re right in characterizing the way I describe it in the book as more of an operational framework for marketing—at the slow end we have things like the structure of the company and the brand, then we start to get into campaigns, then particular channel executions for those campaigns, then tactics, and then really fast-moving stuff like A/B testing iterations or real-time feedback from social media.
So, looking at that framework of how marketing changes, these different layers are changing at different rates. The idea with pace layering is to say, “Okay, given that change is going to happen, can we design the layers that change more slowly to have a certain pre-designed openness, a pre-designed malleability for the layers above them that are changing more quickly?” That segues into addressing your question more broadly: “How do we think about this in the framework of brands, balancing that trade-off between consistency and adaptability?”
I think it’s right to say that the brand itself, the position that a brand has in people’s minds, is, in my opinion, probably the thing that changes slowest once that position has been established—it’s the classic Al Ries–Jack Trout positioning of being ‘first in the mind.’ Then, on top of that, I think it’s very interesting to talk about the different contexts in which the brand has a relationship with the audience. And, while you might want certain consistencies in how you consume a particular brand—I want my Froot Loops to always taste like Froot Loops; I want to be able to recognize the box as I go through the cereal aisle—there might be other touchpoints, in social media, in fun Internet campaigns or apps, where we have a bit more leeway because the expectation of the consumer for that brand hasn’t yet been established.
* * * *
Glen: Scott, one of the things your book talks a lot about is the idea of experimentation. And it makes me think about a contrast between two different styles of branding that we’re seeing going on. On the one hand, there’s the traditional approach of staying in pretty tight swim lanes in terms of what a brand can take on. And, on the other hand, there are organizations that seem to be able to maneuver and jump across industries, even business models, and their brand seems to still stick and be successful across those.
Some examples might be Google with its daily doodle as it relates to their brand representation, or Virgin as you think about the way Virgin can move from one industry to the next. I wonder if you could talk a little about this idea of the relationship of brands to expectations and your idea of experimentation.
Scott: Yes, I think a lot of this comes down to expectations. If you have expectations around a particular aspect of your brand, then that’s really what people are looking to for consistency. For instance, with Google’s doodle of the day, I agree, it’s fascinating that they’ve developed what is essentially an expectation now. People expect the doodle to change for each particular holiday or for what happened in history on that day.
For other brands, that may not be the case. Their logo might be sacrosanct because that is the expectation their consumers have. This is all new territory for us, but I think perhaps the most exciting spot for experimentation for brands of all kinds is, “Okay, well, what about these new touchpoints that are entering into our lives, where expectations haven’t yet been set?”
* * * *
It’s here that Glen turned the conversation to ‘local maximums,’ and Scott Brinker’s case for broader thinking in marketing optimization.
* * * *
Glen: There’s another topic which I found fascinating when I heard you speak and again when I read it in your book, and that’s the idea of ‘local maximums.’ I think we have both seen, in organizations where data and analysis really rule, that notwithstanding all the effort to make use of all that data, the progress achieved on the basis of all that analysis can be pretty limited. So, I wonder if you can talk a little bit about your idea of local maximums.
Scott: Sure, this is definitely one of the nerdier topics in the book—and there is a lot of competition for that label! The inspiration for this idea of optimization was computer algorithms—given a particular problem that you want a computer to solve, how does it optimize the solution?
A very naive approach is to do stepwise optimization—you pick a value, and then you try a little larger value and see if things improve, and if they do—great. Then you try another larger value, and you keep doing that until it stops improving. Then you might walk back the other way and say, “Okay, let’s make it smaller, does that make it better?”
What’s interesting is that in computer optimization it’s well known that this algorithm isn’t very good. Out of all the possibilities in the world, if you’re going step by step and you find yourself at a local maximum—a point that isn’t very good, or isn’t the best possible outcome—then to get to a higher, more successful outcome, you would have to somehow go down through a valley first, and your algorithm will never get you there. The moment it starts to see things degrading, it stops.
So, the solution to this in computer algorithms is actually to jump around the landscape—not just try these stepwise alterations, but to leap to several spots across the entire range of possibilities to see which of those might be the best place to then try stepwise improvement from. This maps very well onto marketing optimization—conversion rate optimization and A/B testing—where we see a lot of marketing optimization programs suffer from this exact problem.
For instance, take a landing page with a certain conversion rate: we try a different headline, or a slightly different image, or a different button color for the call to action. In the scheme of marketing, these are very small changes—stepwise optimization—and inevitably, what happens is that they hit a wall. After a few rounds of testing, you just find yourself saying, “Well, I guess this is as good as we’re going to do with this headline or this button.” But this is the problem of a local maximum.
What the campaign has failed to do is experiment with some very different ideas of how to engage that visitor. Maybe it shouldn’t just be a landing page—maybe it should be a piece of interactive content, maybe it should be something that connects people into a multi-step experience. Even if it is a landing page, consider significantly different offers or significantly different value propositions, different ways you’d characterize the entire page to your audience. That’s essentially what I’m recommending in the book: for marketers running these A/B test programs, before they even get to that optimization phase, to take the opportunity to try some significantly different concepts, see which might be the best place to start from, and then do optimization from there.
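The two search strategies Scott contrasts can be sketched in a few lines of Python. This is an illustrative toy, not code from the book: the invented two-peak “landscape” stands in for conversion rate, and the function and parameter names are our own. Stepwise climbing stalls on the nearer, lower peak, while leaping to several random starting points before climbing finds the higher one.

```python
import random

def hill_climb(f, x, step=1.0, max_iter=1000):
    """Stepwise optimization: nudge x up or down while f keeps improving."""
    for _ in range(max_iter):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break  # no neighbor improves: a (possibly only local) maximum
    return x

def climb_with_restarts(f, lo, hi, restarts=20, step=1.0, seed=0):
    """Leap to several spots across the whole range, climb from each, keep the best."""
    rng = random.Random(seed)
    starts = [rng.uniform(lo, hi) for _ in range(restarts)]
    return max((hill_climb(f, s, step) for s in starts), key=f)

# Toy landscape: a low peak (height about 3) near x = 0, separated by a
# valley from a high peak (height about 8) near x = 10.
def landscape(x):
    return -0.1 * (x - 10) ** 2 + 8 if x > 5 else -0.1 * x ** 2 + 3

stuck = hill_climb(landscape, 0.0)              # stalls on the low peak
best = climb_with_restarts(landscape, -20, 20)  # reaches the high peak
```

Comparing `landscape(stuck)` (about 3) with `landscape(best)` (about 8) is exactly the trap Scott describes: stepwise testing alone can only polish the hill it started on, while sampling the wider landscape first picks a better hill to polish.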
* * * *
Speaking of broader thinking in marketing, Glen asked about Scott Brinker’s call to answer ‘big data’ with ‘big testing.’
* * * *
Glen: Scott, you use the concept of ‘big testing’ in contrast to something that’s going on today that might be called ‘small testing.’ And, if I read it correctly, you’re suggesting that we’re looking at data in too narrow a way. It makes me wonder, are we in some way delegating work and responsibility in marketing in ways that suppress the voice that says, “What if?” or “I wonder?”, a voice that’s been driving good marketing in the past.
Scott: Yes, I think it’s a really interesting and powerful shift in thinking that marketers are on the verge of. The reason I call it ‘big testing’ is that testing isn’t just something the optimization team does off in a silo—“Okay, we’ll do some testing for what’s the best landing page here.” Rather, testing starts to become something that is pervasive throughout the culture of the marketing organization. Everyone, at every level of the organization, starts to frame their ideas around the questions, “What if? I’ve got a really interesting idea—can we test this?”
The beauty of the digital world and this technology is that testing is now incredibly easy compared to the days of having to actually print physical objects or produce very expensive, high-end TV spots. Back then, the economics made wide-scale testing infeasible. Although we all owe a debt to the database marketers of once upon a time—they were the real pioneers of a lot of these ideas around A/B testing—in the digital world today, we can do that on a scale that’s unprecedented, and it’s cheaper and easier than ever before.
Where the bottleneck is, is that the culture of most marketing organizations has not yet caught up to that possibility. There is still generally a reluctance to approach things throughout marketing as a constant state of experimentation—a constant state of coming up with new hypotheses, testing them, and re-testing them over time. That’s the real power, I think, of being a data-driven organization in today’s world.
* * * *
Concluding their chat, Glen and Scott discussed the role of an underused player in digital marketing—the right brain.
* * * *
Glen: Quite some time ago, Marshall McLuhan said, “We make our tools, and our tools make us,” and I’m wondering if there’s a degree to which the MarTech stack and the analytical tools we’ve got at our disposal today are shaping us, and not always in ways that are most constructive. I’m wondering if the cause-and-effect relationships that we need to theorize from first principles in order to get better at marketing are implied and assumed in the structures handed to us in the software we’ve got. And I’m wondering if there’s a frontier here for pushing back against the pressure these technologies put on us, to become stronger as theorists with the benefit of all the data we’ve got available to us.
Scott: Yes, it’s a really great point. Now we’re starting to get into territory that doesn’t get talked about much: these experiments in marketing are not like experiments in a controlled chemistry lab. There are essentially an infinite number of variables in every test that are outside our control and constantly changing, so with any results we get from these experiments, our ability to precisely, mathematically determine cause and effect is basically somewhere between low and non-existent.
I think within science and mathematics, there’s this notion of Bayesian thinking—this idea that, inside our heads or inside our organizations, we might not be able to precisely measure the relationship between two things, but by doing these tests we start to improve our relative probabilities of making the right choice. It’s not a guaranteed equation but a probabilistic one, and so by running these experiments and matching them with our intuition and our understanding of customers in a more holistic and empathetic way, I think we are able to do a better job of finding the marketing that resonates with the audience than before, when we didn’t even have the luxury of running these experiments at all.
I think you’re absolutely right that there’s a limit to how mathematically optimized this can be, and I think it’s frankly one of the reasons why, in spite of all the wonderful advances coming our way in artificial intelligence and machine learning, there will still be, for a very long time to come, in my opinion, a powerful need for human judgment in overall marketing strategy. We don’t have all the data in the world for a computer algorithm to mathematically determine the right strategy for us.
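Scott’s “probabilistic, not guaranteed” framing can be made concrete with the standard Beta-Binomial model for an A/B test. A minimal sketch in Python, with invented conversion counts purely for illustration: each draw samples a plausible true conversion rate for each page from its posterior, and counting how often B wins yields a relative probability of making the right choice rather than a certainty.

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    """Estimate P(rate_B > rate_A) under uniform Beta(1, 1) priors.

    Each page's unknown conversion rate gets a Beta posterior; we
    sample both and count how often B's sampled rate comes out higher.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

# Hypothetical test: page A converted 30 of 1,000 visitors, page B 45 of 1,000.
confidence = prob_b_beats_a(30, 1000, 45, 1000)
```

With these made-up numbers the estimate lands somewhere in the mid-0.9s: strong but not certain evidence for B, which is exactly the posture Scott describes—updating relative probabilities instead of declaring a mathematically determined winner.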
* * * *
Quarry Chief Innovation Officer, Glen Drummond, spoke with Scott Brinker, author of Hacking Marketing: Agile Practices to Make Marketing Smarter, Faster, and More Innovative. At Quarry, this is Fresh Ideas.