
The inherent myth of programmatic buying & creative automation

Acceleration General News

Many companies are now jumping into programmatic buying, and on many levels it may make sense, but it is not as simple as just doing it. Apparent savings may in fact turn out to be costs. The algorithm argument is only one of many that clients should consider.

Statistical techniques, for all their amazing properties, are still just automated systems that apply a given set of input variables to arrive at a conclusion. If the variables are not present, they are simply not considered. It comes down to the age-old adage: a computer’s output is only as good as the input it is given. The same applies to any decision logic – you can only consider the facts before you.

Algorithms are naturally biased: volume dominates

An algorithm exists in the contextual universe that gives it meaning. That context cannot reflect a wider, more diverse group of people unless we deliberately “design” for it by expanding the universe of inputs. We must design specifically for the outcomes we want.

The same principle applies to consumer research. We can only obtain answers to the questions we ask. Hence, when analysis only explains a small percentage of brand purchase drivers, it may not be because there are indeed so few; it is more likely that we did not ask all the necessary questions. Volume has an impact upon analysis.

So what happens most dominates processes and outcomes; what happens less is relegated to lesser status. In practice, this means the number of times I view an item in search or on a site will matter more than the value of that item, unless the algorithm is deliberately designed otherwise.
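To make this concrete, here is a minimal sketch in Python. The products, view counts, values and the 0.7 weight are all hypothetical assumptions; the point is only that a volume-only score and a value-aware score can rank the same items differently.

    # Volume-only scoring versus a score deliberately designed to weight value.
    # All product names and numbers below are hypothetical.
    products = [
        {"name": "budget_gadget",  "views": 950, "value": 20},
        {"name": "premium_gadget", "views": 40,  "value": 900},
    ]

    def naive_score(p):
        # Volume wins: whatever is viewed most comes out on top.
        return p["views"]

    def designed_score(p, value_weight=0.7):
        # Blend normalised views and value so a low-volume, high-value item
        # is not drowned out. The 0.7 weight is something to tune, not a rule.
        views_norm = p["views"] / max(x["views"] for x in products)
        value_norm = p["value"] / max(x["value"] for x in products)
        return (1 - value_weight) * views_norm + value_weight * value_norm

    print(max(products, key=naive_score)["name"])     # budget_gadget
    print(max(products, key=designed_score)["name"])  # premium_gadget

The choice of weight is exactly the kind of “design” decision the algorithm will not make for us.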

Large and small brands need different algorithms

Why is volume so important? For a brand like Amazon, the main need may simply be to over-design in order to reach outlier segments of the market; otherwise it is a volume game. Using behavioural data to build predictive variables only starts to matter when volumes decline. As long as short-term purchase data informs a high enough number of future sales, that is sufficient.

However, when a brand is smaller and needs to fight for its place in the sun by being different or by targeting niche segments, simply relying on generic sets of variables is not sufficient. Then the algorithm needs to be adapted. The same applies to SEO – dominant brands naturally dominate, so smaller brands need to be clever about the words they use to “manipulate” desired behaviour.

The “design” of algorithms

To be properly representative, we must capture a wide enough range of algorithms to produce the variance we need to segment people into smaller groups. The combined output of these algorithms ultimately needs to let us talk to individual consumers in a highly personalised way. Yet unless the input factors represent the smaller consumer segments, we simply won’t know they are there, let alone have sufficient variables to explain what they want and need. We must design for them. Again, volume mesmerises the marketing automation process today, and that is dangerous because it undermines the real value of marketing technology.
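As an illustration of why missing input variables hide small segments, here is a short, hedged sketch using synthetic data and scikit-learn. The variable names (“purchase frequency”, “sustainability interest”), the segment sizes and the clustering set-up are assumptions made purely for this example.

    # A small segment is only visible to clustering when the variable that
    # separates it is actually captured. All data here is synthetic.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # 950 mainstream buyers and 50 niche buyers. Both look identical on
    # purchase frequency; only the second variable separates them.
    mainstream = np.column_stack([rng.normal(5.0, 1.0, 950),    # purchase frequency
                                  rng.normal(0.1, 0.05, 950)])  # sustainability interest (low)
    niche = np.column_stack([rng.normal(5.0, 1.0, 50),
                             rng.normal(0.95, 0.05, 50)])       # sustainability interest (high)
    X = StandardScaler().fit_transform(np.vstack([mainstream, niche]))

    # Clustering on frequency alone: the niche group is invisible and the
    # algorithm just splits the mainstream in half.
    narrow = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, :1])
    # Clustering on both variables: the niche group emerges as its own cluster.
    wide = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    print(np.bincount(narrow))  # roughly two halves of the mainstream
    print(np.bincount(wide))    # roughly 950 versus 50 (label order may vary)

Until the “sustainability interest” column is captured, no amount of additional mainstream data will surface the niche segment.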

The crucial question is: should brands try to be unique and creative in their marketing, or does the inevitable drive towards machine learning based on “averaging” algorithms force us to “dumb-down” marketing?

Ten guidelines for marketers to follow
  • Know your product or service category: is it growing, maturing or declining?
  • Is the market saturated? Do we need to look for niche segments to enable wider expansion? If so, how do we expand the algorithm to do this?
  • If the category is growing, differentiation is less important; if it is not, differentiation is key. We need to know which situation we are in.
  • Is our brand a leader or a challenger? Leading brands can leverage all economies; challengers need to work far harder at being different.
  • Is our brand differentiated? If so, how? Once we understand this, we can build this “bias” into our algorithms.
  • Are results declining over time? If so, why? Can a changed algorithm assist or does the problem lie elsewhere?
  • Can we segment algorithm groups? If so, can we learn more about what separates algorithms that are greater or lesser predictors of sales results?
  • Can we build “bias” in by following trendsetters or up-weighting data from groups that demonstrate differences? (A short sketch of up-weighting follows this list.)
  • Can we test different options and assess results? The fewer resources we have, the more testing we need.
  • Can we expand diversity? If so, will incrementally deeper and more creative messaging give us better return on investment?
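The up-weighting guideline above can be made tangible with a minimal sketch, again using synthetic data: the segment sizes, the “eco_affinity” variable and the 5x weight are illustrative assumptions, not recommendations.

    # Up-weighting rows from a small, behaviourally distinct segment so a
    # model does not learn the majority pattern alone. Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_major, n_niche = 1900, 100

    # Majority buyers: low eco_affinity, respond at a flat ~30% baseline.
    # Niche buyers: high eco_affinity, respond consistently.
    eco_affinity = np.concatenate([rng.normal(0.0, 0.3, n_major),
                                   rng.normal(1.0, 0.3, n_niche)]).reshape(-1, 1)
    responded = np.concatenate([rng.binomial(1, 0.3, n_major),
                                np.ones(n_niche, dtype=int)])

    # Up-weight niche rows 5x so their signal is not drowned out by volume.
    weights = np.concatenate([np.ones(n_major), np.full(n_niche, 5.0)])

    unweighted = LogisticRegression().fit(eco_affinity, responded)
    weighted = LogisticRegression().fit(eco_affinity, responded, sample_weight=weights)

    # The eco_affinity coefficient is larger in the weighted model, i.e. the
    # niche segment's behaviour now shapes the prediction.
    print("unweighted coef:", unweighted.coef_.ravel().round(2))
    print("weighted coef:  ", weighted.coef_.ravel().round(2))

The weight itself is a “bias” we choose to build in, which is precisely why it has to be a deliberate design decision rather than something the averaging process arrives at on its own.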

To conclude

The most important issue is to be aware of algorithm bias. As data sources and points grow, this problem will decrease. But in most statistical techniques, the most salient data points often dominate and will continue to do so. We have to design to avoid this.

So start with the status of your brand and what will drive behaviour. Do not let the natural “force of gravity” (volume) dictate strategy.

About the author: Acceleration - General News

Acceleration enables the transformation of marketing organisations. By building new data- and technology-enhanced capability, Acceleration stewards a step change from marketing that is fragmented, static and product-centric to marketing that is orchestrated, agile and customer-centric.

Part of Wunderman, Acceleration employs 150 strategic marketing technologists globally.
