Sonal Patel
Nov 20, 2020

How to find your way in digital measurement

Measurement is flawed, fragmented and potentially misleading for marketers. Quantcast's SEA MD lays out advice for weighing the pros and cons of different approaches, reducing bias and making informed decisions.


The rise of ad blockers and super apps, coupled with markets that have leapfrogged straight to mobile, makes Asia very different when we think about advertising measurement. Yet we have never really addressed the legacy measurement challenges that persist despite technological advancement.

Measurement is the backbone of assessing the success of an ad campaign, and success can only be defined by setting a goal and then using analytics and measurement to compare results against that goal.

The key challenge I’ve found during my time working both agency-side and for some of the world’s largest tech players is that measurement is fragmented. There are so many acronyms that fall into the measurement bucket for marketers to choose from—from CPA to CTR and from ROAS to CPM. And the challenge for marketers and media agencies is understanding which metrics best define success across the board.

Let’s start with the importance of measurement

There are three primary reasons for measurement: budget justification, budget allocation, and continual improvement. The rationale and audience behind each are different and will require different reporting metrics. For example, when asking your CMO for budget allocation you will need metrics that show which channels perform best against your marketing plan, whereas continual improvement requires drilling deeper into campaign-level metrics.

The system is flawed

Whilst there are so many metrics to choose from when thinking about measurement, the system is flawed. The holy grail of measurement is ‘scaled utopia’. Utopia here refers to the direct link from a marketing message being seen to a purchase driven by that message, with the advertising dollar attributed to that conversion in a concise way. It sounds simple, so why is this so hard to measure?

In digital marketing, I boil it down to four areas (three of which define ‘big data’):

  • Velocity of data
  • Volume of data
  • Variety of data
  • Veracity of data

Regarding the last one, IBM refers to the veracity of data as its "uncertainty". I use the term veracity simply to mean that data is leveraged so aggressively it is often misused. This is why measurement is so difficult: we have a deluge of data, and it is hard to find valuable insights that directly lead to an outcome. And this has to happen in real time, while managing the volume, velocity and variety of that data.

The challenge with click-based metrics

A key flaw in digital advertising measurement, which we see time and time again across Asia, is the focus on clicks as a success metric. But clickers are not your converters. Consumers are usually online for a purpose—to read content, to shop, to socialise. They are not necessarily in an environment they want to click out of: they are open to seeing content within the page they’re browsing, but may not want to leave it. Whilst clicks are an easy metric to report on, in reality almost all of the clicks you generate aren’t making you any money. So why not invest in areas that will drive revenue for your business?
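To make the point concrete, here is a minimal sketch with entirely hypothetical numbers (the figures below are illustrative, not benchmarks): a campaign can report a healthy-looking click-through rate while only a small share of its conversions come from the people who actually clicked.

```python
# Hypothetical campaign totals -- illustrative only, not benchmarks.
impressions = 1_000_000
clicks = 1_200                   # a 0.12% CTR looks respectable on a report
conversions_from_clickers = 15   # people who clicked the ad and then bought
conversions_from_viewers = 185   # people who saw the ad, never clicked, bought later

ctr = clicks / impressions
total_conversions = conversions_from_clickers + conversions_from_viewers
share_from_clicks = conversions_from_clickers / total_conversions

print(f"CTR: {ctr:.2%}")
print(f"Share of conversions that came from clickers: {share_from_clicks:.0%}")
```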

So what approach should you take?

Good measurement involves weighing up the pros and cons of different approaches, establishing clear methods to reduce bias, and making decisions knowing that the measurements you choose set the tone and the incentive for every part of your marketing supply chain. We suggest using a cohort of metrics to measure your campaigns, rather than relying on any single number.


When we think about metrics, it is important to consider how our approach can support the continual improvement of our campaigns.


In this test-and-learn cycle, your rate of forward progress is directly proportional to the rate at which you can iterate: how quickly can you get from an idea to execution to measurement, and then go again?

Measurement here needs to tell you two things:

  • Did I execute my plan well?
  • Is it impacting anything?

These are the two things that really matter when it comes to measurement for the purpose of continual improvement.

There are a number of challenges when we look at digital metrics. If you want more information about the pros and cons of each, we recently hosted a webinar on the topic.

When it comes to performance campaigns, the usual measures of success are metrics such as clicks, conversions and sales. But, as mentioned above, clickers are not your converters. This is why hygiene metrics are important: they help you balance risk against your goals. While they show an ad’s viewability, they do not show customer engagement and so cannot be used in isolation.
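One way to put the ‘cohort of metrics’ idea into practice is to report delivery, hygiene and outcome metrics side by side rather than leaning on any single number. The sketch below is my own illustration with hypothetical totals, and the particular metrics chosen (CPM, viewability rate, CTR, conversion rate, CPA, ROAS) are an assumption, not a prescribed set.

```python
# Hypothetical campaign totals -- purely illustrative.
spend = 50_000.0                    # media spend, in dollars
impressions = 4_000_000
viewable_impressions = 2_600_000
clicks = 5_200
conversions = 310
revenue = 120_000.0                 # revenue attributed to the campaign

# Report a cohort of metrics together so no single number sets the incentive.
metrics = {
    "CPM ($)": spend / impressions * 1_000,
    "Viewability rate": viewable_impressions / impressions,  # hygiene metric
    "CTR": clicks / impressions,
    "Conversion rate": conversions / impressions,
    "CPA ($)": spend / conversions,
    "ROAS": revenue / spend,
}

for name, value in metrics.items():
    print(f"{name:>17}: {value:,.4f}")
```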

Consider the journey that a shopper, ‘Sarah’, might take; it speaks to the challenges our industry faces with conversion and performance metrics.


With so many touchpoints in the consumer buying journey, it becomes difficult to understand which advertising partner helped drive the conversion. There are two main ways that marketers measure success—attribution and incrementality—and it’s important not to conflate the two. While both try to solve the measurement challenge, they are not the same thing.

Attribution

Attribution looks at the touchpoints along the journey that have impacted purchase. It’s correlative rather than causal, because while it tries to assign credit, it cannot explicitly credit any one touchpoint for the sale. It answers the question: “What touchpoints were associated with a consumer conversion?”
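As a toy illustration of why attribution assigns credit rather than proving cause, here is a minimal sketch (my own example, not any vendor's model) that credits a single conversion under two common rules, last-touch and linear. The same journey yields very different answers depending on the rule, yet neither rule demonstrates that any one touchpoint caused the sale.

```python
# A hypothetical consumer journey: the ordered touchpoints seen before one conversion.
journey = ["display", "social", "search", "email"]

def last_touch(touchpoints):
    """Give 100% of the credit to the final touchpoint before conversion."""
    return {tp: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, tp in enumerate(touchpoints)}

def linear(touchpoints):
    """Split the credit evenly across every touchpoint in the journey."""
    share = 1.0 / len(touchpoints)
    return {tp: share for tp in touchpoints}

print("Last-touch:", last_touch(journey))   # email gets all of the credit
print("Linear:    ", linear(journey))       # each channel gets 25%
```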

Recommendations for how to approach attribution:

  • Apply common sense: What do you expect to happen and do your campaign results align?
  • Aggregate numbers hide reality, so look for the directional signals. Is the tactic working or not? It’s a lot easier to see what is happening when you don’t consolidate to one single number.
  • Think of an ad campaign as a portfolio of ad impressions, rather than judging each impression in isolation.

Incrementality

We define incrementality testing as measuring how a specific marketing event was causally influenced by a media channel or tactic, in this case display, over a set time period and budget.
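Stripped of the operational detail, the arithmetic behind such a test compares conversion rates between an exposed (treatment) group and a held-out control group over the same period. The sketch below is a simplified illustration with hypothetical numbers; it deliberately ignores statistical significance and the practical challenges listed next.

```python
# Hypothetical test results over a fixed period and budget -- illustrative only.
control_users = 100_000          # held out from the display campaign
control_conversions = 800
treatment_users = 100_000        # served the display campaign
treatment_conversions = 1_000

control_rate = control_conversions / control_users
treatment_rate = treatment_conversions / treatment_users

incremental_conversions = (treatment_rate - control_rate) * treatment_users
relative_lift = (treatment_rate - control_rate) / control_rate

print(f"Control conversion rate:   {control_rate:.3%}")
print(f"Treatment conversion rate: {treatment_rate:.3%}")
print(f"Incremental conversions:   {incremental_conversions:.0f}")
print(f"Relative lift:             {relative_lift:.1%}")
```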

Incrementality challenges:

  • Publisher inventory bias: Ad exchanges and publishers are selective about the inventory they will serve on their sites, and this affects creatives differently.
  • Cookie churn problem: Cookies move from control to treatment (and vice-versa). This could drive lift to zero because it scrambles the causal signal.
  • Poor or gamed benchmarks: Your control will drastically impact your results. Some people use non-viewable impressions as a control, but this adds in a new behaviour that could skew results.
  • Inconsistent attribution: Different attribution models have different impacts on measured lift. Trends can look completely different between attribution vendors, sending the wrong signals to the system. Sometimes this is unintentional, but it can also be deliberate, with partners finding ways to game the system to show higher conversion lift.

Incrementality solutions:

  • Adaptive block lists/allow lists to address publisher inventory bias.
  • Experiment on traffic that is trackable to address cookie churn problems (see the sketch after this list).
  • Validate the methodology and execution to address poor or gamed benchmarks—run one consistent study across all vendors to create a level playing field, allowing you to establish benchmarks for future success.
  • Keep it simple to avoid inconsistent attribution—align your measurement and attribution criteria.
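On the cookie-churn point, one common way to keep users from drifting between groups is to derive the control or treatment assignment deterministically from a stable, trackable identifier (for example a logged-in user ID) rather than from a short-lived cookie, so the same person always lands in the same group. A minimal sketch of that idea, with a hypothetical salt and holdout size:

```python
import hashlib

def assign_group(user_id: str, salt: str = "incrementality-test") -> str:
    """Deterministically bucket a stable ID into control or treatment.

    The same ID always hashes to the same bucket, so users cannot churn
    between groups the way short-lived cookies can.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100                        # 0-99
    return "control" if bucket < 10 else "treatment"      # 10% holdout

# Example: trackable (e.g. logged-in) users receive stable assignments.
for uid in ["user-123", "user-456", "user-789"]:
    print(uid, "->", assign_group(uid))
```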

Will we ever find the holy grail of measurement?

Measurement is really hard. Reaching and influencing audiences, cutting through the noise, and coming up with a value proposition that can steer behaviour is incredibly difficult. Reducing all of this to a single metric is akin to finding the holy grail.

Every metric you look at, every audience you target, every methodology you use—they all must be evaluated as part of a cohort.

Ensure you weigh the pros and cons of different approaches, establish clear methods to reduce bias and, most importantly, make decisions knowing that the measurements you choose set the tone and the incentive for every part of your marketing supply chain.

These principles will help you learn consistently from the continual feedback loop and evolve your own measurement strategy.


Sonal Patel is managing director for Southeast Asia at Quantcast.

Source:
Campaign Asia
