Sonal Patel
Dec 8, 2021

Finding digital measurement success, part 2: Attribution and incrementality

Too often marketers conflate these two terms. Quantcast's SEA MD disentangles them.

(Shutterstock)

In our first installment in this series ("Finding digital measurement success, part 1: Cohorts vs clicks"), we established that it’s important to use a cohort of metrics to measure success, but there are two additional methods that savvy marketers employ to truly quantify success: attribution and incrementality. While these terms are widely used to solve the measurement challenge, they are often conflated, causing confusion. 

Let’s start by defining what they mean

Attribution and incrementality quantify different things:

Attribution looks at the touch points along the journey that influenced a purchase. It is correlative rather than causal: it assigns credit across touch points, but it cannot prove that any single one caused the sale. It answers the question: “What touch points were associated with a consumer conversion?”
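As an illustration of how credit assignment differs by rule, here is a toy sketch of two common attribution models, last-touch and linear. The function names and the example journey are hypothetical, not Quantcast's methodology.

```python
def last_touch(touchpoints):
    """Give all credit for the conversion to the final touch point."""
    credit = {tp: 0.0 for tp in touchpoints}
    credit[touchpoints[-1]] = 1.0
    return credit


def linear(touchpoints):
    """Split credit evenly across every touch point in the journey."""
    share = 1.0 / len(touchpoints)
    return {tp: share for tp in touchpoints}


# A hypothetical journey with three distinct touch points:
journey = ["display ad", "search ad", "email"]
print(last_touch(journey))  # all credit to "email"
print(linear(journey))      # one-third credit each
```

Note that both rules are conventions, not measurements: they describe how you choose to distribute credit, which is exactly why attribution remains correlative rather than causal.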

Incrementality measures the impact of a single variable on an individual user’s behavior. For digital display marketing, it is most commonly used to measure the impact of a branded digital ad (exposed group) against a Public Service Announcement (PSA) ad (control group). The lift is measured as the percent difference between the two. Incrementality demonstrates the value of advertising, helping to answer the question: “Did my ad result in a purchase?”
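The lift calculation described above can be sketched in a few lines. The conversion rates below are hypothetical numbers for illustration only.

```python
def lift(exposed_rate, control_rate):
    """Incremental lift: the percent difference between the exposed
    (branded ad) group's conversion rate and the control (PSA) group's.
    Each rate is conversions divided by users in that group."""
    return (exposed_rate - control_rate) / control_rate * 100


# Hypothetical example: 1.2% conversion among users who saw the
# branded ad vs 1.0% among users who saw the PSA control.
print(lift(0.012, 0.010))  # roughly 20% lift
```

A positive lift suggests the branded ad drove conversions beyond what the control group would have produced anyway.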

A deep dive into attribution

Attribution comes with its own nuances, so it is important to understand its challenges and the available solutions. In a typical consumer buying journey today there are so many touch points that it becomes difficult to work out which advertising partner actually helped drive the final conversion.

This overview of a journey that ‘Sarah’ might take reveals the challenges of conversion and performance metrics:

To mitigate this, the first thing marketers need to do is apply common sense: what do you expect to happen, and do your campaign results align with your expectations?

The next step is to think about measurement in ‘shapes’ rather than individual numbers (e.g. a single CPA), as these singular figures often hide the reality and complexity of campaign results. It becomes much easier to evaluate the success of tactics when you don’t consolidate results into one number: think of an ad campaign as a portfolio of ad impressions rather than impressions in isolation.

Looking at incrementality

Incrementality testing compares the marketing results between a test group and a control group, which can help advertisers better understand if the KPIs are a direct result of their own campaigns or extraneous effects.

At Quantcast, we define incrementality testing as measuring how a specific marketing event was causally influenced by a media channel or tactic, in this case display, over a set time period and budget.

The challenges here are inventory bias, cookie churn and gamed benchmarks.

  • Publisher inventory bias is caused when ad exchanges and publishers are selective about the inventory they will serve on their sites, which affects the performance of creatives differently.
  • Cookie churn problems stem from cookies moving from control to treatment (and vice-versa), potentially driving lift down to zero because it scrambles the causal signal.
  • Poor or gamed benchmarks occur because your control (or baseline) will drastically impact your results. Some people use non-viewable impressions as a control, but this adds in a new behaviour that could skew results.
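The cookie churn problem above can be illustrated with a toy mixture model: if a fraction of each group's cookies effectively received the other group's treatment, both observed rates are pulled toward each other and the measured lift shrinks. All rates and the model itself are hypothetical, a sketch rather than Quantcast's methodology.

```python
def observed_lift(true_exposed, true_control, churn):
    """Observed lift when a fraction `churn` of each group's cookies
    actually got the other group's treatment, diluting both observed
    conversion rates toward each other."""
    obs_exposed = (1 - churn) * true_exposed + churn * true_control
    obs_control = (1 - churn) * true_control + churn * true_exposed
    return (obs_exposed - obs_control) / obs_control * 100


# Hypothetical true rates: 1.2% exposed vs 1.0% control (a 20% true lift).
for churn in (0.0, 0.25, 0.5):
    print(churn, round(observed_lift(0.012, 0.010, churn), 1))
# At 0% churn the full lift is visible; at 50% churn the two groups
# are fully mixed and the observed lift collapses to zero.
```

This is why churn "scrambles the causal signal": the treatment and control populations stop being cleanly separated, even though nothing about the ad's true effect has changed.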

To help solve these challenges, we recommend:

  • deploying adaptive ‘block’ or ‘allow’ lists to address publisher inventory bias;
  • experimenting only on traffic that is trackable, to address cookie churn problems;
  • running one consistent study across vendors, to set a level playing field with consistent benchmarks; and
  • aligning your measurement and attribution criteria.

Finding digital measurement success with cohorts

Reaching and influencing audiences, cutting through the noise, and coming up with a value proposition that can steer behaviour is incredibly challenging. Reducing all of this to a single metric would be ideal, but it is likely impossible: measurement keeps changing as the approach to digital advertising becomes increasingly multifarious.

As mentioned in "Finding digital measurement success, part 1: Cohorts vs clicks", every metric you look at, every audience you try to reach, every methodology you use, must all be evaluated as part of a cohort, ensuring you weigh up the pros and cons of different approaches. These principles will help you to consistently learn from the continual feedback loop and evolve your own measurement strategy, ultimately improving the performance of your brand.


Sonal Patel is managing director for Southeast Asia at Quantcast.
