Sonal Patel
Dec 8, 2021

Finding digital measurement success, part 2: Attribution and incrementality

Too often marketers conflate these two terms. Quantcast's SEA MD disentangles them.


In our first installment in this series ("Finding digital measurement success, part 1: Cohorts vs clicks"), we established that it’s important to use a cohort of metrics to measure success, but there are two additional methods that savvy marketers employ to truly quantify success: attribution and incrementality. While these terms are widely used to solve the measurement challenge, they are often conflated, causing confusion. 

Let’s start by defining what they mean

Attribution and incrementality quantify different things:

Attribution looks at the touch points along the journey that influenced a purchase. It is correlative rather than causal: it distributes credit across touch points, but it cannot prove that any single touch point caused the sale. It answers the question: “What touch points were associated with a consumer conversion?”
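To make the credit-assignment idea concrete, here is a minimal sketch in Python (the journey, touch point names and model choices are invented for illustration) comparing two common attribution rules, last-touch and linear, for a single converting journey:

```python
def last_touch(touchpoints):
    """Give 100% of the conversion credit to the final touch point."""
    return {tp: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, tp in enumerate(touchpoints)}

def linear(touchpoints):
    """Split the conversion credit evenly across all touch points."""
    share = 1.0 / len(touchpoints)
    return {tp: share for tp in touchpoints}

# Hypothetical journey for one converting consumer:
journey = ["display_ad", "search_click", "email"]
print(last_touch(journey))  # all credit goes to "email"
print(linear(journey))      # one third of the credit to each touch point
```

Both models "answer" the credit question, but neither proves causation; they simply encode different assumptions about which associations matter.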

Incrementality measures the impact of a single variable on an individual user’s behavior. For digital display marketing, it is most commonly used to measure the impact of a branded digital ad (exposed group) against a Public Service Announcement (PSA) ad (control group). The lift is measured as the percent difference between the two. Incrementality demonstrates the value of advertising, helping to answer the question: “Did my ad result in a purchase?”

A deep dive into attribution

Attribution comes with its own nuances, so it is important to understand its challenges and the available solutions. A consumer’s buying journey today contains so many touch points that it becomes difficult to work out which advertising partner helped drive the final conversion.

An overview of the journey a hypothetical consumer, ‘Sarah’, might take reveals the challenges of conversion and performance metrics.

To mitigate this, the first thing marketers need to do is apply common sense: what do you expect to happen, and do your campaign results align with your expectations?

The next step is to think about measurement in ‘shapes’ rather than individual numbers (e.g. an individual CPA), as these singular figures often hide the reality and complexity of the campaign results. You might find it a lot easier to evaluate the success of tactics when you don’t consolidate results to one single number; think of an ad campaign as a portfolio of ad impressions that aren’t in isolation.
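One way to look at the ‘shape’ rather than a single number is to compare per-tactic CPAs against the blended campaign figure. This is a hypothetical sketch (the tactic names, spend and conversion figures are invented), showing how one blended CPA can hide very different tactic-level results:

```python
# Hypothetical spend and conversion figures per tactic.
campaign = {
    "prospecting": {"spend": 6000.0, "conversions": 40},
    "retargeting": {"spend": 3000.0, "conversions": 120},
    "contextual":  {"spend": 1000.0, "conversions": 10},
}

# A single blended CPA collapses the whole portfolio into one number...
blended_cpa = (sum(t["spend"] for t in campaign.values())
               / sum(t["conversions"] for t in campaign.values()))

# ...while the per-tactic CPAs reveal the shape of the results.
per_tactic_cpa = {name: t["spend"] / t["conversions"]
                  for name, t in campaign.items()}

print(f"Blended CPA: {blended_cpa:.2f}")
for name, cpa in sorted(per_tactic_cpa.items()):
    print(f"{name}: {cpa:.2f}")
```

Here the blended CPA sits near 59, yet the individual tactics range from 25 to 150, which is exactly the kind of complexity a single figure hides.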

Looking at incrementality

Incrementality testing compares the marketing results between a test group and a control group, which can help advertisers better understand if the KPIs are a direct result of their own campaigns or extraneous effects.

At Quantcast, we define incrementality testing as measuring the degree to which a specific marketing outcome was causally influenced by a media channel or tactic (in this case, display) over a set time period and budget.

The challenges here are inventory bias, cookie churn and gamed benchmarks.

  • Publisher inventory bias is caused when ad exchanges and publishers are selective about the inventory they will serve on their sites, which affects the performance of creatives differently.
  • Cookie churn problems stem from cookies moving from control to treatment (and vice-versa), potentially driving lift down to zero because it scrambles the causal signal.
  • Poor or gamed benchmarks occur because the choice of control (or baseline) drastically affects your results. Some advertisers use non-viewable impressions as a control, but this introduces a new behaviour that can skew results.

To help solve these problems, we recommend:

  • deploying adaptive ‘block’ or ‘allow’ lists to address publisher inventory bias;
  • experimenting on traffic that is trackable to address cookie churn problems;
  • running one consistent study across vendors to set a level playing field with consistent benchmarks; and
  • aligning your measurement and attribution criteria.
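Quantcast does not publish its implementation, but a common way to keep trackable users consistently in test or control between measurement passes (reducing the churn that scrambles the causal signal) is deterministic hash-bucketing on a stable identifier. A minimal sketch, with the salt and split fraction invented for illustration:

```python
import hashlib

def assign_group(user_id: str, salt: str = "study-2021-q4",
                 test_share: float = 0.5) -> str:
    """Deterministically assign an identifier to 'test' or 'control'.

    The same ID always lands in the same group, so assignment does not
    drift between measurement passes for IDs that remain trackable.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "test" if bucket < test_share else "control"

# Repeated calls give the same answer for the same ID:
print(assign_group("cookie-abc") == assign_group("cookie-abc"))  # True
```

This keeps assignment stable for a given identifier; it does not, of course, help with identifiers that are deleted and reissued, which is why experimenting only on trackable traffic matters.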

Finding digital measurement success with cohorts

Reaching and influencing audiences, cutting through the noise, and coming up with a value proposition that can steer behaviour is incredibly challenging. Reducing all of this to a single metric would be ideal, but it is likely impossible: measurement continues to change as the approach to digital advertising becomes increasingly multifarious.

As mentioned in "Finding digital measurement success, part 1: Cohorts vs clicks", every metric you look at, every audience you try to reach and every methodology you use must be evaluated as part of a cohort, weighing the pros and cons of different approaches. These principles will help you learn consistently from the continual feedback loop and evolve your own measurement strategy, ultimately improving the performance of your brand.

Sonal Patel is managing director for Southeast Asia at Quantcast.
