Sonal Patel
Dec 8, 2021

Finding digital measurement success, part 2: Attribution and incrementality

Too often marketers conflate these two terms. Quantcast's SEA MD disentangles them.


In the first installment of this series ("Finding digital measurement success, part 1: Cohorts vs clicks"), we established that it’s important to use a cohort of metrics to measure success. But there are two additional methods that savvy marketers employ to truly quantify success: attribution and incrementality. Both terms are widely used to address the measurement challenge, yet they are often conflated, causing confusion.

Let’s start by defining what they mean

Attribution and incrementality quantify different things:

Attribution looks at the touch points along the journey that have influenced a purchase. It is correlative rather than causal: it tries to assign credit, but it cannot definitively attribute the sale to any single touch point. It answers the question: “What touch points were associated with a consumer conversion?”
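To make the distinction concrete, here is a minimal, illustrative Python sketch (not Quantcast's methodology) of two common rule-based attribution models, last-touch and linear, which spread correlative credit across the touch points of a single journey:

```python
# Illustrative only: two common rule-based attribution models that spread
# correlative credit across the touch points of one consumer journey.

def last_touch(touchpoints):
    """Give 100% of the conversion credit to the final touch point."""
    return {tp: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, tp in enumerate(touchpoints)}

def linear(touchpoints):
    """Spread the credit evenly across every touch point on the journey."""
    share = 1.0 / len(touchpoints)
    return {tp: share for tp in touchpoints}

journey = ["display ad", "social post", "search click", "email"]
print(last_touch(journey))  # all credit to 'email', the last touch
print(linear(journey))      # each touch point gets 0.25
```

Neither model proves causation; they are simply different conventions for associating touch points with a conversion.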

Incrementality measures the impact of a single variable on an individual user’s behaviour. For digital display marketing, it is most commonly used to measure the impact of a branded digital ad (exposed group) against a Public Service Announcement (PSA) ad (control group). The lift is measured as the percent difference between the two. Incrementality demonstrates the value of advertising, helping to answer the question: “Did my ad result in a purchase?”
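As a minimal sketch with made-up figures, the percent-difference lift calculation described above looks like this:

```python
# Minimal sketch of the exposed-vs-PSA lift calculation, using hypothetical numbers.

def lift_percent(exposed_conversions, exposed_users, control_conversions, control_users):
    """Percent difference between the exposed and control conversion rates."""
    exposed_rate = exposed_conversions / exposed_users
    control_rate = control_conversions / control_users
    return (exposed_rate - control_rate) / control_rate * 100

# e.g. 1.2% conversion among users shown the branded ad vs 1.0% in the PSA control
print(lift_percent(1200, 100_000, 1000, 100_000))  # 20.0 -> a 20% lift
```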

A deep dive into attribution

Attribution is nuanced in its own way, so it is important to understand its challenges and the ways to address them. As the example below shows, a consumer’s buying journey today contains so many touch points that it becomes difficult to work out which advertising partner helped to drive the final conversion.

This overview of a journey that ‘Sarah’ might take reveals the challenges of conversion and performance metrics:

To mitigate this, the first thing marketers need to do is apply common sense: what do you expect to happen, and do your campaign results align with your expectations?

The next step is to think about measurement in ‘shapes’ rather than individual numbers (e.g. a single CPA), as these singular figures often hide the reality and complexity of campaign results. It becomes much easier to evaluate the success of tactics when you don’t consolidate results into one number; think of an ad campaign as a portfolio of ad impressions rather than impressions in isolation.
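As an illustrative sketch with hypothetical tactic-level figures, here is the difference between a single blended CPA and the ‘shape’ behind it:

```python
# Hypothetical figures: one blended CPA hides very different per-tactic results.
results = {
    "prospecting": {"spend": 9000.0, "conversions": 150},
    "retargeting": {"spend": 3000.0, "conversions": 200},
    "contextual":  {"spend": 4000.0, "conversions": 50},
}

total_spend = sum(t["spend"] for t in results.values())
total_conversions = sum(t["conversions"] for t in results.values())
print(f"Blended CPA: {total_spend / total_conversions:.2f}")  # 40.00 -- one number

for tactic, t in results.items():  # the 'shape' behind that single number
    print(f"{tactic}: CPA {t['spend'] / t['conversions']:.2f}")
# prospecting 60.00, retargeting 15.00, contextual 80.00 -- three very different stories
```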

Looking at incrementality

Incrementality testing compares the marketing results between a test group and a control group, which can help advertisers better understand if the KPIs are a direct result of their own campaigns or extraneous effects.

At Quantcast, we define incrementality testing as measuring how a specific marketing event was causally influenced by a media channel or tactic, in this case display, over a set time period and budget.
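One common way to judge whether an observed lift is causal rather than noise or extraneous effects is a two-proportion z-test between the exposed and control groups. The sketch below is a generic statistical example, not a description of Quantcast’s own methodology:

```python
# Generic sketch: two-proportion z-test on exposed vs control conversion rates.
from math import sqrt
from statistics import NormalDist

def incrementality_test(conv_exposed, n_exposed, conv_control, n_control):
    p_exp = conv_exposed / n_exposed
    p_ctl = conv_control / n_control
    pooled = (conv_exposed + conv_control) / (n_exposed + n_control)
    std_err = sqrt(pooled * (1 - pooled) * (1 / n_exposed + 1 / n_control))
    z = (p_exp - p_ctl) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    lift_pct = (p_exp - p_ctl) / p_ctl * 100
    return lift_pct, z, p_value

lift_pct, z, p_value = incrementality_test(1200, 100_000, 1000, 100_000)
print(f"lift {lift_pct:.1f}%, z = {z:.2f}, p = {p_value:.4f}")
```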

The challenges here are inventory bias, cookie churn and gamed benchmarks.

  • Publisher inventory bias arises when ad exchanges and publishers are selective about the inventory they will serve on their sites, which affects the performance of different creatives in different ways.
  • Cookie churn occurs when cookies move from the control group to the treatment group (and vice versa), potentially driving measured lift down to zero because it scrambles the causal signal.
  • Poor or gamed benchmarks matter because your control (or baseline) will drastically affect your results. Some people use non-viewable impressions as a control, but this introduces a new behaviour that could skew results.

To help solve this, we recommend deploying adaptive ‘block’ or ‘allow’ lists to address publisher inventory bias; experimenting only on trackable traffic to address cookie churn; running one consistent study across vendors to set a level playing field with consistent benchmarks; and aligning your measurement and attribution criteria.
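As a hedged sketch, with hypothetical field names and an imaginary allow list, the pre-measurement hygiene described above could look like this:

```python
# Hypothetical sketch: keep only impressions that are trackable, viewable and served
# on allow-listed publishers before measuring lift, so inventory bias, cookie churn
# and gamed baselines don't pollute the test.

ALLOW_LIST = {"examplepublisher.com", "anothersite.com"}  # adaptive allow list (hypothetical)

def eligible_for_measurement(impression):
    return (
        impression["publisher"] in ALLOW_LIST   # address publisher inventory bias
        and impression["cookie_trackable"]      # address cookie churn
        and impression["viewable"]              # avoid non-viewable, gamed baselines
    )

impressions = [
    {"publisher": "examplepublisher.com", "cookie_trackable": True,  "viewable": True},
    {"publisher": "unknown-exchange.net", "cookie_trackable": True,  "viewable": True},
    {"publisher": "anothersite.com",      "cookie_trackable": False, "viewable": True},
]
measurable = [imp for imp in impressions if eligible_for_measurement(imp)]
print(len(measurable))  # 1 -- only the first impression qualifies for the study
```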

Finding digital measurement success with cohorts

Reaching and influencing audiences, cutting through the noise, and coming up with a value proposition that can steer behaviour is incredibly challenging. Reducing this to a single metric would be ideal, but it is likely impossible: measurement continues to change as the approach to digital advertising becomes increasingly multifarious.

As mentioned in "Finding digital measurement success, part 1: Cohorts vs clicks", every metric you look at, every audience you try to reach, every methodology you use, must all be evaluated as part of a cohort, ensuring you weigh up the pros and cons of different approaches. These principles will help you to consistently learn from the continual feedback loop and evolve your own measurement strategy, ultimately improving the performance of your brand.


Sonal Patel is managing director for Southeast Asia at Quantcast.

Source:
Campaign Asia
