Many years back I worked as an attribution consultant for emerging brands. It was the first time in my career that I was truly wearing a client’s jersey. Neither a vendor nor an agency, I was actually on the brand side, supporting marketing teams with advanced marketing measurement.
Now I could viscerally feel what clients had been saying for years: “I cannot trust attribution reporting for decision-making.” We are not talking about last click here. Multi-touch attribution reporting still had major gaps. It could not report on incrementality, it could not measure mobile media, it could not see inside walled gardens, it could not handle TV, and it was not great at direct mail.
For the clients with whom I was working, that was 90 percent of their budgets. Classic multi-touch attribution was not going to fly. It was time to go back to the drawing board for a completely different approach.
A/B testing for incrementality on media campaigns turned up as the top candidate to try. Marketers were already using A/B testing on landing pages, creatives and a plethora of other functions. It was already happening in bits and pieces on the media side. On re-marketing, for example, some forward-thinking marketers would split the CRM file into two cohorts, use one as the holdout and the other for activation. Then they would check the CRM back end for transactions against the cohorts.
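That split-and-hold-out mechanic is simple to express in code. The sketch below is my own illustration, not any vendor's implementation: the function names, the seeded random split, and the orders lookup are all assumptions about how a marketer might wire this up against a CRM export.

```python
import random

def split_crm(customer_ids, holdout_share=0.5, seed=42):
    """Randomly assign CRM customers to a holdout or an activation cohort.

    A fixed seed keeps the assignment reproducible across reporting runs.
    """
    rng = random.Random(seed)
    holdout, activation = [], []
    for cid in customer_ids:
        (holdout if rng.random() < holdout_share else activation).append(cid)
    return holdout, activation

def incremental_lift(orders_by_customer, holdout, activation):
    """Relative lift of the activated cohort over the holdout.

    orders_by_customer maps customer id -> order count from the CRM back end.
    Lift = (treated rate - control rate) / control rate.
    """
    def order_rate(cohort):
        buyers = sum(1 for c in cohort if orders_by_customer.get(c, 0) > 0)
        return buyers / len(cohort)

    control, treated = order_rate(holdout), order_rate(activation)
    return (treated - control) / control if control else float("inf")
```

In practice the activation cohort is pushed to the re-marketing platform and the holdout is suppressed; at the end of the period, transactions are joined back to the two cohorts and the lift is read off.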
At the end of the test, results would read something like “Re-marketing is 6 percent incremental overall on e-commerce orders, but in January it was 12 percent incremental, and in March it was 14 percent incremental.” Taken individually, those monthly slices are too small to be statistically significant.
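Why a monthly slice fails while the pooled period passes comes down to sample size. A minimal way to check this is a two-proportion z-test on treated vs. control conversion rates; the numbers below are invented for illustration, and the 1.96 threshold is the usual two-sided 95 percent cutoff.

```python
import math

def two_proportion_z(conv_t, n_t, conv_c, n_c):
    """z-statistic for treated vs. control conversion rates (pooled variance)."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return (p_t - p_c) / se

# One month: 1,000 customers per cohort, 12% vs. 10% conversion.
z_month = two_proportion_z(120, 1000, 100, 1000)   # ~1.43: not significant

# Full period: 10,000 per cohort, same underlying rates.
z_period = two_proportion_z(1200, 10000, 1000, 10000)  # ~4.52: significant
```

The same observed lift clears the bar only when pooled, which is exactly why one-off monthly readouts mislead and an always-on design is needed.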
It turns out, incrementality — like conversion rates — is not just one number forever. It has meaningful variations from season to season. It became apparent that marketers needed a testing methodology that was “always on.”
From my experience over the years, I knew any new approach would require:
- A multivariate framework such as Design of Experiments (DoE), because simple A/B testing was not enough for most tactics;
- Independent DoEs for each tactic, depending on how they are activated (e.g., Facebook Prospecting, Retargeting, Catalog Housefile, Catalog Rental);
- Standardized design for each tactic so that it could accommodate channel-specific best practices while being configurable to meet the brand’s learning objectives for that tactic;
- Scalable technology to automate each experiment; and
- Results from DoEs integrated with vendors’ performance reporting to make it actionable.
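To make the “standardized but configurable” requirement concrete, here is one way a per-tactic design could be represented. This is purely a sketch of mine: the class, field names, and full-factorial cell expansion are assumptions, not a description of any production system.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class DoESpec:
    """Hypothetical standardized design for one tactic's always-on experiment."""
    tactic: str                  # e.g. "Facebook Prospecting" or "Catalog Housefile"
    factors: dict                # factor name -> list of levels to test
    holdout_share: float = 0.1   # audience share withheld from all treatment
    kpi: str = "ecommerce_orders"

    def cells(self):
        """Expand the factor levels into full-factorial treatment cells."""
        names = list(self.factors)
        return [dict(zip(names, levels))
                for levels in product(*self.factors.values())]
```

A brand could then instantiate one spec per tactic, with channel-specific factors (budget tiers, audience types) while the surrounding automation and reporting stay uniform.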
We are still learning every day. There are miles to go before we sleep, but today when I see brands that work with Measured scale into Facebook for prospecting using results from the always-on Facebook Prospecting DoE, I feel something I haven’t felt in a while as a measurement professional: job satisfaction!