Incrementality is on everyone’s lips right now, and that’s a positive shift. It signals that more teams are finally asking the core question: Are our initiatives truly driving business impact, or are we just getting better at claiming credit? The downside is that many long-standing mistakes are sticking around — they’re just happening more often and with bigger budgets behind them. As incrementality becomes a foundational element of performance marketing measurement, here are three pitfalls to steer clear of.
Mistake 1: Not clearly defining what you want to learn
Problems often begin when a team says, “We want to test Meta,” or “We’re running a PMax lift study,” and the thinking effectively ends there. There’s no precise explanation of what decision the test is supposed to guide or what success would actually look like.

Then the results arrive. The iCPA or iROAS doesn’t line up with attribution. A confidence interval shows a range of possible outcomes instead of a single, tidy number. People are caught off guard. This usually happens when teams rush into testing without carefully considering the purpose.

Before planning a test, teams should be able to answer a few straightforward questions in plain language: What, specifically, are we trying to learn? What prompted this question in the first place? What will we do differently if we learn X, and what will we do differently if we learn Y? Having these answers upfront makes interpreting the results far easier. In an ideal scenario, there’s a decision tree that maps…
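To see why lift results naturally arrive as a range rather than a single tidy number, here is a minimal sketch, assuming a simple randomized holdout and a normal approximation for the difference of two conversion rates. The function name `incremental_lift` and the numbers are hypothetical, not from any specific study design.

```python
import math

def incremental_lift(test_conversions, test_users,
                     control_conversions, control_users, z=1.96):
    """Estimate the absolute incremental conversion rate and a ~95% confidence
    interval, using a normal approximation for the difference of two proportions.
    Illustrative only; real lift studies use the vendor's or team's own methodology."""
    p_test = test_conversions / test_users
    p_control = control_conversions / control_users
    lift = p_test - p_control  # absolute incremental conversion rate
    se = math.sqrt(
        p_test * (1 - p_test) / test_users
        + p_control * (1 - p_control) / control_users
    )
    return lift, (lift - z * se, lift + z * se)

# Hypothetical numbers: 1.30% conversion in the exposed group vs. 1.10% in the holdout.
lift, (lo, hi) = incremental_lift(1300, 100_000, 1100, 100_000)
print(f"Incremental rate: {lift:.3%}, 95% CI: [{lo:.3%}, {hi:.3%}]")
```

Even with 100,000 users per cell, the interval spans a meaningful range, which is exactly the kind of output that surprises teams expecting a single number to compare against attribution.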