Today, 17 senior and well-respected specialists with deep expertise in marketing mix modelling (MMM) – including Les Binet, Grace Kite, and Louise Cook – have published an open letter. It warns that the algorithms behind many new automated and platform-based solutions are too simplistic and not properly adapted to the specific businesses being analysed.
Louise Cook, director at Holmes and Cook, said: “The complexity of current media schedules makes evaluation impossible without modelling. But the push for faster and cheaper solutions is leading to approaches which are simply not able to embrace the complexities of brand response. These approaches risk misleading rather than enlightening.”
The risk marketers take when using these solutions is investing significant set-up time and cost, only to be left with findings they don’t believe and can’t use.
Mike Cross, co-founder at measure monks, said: “Pitch decks can be very convincing, but we’ve heard stories where 10 months of set-up were followed by garbage results that were completely unusable. Making the wrong decisions off a bad model can cost up to 40% of your media-driven revenue, whereas MMM applied properly can deliver +30%. That’s pretty costly to a CMO in austere times.”
While the letter acknowledges the important place for new technology in marketing evaluation, the experts warn that marketing plays out in contexts that are too complex for current AI and machine learning techniques to properly model without human intervention.
Grace Kite, founder at magic numbers, said: “Artificial intelligence isn’t enough for this task; real people are needed. Machines can’t yet understand the nuances of the situation, identify things that are missing in data, or help people get comfortable that results are reliable for important decisions.”
The advice for marketers undertaking evaluation using MMM is not to be too trusting: get three quotes, and ask each of the three providers the series of questions below.
- Does the model include factors like price, the economy, seasonality, and COVID-19? Will you report on how these things affect our business?
- Does the model cover at least 2 years of data, preferably 3?
- Do you measure how upper-funnel ads like TV and YouTube affect outcomes in lower-funnel channels like PPC?
- Will you share advertising response curves with our media planners? (A short sketch of what a response curve captures follows this list.)
- What would happen if results came back and we didn’t believe them, say because they didn’t line up with something else we knew?
- Will you be able to explain the model to our Finance people? Will your numbers line up with theirs?
- Could our analysts who understand regression look under the bonnet at the model and all the tests and statistical due diligence?
- Can you demonstrate to them that your models are good at forecasting?
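For readers new to the term in the response-curve question above: an advertising response curve maps media spend to the incremental sales it generates, typically combining carryover (“adstock”) with diminishing returns. Below is a minimal illustrative sketch in Python; the functional forms and every parameter are invented for demonstration and are not taken from the letter.

```python
import numpy as np

def adstock(spend, decay=0.6):
    """Geometric carryover: this week's ad pressure is this week's spend
    plus a decayed share of last week's pressure."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

def response(pressure, saturation=50.0, max_sales=200.0):
    """Diminishing returns: incremental sales flatten as ad pressure grows."""
    return max_sales * pressure / (saturation + pressure)

# Doubling spend at each step buys progressively less incremental sales.
spend = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 160.0])
print(np.round(response(adstock(spend)), 1))
```

The exact shapes differ between practitioners and models; the point of sharing the curve is that a media planner can see where each channel’s extra spend stops paying back.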
The full text of the open letter follows…
Dear friends,
We evaluation experts care deeply about you getting the best possible response from your advertising.
We’re in the MMM/econometrics business because we love crunching the numbers and solving a puzzle. But we’re also here because we want to see change in the real world, in the form of growth in your businesses.
It’s in this spirit that we’re writing to you. To warn you. Because not all analysis that looks solid from the outside is equally good.
The thing is, your businesses are unique and multi-faceted organisations, operating imperfectly in contexts no other firm has experienced before. And good evaluation that can genuinely untangle the sales driven by ads has to account for that complexity properly.
Equally, and we hope you don’t mind us saying this, even though you are brilliant at a lot of things, you aren’t always that good with data. You sometimes record it in messy ways and you don’t always know for sure what each number means or how well it’s measured.
All this means there is no single approach to evaluation that works in every circumstance. The ways that your world can be different or your data can be mucky or missing are too numerous and curious and complex to handle with “if this then that” code.
Even AI can’t identify what isn’t in the dataset it’s looking at. And it can’t talk to your team about the time you misprinted the barcodes and then innovate a way to use your data to capture the effect. Or find out that you ran a large radio campaign independently and didn’t tell anyone.
Don’t get us wrong. We love a bit of code as much as the next person, and probably more. We all use it to automate data collection and prep, and we automate producing standard outputs too.
But there are some bits that can’t be automated. Making sure models really reflect your business, getting the nuances of what happened in the past straight, and working with your people to get findings about big expenditures acted on are all things that require real life human beings.
We’re sorry we didn’t flag this with you enough when last-click attribution was new. It wasn’t because we didn’t care; it was that every time we tried to speak up, we were accused of being dinosaurs or Luddites. We’re gutted about how much you spent serving ads to people who were already on their way to you.
So that’s why we wanted to write to you now. Because there’s danger on the horizon again.
With cookies disappearing, platform-based, automated versions of MMM are coming to market to solve the problems with attribution.
We’ve looked at the algorithms and we have to tell you, they’re much simpler than they need to be. With MMM, every time you leave out something that matters, you get a wrong number for the effect of advertising, and there are models out there that don’t even include COVID-19 or price.
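To see why a missing driver skews the advertising number, consider a minimal simulation, not drawn from the letter and with all figures invented, in which price cuts happen to coincide with ad bursts. A model that omits price quietly hands the price effect to advertising:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 156  # three years of weekly observations

# Invented world: price dips tend to coincide with ad bursts, so the two
# drivers are correlated: exactly the tangle MMM is supposed to undo.
ads = rng.gamma(2.0, 1.0, n)
price = 10.0 - 0.5 * ads + rng.normal(0.0, 0.5, n)
sales = 100.0 + 3.0 * ads - 4.0 * price + rng.normal(0.0, 2.0, n)
# True effect of advertising: +3.0 sales units per unit of ad spend.

def ols(y, *drivers):
    """Ordinary least squares with an intercept; returns fitted coefficients."""
    X = np.column_stack((np.ones(len(y)),) + drivers)
    return np.linalg.lstsq(X, y, rcond=None)[0]

with_price = ols(sales, ads, price)  # coefficients: intercept, ads, price
without_price = ols(sales, ads)      # the "too simple" model

print(f"ad effect, price included: {with_price[1]:.2f}")    # close to 3.0
print(f"ad effect, price omitted:  {without_price[1]:.2f}")  # inflated, ~5.0
```

This is the textbook omitted-variable problem: the true ad effect in the sketch is +3.0, but the model without price reports roughly +5.0, because sales driven by the correlated price cuts get credited to the ads.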
We keep getting people who have invested in these platforms coming to us saying that the numbers make no sense, that no-one believes them, and that the 9-12 months it took to get set up were wasted.
Now, we’re not that vocal a bunch usually, and we don’t often offer advice unless we’re asked for it. But if you want to know what we suggest you do, it’s shop for MMM like you’re buying a new kitchen. Ask around for recommendations, get at least 3 quotes, and don’t be too trusting.
Ask these questions and look to see if the person you’re talking to squirms before answering yes:
- Does the model include factors like price, the economy, seasonality, and COVID-19? Will you report on how these things affect our business?
- Does the model cover at least 2 years of data, preferably 3?
- Do you measure how upper-funnel ads like TV and YouTube affect outcomes in lower-funnel channels like PPC?
- Will you share advertising response curves with our media planners?
- What would happen if results came back and we didn’t believe them, say because they didn’t line up with something else we knew?
- Will you be able to explain the model to our Finance people? Will your numbers line up with theirs?
- Could our analysts who understand regression look under the bonnet at the model and all the tests and statistical due diligence?
- Can you demonstrate to them that your models are good at forecasting?
We’ll be rooting for you.
Signed:
Les Binet (Adam & Eve DDB)
Grace Kite (Magic Numbers)
Louise Cook (Holmes and Cook)
Mike Cross (Measure Monks)
Andrew Deykin (D2D)
Matt Andrew (Ekimetrics)
Neil Charles (ITV)
Sarah Stallwood (Magic Numbers)
Sara Jones (Pearl Metrics)
Jamie Gascoigne (Measure Monks)
Joy Talbot (Magic Numbers)
Simeon Duckworth (UCL)
Sally Dickerson (Benchmarketing)
Stuart Heppenstall (D2D)
Dominic Charles (Wavemaker)
Steve Hilton (Measure Monks)
Tim Fisher (Measure Monks)