When you talk to most database marketers, they will agree on the importance of using a propensity model to target direct marketing.
Over the last couple of decades that form of targeting model has become de rigueur. Indeed, many case studies will show what a difference it can make to marketing returns (sometimes even as good as Pareto’s fabled 80% of sales for 20% of marketing spend).
Such modelling has also become more widespread across businesses & sectors, as software innovations have ‘lowered the bar’ on skills needed to build such models.
Once solely the province of statisticians building regression models in SAS and the like, model building has been opened up by developments in ‘automated modelling’ software, which hold out the promise of less numerate marketers building good-enough models.
I don’t know about your experience with this proliferation of modelling, but, based on what I’ve seen in a number of different businesses, I have some concerns. More firms may now have ‘targeting models’, but all may not be well in this new easy-modelling paradise.
What could go wrong in modelling paradise?
My concern is teams & leaders who appear to have sleep-walked their way into blind use of models, without sufficient thought or clarity as to what they are modelling & whether those models are up to the job.
There can be a number of dangers here, including not spotting ‘overfitting’, models that degrade quickly, and a range of the well-rehearsed concerns that have been voiced since KXeN first came to prominence. However, I want to focus on two others I’ve noticed in a few businesses this year.
The first is the risk that analysts inadvertently model brand appeal/loyalty, rather than marketing response. The second is the risk of building models whose top-decile response rate is still too low to be helpful, due to historically low success rates.
Both these scenarios may indicate that modellers would do better to start again, to rethink what they are trying to achieve & which solution works best. There are other approaches that may help.
So, let’s consider both of those issues in a bit more detail.
Are you clear about your objective?
It’s so easy to consider this step blindingly obvious. You are modelling the people more likely to buy what you are marketing, right? Well, yes & no.
If you take that approach, you may end up with the unintended consequence of building a model that simply compares people who purchased your product versus those who did not (irrespective of whether or not they were marketed to). Such models have the problem of combining/averaging across a number of different potential causal relationships.
If you don’t strip out those who were never marketed, you will be picking up those who would buy anyway (perhaps due to brand appeal, loyalty or just inertia/convenience).
So, step back and think how you intend any targeting model to be used. That should help you set a more precise objective function & thus select a better modelling dataset.
Are you planning to use the model to target an email marketing campaign? Then the real question is which people are more likely to respond positively to your campaign (perhaps combined with purchase or even purchase + retention period). In that example, you may want to limit your modelling dataset to people who previously received email marketing (rather than all media); if you have sufficient data & past respondents within it.
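To make that concrete, here is a minimal sketch in pandas of the difference between the naive dataset and the sharper one. All column names (`received_email`, `responded`, `purchased`) and the toy data are illustrative assumptions, not anything from a real system:

```python
import pandas as pd

# Hypothetical customer-level data; column names are illustrative only.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "received_email": [True, True, False, True, False, True],
    "responded": [True, False, False, True, False, False],
    "purchased": [True, False, True, True, True, False],
})

# Naive target: purchase vs non-purchase, regardless of whether the
# customer was ever contacted. This risks modelling brand appeal/loyalty.
naive = customers.assign(target=customers["purchased"])

# Sharper target: restrict to customers who actually received the email,
# and model response to that campaign rather than purchase alone.
modelling_set = customers[customers["received_email"]].assign(
    target=lambda df: df["responded"]
)

print(len(naive), len(modelling_set))   # full base vs emailed-only base
print(modelling_set["target"].mean())   # response rate among the emailed
```

The point is simply that the filter on `received_email` changes both the population and the meaning of the target, so the resulting model answers the question the campaign will actually ask.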
That brings us on nicely to the next pitfall.
Do you yet have sufficient response data to build a model?
Given the power of modern modelling software (including automated modelling solutions, like SAP’s rebranded KXeN), it is often possible to build a model with limited historical data. However, I would caution against assuming that your ability to build a robust model is a guide as to whether or not it is fit for purpose.
Model quality assurance testing matters, of course. Whether you are considering SAP’s Ke & Kr metrics (for their mathematical optimisation models) or the Gini coefficient for a logistic regression model, you need to be assured that models are robust. Don’t be tempted to skip having a test dataset as well as a development one. You need to ensure your model is not overfitted to one set of data, and that it discriminates well across ‘deciles’.
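As a rough sketch of that discipline, here is the dev/test pattern with scikit-learn, using synthetic data as a stand-in for a real response file. The Gini coefficient is derived from AUC as Gini = 2 × AUC − 1; a large drop from the development set to the held-out test set is a warning sign of overfitting:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a response dataset (class imbalance mimics
# the rarity of responders); purely illustrative.
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.9], random_state=0)

# Hold back a test set so robustness is judged on unseen data.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# Gini = 2 * AUC - 1; compare dev vs test to spot overfitting.
gini_dev = 2 * roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]) - 1
gini_test = 2 * roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]) - 1
print(f"Gini dev={gini_dev:.3f} test={gini_test:.3f}")
```

With automated modelling tools the split is often done for you, but it is still worth checking that the tool reports both figures and that they are close.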
But there is another pitfall that I’ve seen more than once in marketing teams: blindly using a model because it is statistically robust, even though the actual response rate predicted for even the top decile is low.
This is another example of the classic pitfall for analysts: looking only at relative performance (e.g. over-indexed variables) and not also at the absolute numbers (which may still be a minority). Database marketers need to apply common sense to their application of marketing models.
That is easy to say from the comfort of my armchair. I recognise that, day-to-day, leaders of such teams are under pressure to provide improved targeting for under-performing marketing campaigns. A model that outperforms ‘random’ can seem like a no-brainer to use instead of no model.
Normally the cause of this situation is historically low response to past campaigns. If response rates are really low in the data, discriminating models can only take you so far. Something else is needed. Sometimes that can be a radically different proposition (perhaps based on insight generation). But it can also be use of an alternative targeting approach to generate more responses.
Could event triggers help you?
We’ve mentioned before the power of timing in marketing communication targeting. Relevancy for consumers is often related to talking to them about something at the right time, or in the right context.
Targeting marketing based on event triggers also has the advantage of not requiring a high volume of past response data. Akin to the process of insight generation, a range of sources (including transactional analysis & consumer research) can help identify potential trigger events that could be appropriate times for specific offerings.
Robust analysis should still be undertaken to confirm such hypotheses. Do cohorts of consumers, for whom these events have occurred, demonstrate a higher response rate than others?
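One simple way to test such a hypothesis is a two-proportion z-test on response rates: the trigger cohort versus everyone else. The counts below are invented purely to illustrate the calculation:

```python
from math import erfc, sqrt

# Illustrative counts (assumed, not real data): responses among customers
# who recently experienced a candidate trigger event vs everyone else.
trigger_resp, trigger_n = 180, 4_000     # trigger cohort
other_resp, other_n = 600, 40_000        # the rest of the base

p1 = trigger_resp / trigger_n
p2 = other_resp / other_n

# Pooled two-proportion z-test for the difference in response rates.
p_pool = (trigger_resp + other_resp) / (trigger_n + other_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / trigger_n + 1 / other_n))
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

print(f"trigger={p1:.3%} other={p2:.3%} z={z:.1f} p={p_value:.2g}")
```

If the difference is both statistically significant and commercially meaningful in absolute terms, the event is a candidate trigger; if not, back to the insight generation.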
The advantage of event triggers is the potential to identify many of them, each usable for smaller targeted campaigns that should generate increased response. If sufficient response volume can be generated, you can eventually reach the ideal scenario: targeting using a trigger combined with a filtering propensity model.
Have you avoided sleep-walking into your marketing targeting methods?
Talking of sleep-walking, I’m conscious that I’ve gone on a bit in this post. I hope it’s been useful and not induced a coma-like state in you!
How are you doing regarding the above concerns? Are you clear on the purpose for your propensity model? Have modelling datasets been constructed appropriately to answer that question? Do you even have enough historical response data to construct models that will deliver sufficient Marketing ROI to be worthwhile? Would you be better to consider alternative targeting approaches in the meantime?
Please do share your experiences as well. Real life examples can help us all.
Meanwhile, best wishes with your modelling career, whatever form it takes!