It has been interesting that, after several years of excitement around the topic of “gamification”, this year more commentators have suggested it’s “game over”. I certainly agree that the concept has moved through the Gartner hype cycle into the wonderfully named “trough of disillusionment”.
However, that trough is the springboard into a stage of pragmatic realism. My experience is that it is often once technologies or ideas reach this stage that those interested simply in delivering results can begin to realise benefits, without the distraction of hype or fashion.
Even though I can see the points made in this Forbes article, I think that the evidence cited concerns a failure to revolutionise business more broadly. What has not yet been exhausted, in my view, is the potential for gamification to help with market research.
One growing issue springs to mind as needing that help: the challenge faced by any client-side researcher seeking a representative sample for a large quant study. Participation rates are falling unless research is fun, interesting and rewarding, and some of the ways agencies overcome this risk skewing samples toward “professional” research participants.
Gaining a sufficiently representative sample, one that matches a company’s own customer demographics or segments, can also be important for experimentation. This is timely for Financial Services companies seeking to experiment with behavioural economics, who need sufficient participation in tests to see the choices made in response to “nudges”. So here, too, is a need to freshen up research with methods of delivery that better engage the consumer.
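To make “sufficient participation” concrete, here is a minimal sketch of how an analyst might size a simple two-cell nudge test, written in Python using statsmodels. The baseline and uplift take-up rates are invented assumptions purely for illustration, not figures from any real test.

```python
# Illustrative sample-size check for a two-cell "nudge" experiment.
# The take-up rates below are hypothetical assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.10  # assumed take-up without the nudge
nudged_rate = 0.12    # assumed take-up with the nudge

effect_size = proportion_effectsize(nudged_rate, baseline_rate)
n_per_cell = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,  # 5% significance level
    power=0.8,   # 80% chance of detecting the uplift if it is real
    ratio=1.0,   # equal-sized nudge and control cells
)
print(f"Participants needed per cell: {n_per_cell:.0f}")
```

Runs like this make plain why falling participation rates bite: with these illustrative numbers, even a modest uplift demands nearly two thousand participants per cell.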
No doubt the hype will not be realised. But I hope that, as the dust settles, customer insight leaders will not give up on the idea of gamification as a research execution medium. Some pioneers, like Upfront Analytics and others, are seeing positive results. Let’s hope others get a chance to “play” with this.
Many commentators have recently debated the relative merits of Customer Effort Score (CES) versus Net Promoter Score (NPS). As a leader who remembers the controversy that surrounded NPS when it first came to dominance, I find the parallels concerning. I still recall the effort wasted trying to win the battle to point out the flaws in NPS and its lack of academic evidence, whilst in fact I was looking a gift horse in the mouth (I’ll explain that later). I would caution anyone currently worrying about whether or not CES is the “best metric” to remember the lessons that should have been learnt from the “NPS wars”.
For those not so close to the topic of customer experience metrics: although there are many different metrics that could be used to measure the experience your customers receive, three dominate the industry. They are Customer Satisfaction (CSat), NPS and now CES. These are not equivalent metrics, as they measure slightly different things, but all report on ratings given by customers to a single question. CSat captures emotional feeling about an interaction with the organisation (usually on a 5-point scale). NPS captures an attitude following that interaction, i.e. likelihood to recommend, on a 0-10 scale, with the percentage of detractors (0-6) subtracted from the percentage of promoters (9-10) to give a net score. CES returns to attitude about the interaction, but rather than asking about satisfaction it seeks to capture how much effort the customer had to put in to achieve what they wanted or needed (again on a 5-point scale).
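Since the NPS arithmetic trips some people up, here is a minimal worked example in Python. The response scores are invented purely for illustration.

```python
# Hypothetical batch of 0-10 "likelihood to recommend" responses,
# invented purely to illustrate the NPS arithmetic described above.
responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 3, 8]

promoters = sum(1 for r in responses if r >= 9)   # scores 9-10
detractors = sum(1 for r in responses if r <= 6)  # scores 0-6
# Passives (7-8) count in the total but not in the net score.
nps = 100 * (promoters - detractors) / len(responses)

print(f"NPS = {nps:+.0f}")  # 5 promoters, 3 detractors, 12 responses -> +17
```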
The reality, from my experience (excuse the pun), is that none of these metrics is perfect and each carries dangers of misrepresentation or oversimplification. I agree with Prof. Moira Clark of the Henley Centre for Customer Management. When we discussed this, we agreed that ideally all three would be captured by an organisation, because satisfaction, likelihood-to-recommend and effort-required are different ‘lenses’ through which to study what you are getting right or wrong for your customers. However, that utopia may not be possible for every organisation, depending on the volume of transactions and your capability to randomly vary which metrics are captured and the order of asking.
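On that practical point about randomly varying the metrics captured and their order, a small sketch illustrates the idea. The question wordings and function name here are hypothetical, not taken from any particular survey platform.

```python
import random

# Hypothetical wordings for the three metric questions discussed above.
METRIC_QUESTIONS = {
    "CSat": "How satisfied were you with this interaction? (1-5)",
    "NPS":  "How likely are you to recommend us to a friend? (0-10)",
    "CES":  "How easy was it to get what you needed? (1-5)",
}

def build_survey(max_questions=2, seed=None):
    """Randomly pick which metric questions a customer sees, and in
    what order, so no single metric or position dominates."""
    rng = random.Random(seed)
    chosen = rng.sample(list(METRIC_QUESTIONS), k=max_questions)
    return [(name, METRIC_QUESTIONS[name]) for name in chosen]
```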
But my main learning point from the ‘NPS wars’ experience, over a couple of years, is that the metric is not the most important thing here. As the old saying goes, “it’s what you do with it that counts”. After NPS won the war and became a required balanced-scorecard metric for most CEOs, I learnt that this was not a defeat but rather the ‘gift horse’ I referred to earlier. Because NPS had captured the imagination of CEOs, funding was available to capture learning from this metric more robustly than had previously been done for CSat. So, over a year or so, I came to really value the NPS programme we implemented. This was mainly because of its granularity (by product and touchpoint) and the “driver questions” we captured immediately afterwards. Together these provided a richer understanding of what was good or bad in the interaction, enabling prompt responses to individual customers and targeted action to implement systemic improvements.
Now we appear to be at a similar point with CES, and I want to caution against being drawn into another ‘metric war’. There are certainly things that can be improved about the way the proposed question is framed (I have found it more useful to reword it and capture “how easy was it to…” or “how much effort did you need to put into…”). However, as I hope we all learned with NPS, I would encourage organisations to focus instead on how they implement any CES programme (or enhance an existing NPS programme) to maximise learning and actionability. That is where the real value lies.
Another tip: learning from your existing research, including qualitative work, can help you frame additional questions to capture immediately after CES. You can then use analytics to identify which drivers correlate with effort. Having such robust, regular quantitative data capture is much more valuable than being ‘right’ about your lead metric.
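As a sketch of what identifying those correlations might look like in practice, here is a minimal example using pandas. The survey extract, scores and driver column names are all invented for illustration.

```python
import pandas as pd

# Hypothetical survey extract: CES plus driver questions framed
# from earlier qualitative research (all values invented).
df = pd.DataFrame({
    "ces":             [1, 2, 2, 4, 5, 3, 1, 5],  # 1 = low effort
    "wait_time":       [1, 2, 3, 4, 5, 3, 1, 5],  # perceived wait (1-5)
    "times_contacted": [1, 1, 2, 3, 4, 2, 1, 4],  # repeat contacts
    "staff_knowledge": [5, 4, 4, 2, 1, 3, 5, 2],  # rated knowledge (1-5)
})

# Rank correlation of each driver with CES, strongest first.
drivers = df.drop(columns="ces").corrwith(df["ces"], method="spearman")
print(drivers.sort_values(key=abs, ascending=False))
```

With real volumes you would move on to regression or proper driver analysis, but even simple correlations like these can point action in the right direction.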
What’s your experience with CSat, NPS or CES? Do you share my concerns?
Segmentation is one of those customer insight and marketing terms that divide opinion. Leaders have their favourite approaches. Boards can be ardent fans of the need for a segmentation, or complete unbelievers in what is perceived as marketing “spin”. One of the reasons for this appears to be the mixed fortunes of implementing segmentations: some companies extol the real benefits and focus that have come as a result, whilst others bemoan wasted spend with consultants and agencies.
My own experience is that appropriate segmentations can add real value and enable a clearer understanding of, and focus on, the right target audiences. But a few misconceptions need to be addressed.
The chief misconception I would cite is the belief that any company or market only needs one segmentation. One of the guiding factors in selecting the most appropriate segmentation approach is the purpose for which that model will be used. A segmentation to guide market-participation strategy is a very different challenge to one for new proposition development, or one to target different customer treatments. For this reason, it can be beneficial for a company to have more than one way of segmenting its customers (even if one is considered primary when seeking to embed it in the culture of the organisation). One analogy for this is the benefit of having a Rubik’s-cube set of segmentations for decision making.
Once the challenge of identifying the purpose of a segmentation is overcome, using incisive questioning, a CI leader needs to select the most appropriate tool for the job. Here there does appear to be a degree of fashion influencing choices over the years. Many years ago, simple demographic segmentations were popular, and they can still perform a useful function. At the height of the influence of market research teams, attitudinal segmentations were favoured; they are also more viable than many believe. Since the success of Dunnhumby and others, behavioural segmentations have taken centre stage. Directors, particularly finance directors, can favour value-based segmentations, while operations directors can favour simpler life-stage or “needs-based” segmentations.
As all these segmentations have had their day, and still have their advocates, it is not surprising to find more organisations these days with hybrid segmentations. Popular combinations appear to be value + life stage, or behavioural/trigger + value-based segmentations. Having once developed a rich attitudinal segmentation from substantial quant research, and then produced predictive models to overlay it onto a data warehouse for targeting, I regret how readily attitudinal segmentations are dismissed nowadays.
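For readers curious what “predictive models to overlay this onto a data warehouse” can involve, here is a minimal sketch of the usual approach: train a classifier on survey respondents, whose attitudinal segment is known from the research, using behavioural variables that exist for every customer, then score the whole base. The feature and column names are hypothetical.

```python
# Sketch of overlaying a survey-based attitudinal segmentation onto a
# full customer base. All feature and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

BEHAVIOURAL_FEATURES = ["tenure_years", "products_held", "channel_logins"]

def train_segment_model(surveyed: pd.DataFrame) -> RandomForestClassifier:
    """Fit on survey respondents whose attitudinal segment is known."""
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(surveyed[BEHAVIOURAL_FEATURES], surveyed["attitudinal_segment"])
    return model

def score_customer_base(model: RandomForestClassifier,
                        warehouse: pd.DataFrame) -> pd.DataFrame:
    """Predict a segment for every customer in the warehouse."""
    scored = warehouse.copy()
    scored["predicted_segment"] = model.predict(scored[BEHAVIOURAL_FEATURES])
    return scored
```

The predicted segments are, of course, approximations; their value lies in making an attitudinal lens available for targeting at full-base scale.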
However, my guidance to customer insight leaders is to be aware of as many potential approaches as possible, and then to focus your efforts on being clear about the purpose of any one segmentation. At the end of the day, it is not a ‘universal truth’ about customers; it is just a model to enable appropriate action.
I was struck by this graphic published by Visually that was trending on Twitter a couple of days ago. Although about marketers, it set me pondering about today’s hybrid Customer Insight leader.
Apart from being an eye-catching infographic that rings true as to the challenge for modern marketers, it prompts an equal or bigger challenge for insight leaders. Over the years, this role has evolved into one requiring CI leaders to have an even more hybrid mix of talents than their marketing peers.
It may seem like one of the curses of modern corporations, but org design and regular reorganisations are now a fact of business life. I’m sure, as an insight leader, you will have seen your fair share.
As you’ve risen up the hierarchy you’ve probably changed in your role with regard to these events; from recipient to author. If you haven’t experienced this then I would encourage you to seek to be an author of such change.
From my experience, two major opportunities exist for customer insight functions in this regard.
The first is to bring together the different technical areas that can best collaborate to provide deeper and more actionable insights. These include teams that are often located in different functional “silos”.
In line with my definition of Customer Insight, I would recommend bringing together: Customer Data, Analysis & Modelling, Research and Database Marketing teams. Suitably integrated, and with an outcome-focussed culture, these teams can together form an ‘Insight Engine’ that produces not just technical output but actions that result in both commercial impact and improved customer experiences.
I believe in the importance of data visualisation, both because most people can more readily understand a visual representation than tables of numbers, and because it is a useful language with which to communicate not just analysis but story. In other words, the challenge to appropriately visualise data or analysis encourages the analyst to get closer to the insights.
Anyway, I’ll blog more on that wider topic another time, for now I just wanted to share links to two agencies whose work on infographics have impressed me. If you’ve not come across them before, see if these spark any creative ideas…
There is always a risk that fashion obscures function, so I am aware that some people now equate data visualisation with infographics, which would be a mistake in my book. So, as promised, more on data visualisation to follow in a later post, with the obligatory reference to Edward Tufte.
For now, please do share your experience of infographics. Any tips?