“We use R” is a comment I used to hear only from recent graduates or analysts working in the public sector, both groups that appeared more ready to accept open source software. This does appear to be changing, though: over recent years I have heard several commercial organisations share their interest in saving licence fees by moving their analysts to R. Are you one of them? Do you have a favourite analysis software package, without which your team would not be as productive? If so, please share below.
Given the traffic this site now appears to get from customer insight leaders across the world, I thought I’d test the waters with you. Below is a quick one-question survey to help us find out and share the analytics software used in your organisation today. It will be interesting to see how much has changed from the days when large company = SAS usage (if that has changed).
I have sought below to list all the “predictive analytics” software that I know is in use today, but do feel free to add others if you use them. I have chosen this category to distinguish from the mass of business intelligence or web analytics tools without equivalent statistical capabilities.
Do let me know if this topic is of interest and I can research more fully or share my own experience.
I wonder how many of you rate storytelling as one of the most valuable skills in your analysts or data scientists. Do you?
Even writing that it seems a strange thing to say, almost an oxymoron for such quantitative roles. Surely you can’t expect these specialists to also master the humanities?
However, as I look back over the pieces of analysis which have driven most change in the businesses I’ve served, it is those which told the most compelling story that made the biggest difference.
This of course is not really surprising at all. Our researchers and all those with any social science background will tell us that storytelling is deeply embedded in human societies. Over millennia we see examples of the most important truths for one generation to pass onto another being encoded in stories. (more…)
It has been interesting that, after several years of excitement around the topic of “gamification”, this year more commentators have suggested that it’s “game over”. I certainly agree that this concept has moved through the Gartner Hype Cycle into the wonderfully named “trough of disillusionment”.
However, that is the springboard for entering into the stages of pragmatic realism. My experience is that it is often once technologies or ideas reach this stage that those interested in just delivering results can begin to realise benefits (without the distraction of hype/fashion).
Even though I can see the points made in this Forbes article, I think that the evidence cited concerns a failure to revolutionise business more broadly. What has not yet been exhausted, in my view, is the potential for gamification to help with market research.
One growing issue springs to mind as needing help: the challenge faced by any client-side researcher seeking a representative sample for a large quant study. Participation rates are falling unless research is fun, interesting and rewarding, and some of the ways agencies overcome this carry the risk of a higher skew toward “professional” research participants.
Gaining a sufficient representative sample, one that matches a company’s own customer base demographics or segments, can also be important for experimentation. This is timely for Financial Services companies who are seeking to experiment with behavioural economics and need sufficient participation in tests to see the choices made in response to “nudges”. So here, too, is a need to freshen up research with methods of delivery that better engage the consumer.
No doubt the hype will not be realised. But I hope that as the dust settles, customer insight leaders will not give up on the idea of gamification as a research execution medium. Some pioneers, like Upfront Analytics and others, are seeing positive results. Let’s hope others get a chance to “play” with this.
Since my own experience, of piloting customer treatments based on Behavioural Economics (BE) hypotheses, I have been fascinated with the subject. Having spoken at a number of Chartered Insurance Institute events and written for their Journal on the subject, I know many insurers share this interest.
This is not surprising, given the public pronouncements by the Financial Conduct Authority (FCA). They are clearly expecting insurers to engage not just with how customers should behave rationally, but with how they actually make decisions. Financial Services companies are expected to help customers avoid detriment, given these well-known biases.
Given all of that, I thought it might be interesting to see what level of maturity (with using Behavioural Economics) there is amongst our readership.
Please answer the question below if you have responsibility for customer insight and work within a UK based Financial Services company.
I will of course share what I’ve learnt from these results and conversations with others in a later post.
Many commentators have recently debated the relative merits of Customer Effort Score (CES) versus Net Promoter Score (NPS). As a leader who remembers the controversy that surrounded NPS when it first rose to dominance, the parallels are concerning. I still recall the effort wasted trying to win the battle to point out the flaws in NPS and its lack of academic evidence, whilst in fact I was looking a gift horse in the mouth (I’ll explain that later). I would caution anyone currently worrying about whether or not CES is the “best metric” to remember the lessons that should have been learnt from the “NPS wars”.
For those not so close to the topic of customer experience metrics: although there are many different metrics that could be used to measure the experience your customers receive, three dominate the industry. They are Customer Satisfaction (CSat), NPS and now CES. These are not equivalent metrics, as they measure slightly different things, but all report on ratings given by customers to a single question. Satisfaction captures emotional feeling about an interaction with the organisation (usually on a 5-point scale). NPS captures an attitude following that interaction, i.e. likelihood to recommend, on a 0-10 scale, with the percentage of detractors (0-6) subtracted from the percentage of promoters (9-10) to give a net score. CES returns to attitude about the interaction, but rather than asking about satisfaction it seeks to capture how much effort the customer had to put in to achieve what they wanted/needed (again on a 5-point scale).
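For readers who prefer to see the NPS arithmetic spelt out, here is a minimal sketch in Python. The ratings list is invented purely for illustration:

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings.

    The percentage of detractors (scores 0-6) is subtracted from the
    percentage of promoters (scores 9-10); passives (7-8) count toward
    the total but neither bucket. Result ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Invented example: 5 promoters, 3 passives, 2 detractors
ratings = [10, 9, 9, 10, 9, 8, 7, 7, 5, 3]
print(nps(ratings))  # 30.0
```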
The reality, from my experience (excuse the pun), is that none of these metrics is perfect and each carries dangers of misrepresentation or oversimplification. I agree with Prof. Moira Clark of the Henley Centre for Customer Management. When we discussed this, we agreed that ideally all three would be captured by an organisation. This is because satisfaction, likelihood-to-recommend and effort-required are different ‘lenses’ through which to study what you are getting right or wrong for your customers. However, that utopia may not be possible for every organisation, depending on volume of transactions and your capability to randomly vary the metrics captured and the order of asking.
But my main learning point from the ‘NPS wars’ experience over a couple of years is that the metric is not the most important thing here. As the old saying goes, “it’s what you do with it that counts”. After NPS won the war and began to be a required balanced scorecard metric for most CEOs, I learnt that this was not a defeat but rather a ‘gift horse’, as I referred to earlier. Because NPS had succeeded in capturing the imagination of CEOs, there was funding available to capture learning from this metric more robustly than had previously been done for CSat. So, over a year or so, I came to really value the NPS programme we implemented. This was mainly because of its granularity (by product and touchpoint) and the “driver questions” that we captured immediately afterwards. Together these provided a richer understanding of what was good or bad in the interaction, enabled prompt response to individual customers and targeted action to implement systemic improvements.
Now we appear to be at a similar point with CES and I want to caution against being drawn into another ‘metric wars’. There are certainly things that can be improved about the way the proposed question is framed (I have found it more useful to reword and capture “how easy was it to…” or “how much effort did you need to put into…”). However, as I hope we all learned with NPS, I would encourage organisations to focus instead on how you implement any CES programme (or enhance your existing NPS programme) to maximise learning and actionability. That is where the real value lies.
Another tip: using learning from your existing research, including qualitative work, can help frame additional questions to capture immediately after CES. You can then use analytics to identify correlations. Having such robust, regular quantitative data capture is much more valuable than being ‘right’ about your lead metric.
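As a sketch of the kind of correlation analysis meant here, suppose each respondent gives an effort rating plus answers to a couple of driver questions (all column names and responses below are invented for illustration):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical survey: effort plus two driver questions,
# each rated 1-5 by the same seven respondents.
responses = {
    "effort":        [1, 2, 2, 3, 4, 5, 5],
    "wait_time":     [1, 2, 3, 3, 4, 4, 5],
    "staff_clarity": [5, 4, 4, 3, 2, 2, 1],
}

for driver in ("wait_time", "staff_clarity"):
    r = pearson(responses["effort"], responses[driver])
    print(f"effort vs {driver}: r = {r:+.2f}")
```

In this made-up data, longer waits move with higher effort (positive r) while clearer staff explanations move with lower effort (negative r); in practice you would run this over real survey extracts, and correlation of course does not establish causation.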
What’s your experience with CSat, NPS or CES? Do you share my concerns?
This book has a dull cover and lacks any colour graphics within its pages. So, if you spot it, you might not be enthused. However, persistence is rewarded, as there is much customer experience and customer insight leaders can learn from this book.
Written by a couple of leaders at Forrester Research, it provides the reader with an overview of everything to consider in order to improve customer experiences. As anyone who has worked in this area will know, that’s a tall order.
Peppers & Rogers’ “Managing Customer Relationships” is usefully comprehensive but, at 481 pages, not a quick read. So, to provide this overview in only 224 pages is an achievement for Harley Manning and Kerry Bodine.
As I worked my way through this book, two major benefits emerged. The first is a set of frameworks to act as guides or checklists for action needed in different areas. First up is their Customer Experience Ecosystem Map, a useful term for ensuring you consider not just processes but also people, perspectives, culture, etc. Another is the structure of six essential customer experience disciplines, each with their own required practices (strategy, customer understanding, design, measurement, governance and culture). This risks “motherhood and apple pie”, but provides some sensible customer insight advice, especially on measurement.
The other major benefit of this book is the large number of case studies it contains, as examples of the frameworks being put into practice. Given my background and clients within the Insurance industry, it was good to see five of these alongside the many other sectors covered. Their analysis of the threats to Allstate in the US and the opportunities for Progressive is interesting, and backed up by Customer Experience Index scores to date. Aviva’s focus on mapping customer journeys in China is also interesting, with the chance in emerging markets to start with customer experience strategy at an earlier stage.
Given that I will be speaking at a conference in London next month, on the role of customer insight leaders in more senior positions than ever before, their chapter on ‘The Rise of the Chief Customer Officer’ is also interesting. Their research in the US echoes my own experience in the UK, that CCOs (or CKOs – as I am more interested in customer insight leaders) are disproportionately common within Financial Services firms. Their finding of a bias toward COOs in B2B businesses also makes commercial sense.
I hope that review was useful. I share such books because I believe the only point of generating customer insights is to act on them. This can sometimes mean delivering shorter-term commercial returns, but in the longer term the real prize is for customer insight to guide the transformative work outlined in this book: delivering, and then sustaining, significantly improved customer experiences.
This book is a relatively easy read, although at times it resembles someone talking too quickly at you. The volume of human interest stories helps, as does the use of short chapters: bite-sized chunks for reading each day. I hope you find it useful.
Please do share your experience if you’ve read this work or alternatives.
Firstly, the normal health warning on these being only interim results. There are not yet sufficient votes with which to draw robust conclusions (hence the metaphor of a deserted Southwark station).
That said, with just over 80 votes now in, the initial results of our “What do you see?” survey of customer insight leaders are showing some interesting patterns.
With regard to the scope of the term “customer insight” almost all voters view this as covering research, analysis, modelling, segmentation and marketing effectiveness measurement, together with a consultancy service. Only slightly less popular is measurement of a primary customer metric (NPS, Satisfaction or Effort). The surprise to me is that only just over half would include data management or database marketing. I am writing for the next quarterly publication of DataIQ magazine on the importance to CI leaders of data teams, so it will be interesting to see if this trend continues.
Meanwhile, with regard to current organisational design, or which elements of the above currently report into the CI leader, it’s a different story. Fewer of you voted on this, so conclusions are less robust. But for now the theme seems to be that more CI leaders have responsibility for research, NPS and marketing effectiveness measurement, while far fewer have responsibility for behavioural analysis and customer data management. So perhaps not as many companies as I had hoped have yet seen the benefits of bringing research and analysis together in one function.
It’s encouraging for my new business to see overwhelming interest in external support for CI leaders, with the most popular service being training for their customer insight team. So, time for me to get that training material ready.
Thanks again to those who participated. If you haven’t voted yet, please do and I’ll share final results once votes are high enough to feel more representative of this community.
Finally, do let me know if you have a question that you would like to ask other customer insight leaders.