helping you master customer insight leadership

How can you better influence your top table C-Suite team?


As more Customer Insight leaders rise in seniority within blue chip companies, do they have the skills to influence at the top table?

This is not just for Customer Insight Directors (CID), although that role and its American cousin (CKO, Chief Knowledge Officer) are appearing in more and more companies.

Whether or not you have risen to the seniority of being called a CID, you are hopefully finding that your executives want to hear from you. So, when you get that call or a regular appointment at the top table, what should you do?

(more…)

Why should Behavioural Economics matter to you?

Since presenting at the CII’s regional events and writing an article for its Journal, I’ve continued to hear a demand for more understanding of Behavioural Economics (BE).

It appears the majority of insurers have delegated this challenge to their risk & pricing teams, with few actively engaging their marketing and customer insight teams.

I think this is a missed opportunity, not just for better compliance with FCA expectations, but also for the commercial gains to be made from better designed communications.

That said, I suspect the majority of you have at least heard of Behavioural Economics. In recent years, the success of popular books on the subject has ensured plenty of media coverage and social media debate on its implications.

Easy-to-read introductions to the subject include “Nudge” by Richard Thaler & Cass Sunstein. More comprehensive and challenging is a classic text like “Thinking, Fast and Slow” by Daniel Kahneman. Both are well worth reading, but there are now many others to choose from.

What makes this subject of greater relevance to the Financial Services industry, however, is the influence of Behavioural Economics on the thinking of both the UK Government and the Financial Conduct Authority (FCA). Government policy is being influenced by the work of their “nudge unit”. Meanwhile, the FCA has commented that it expects companies to consider how their customers actually make decisions. (more…)

Leadership coaching for you?

Have you experienced the benefits of coaching? Years ago, UK business leaders appeared to see this as just an American business fad (from a culture that has also embraced the benefits of therapists and given us great TV like “In Treatment”). However, over the last decade more & more UK businesses have embraced executive coaching, and the academic evidence for its efficacy has grown substantially. By 2005, 88% of UK organisations reported using coaching, and by 2009 the figure for US organisations was 93%.

The next revolution in coaching for businesses is its expansion to a wider leadership population. Once the preserve of CEOs or main board members, coaching is now being extended by progressive businesses to all directors, to talent pipeline candidates or, in some cases, to whole teams across the wider organisation. My personal interest is in the benefits of coaching for today’s rising stars: Customer Insight Leaders. As I have blogged before, there is a growing trend to create Customer Insight Director or Chief Knowledge Officer roles, often for individuals who have never held C-Suite level responsibilities before. Such leaders are ideal candidates for coaching, not because of any deficits, but rather to ensure that they perform as well as possible and achieve the challenging goals of this new strategic focus.

So, what does coaching entail? Very briefly, the term covers a multitude of approaches and has many possible definitions, but most experts now agree that executive coaching can be defined as: “A relationship-based intervention. Its focus is on the enhancement of personal performance at work through behavioural, cognitive and motivational interventions used by the coach, which provide change in the client.”

That more academic definition hints at the fact that multiple models or techniques can be used, where helpful, to facilitate sessions. The qualification that I’m completing on Executive Coaching covers a range of coaching models, including: Goal-Orientated; Cognitive Behavioural; Positive Psychology and Neuro Linguistic Programming. My own experience of coaching executives has taught me that different models can be appropriate at different times, with different clients, in different organisational contexts. The most important skill is still genuine active listening, but frameworks to help guide sessions and clear goals to be achieved both help.

I’m encouraged by the positive messages being given by a number of organisations with regard to the importance of coaching (including ones as diverse as Network Rail and Mencap in this month’s “Coaching at Work” magazine). However, I have not yet seen this commitment applied to the Customer Insight leadership population. I hope that change will come and I am focussing part of my business on helping to meet that need.

Have you seen the benefits of coaching or mentoring in your leadership role? I’d love to hear more about your experience of this emerging profession.

Why you need your analysts to be storytellers and how to develop them


I wonder how many of you rate storytelling as one of the most valuable skills in your analysts or data scientists. Do you?

Even writing that, it seems a strange thing to say, almost an oxymoron for such quantitative roles. Surely you can’t expect these specialists to also master the humanities?

However, as I look back over the pieces of analysis which have driven most change in the businesses I’ve served, it is those which told the most compelling story that made the biggest difference. (more…)

Is research game for gamification?

It has been interesting that, after several years of excitement around the topic of “gamification”, this year more commentators have suggested that it’s “game over”. I certainly agree that this concept has moved through the Gartner hype cycle, into the wonderfully named “trough of disillusionment”.

However, that is the springboard for entering the stages of pragmatic realism. My experience is that it is often once technologies or ideas reach this stage that those interested simply in delivering results can begin to realise benefits (without the distraction of hype or fashion).

Even though I can see the points made in this Forbes article, I think that the evidence cited concerns a failure to revolutionise business more broadly. What has not yet been exhausted, in my view, is the potential for gamification to help with market research.

One growing issue springs to mind as needing help: the challenge faced by any client-side researcher seeking a representative sample for a large quant study. Participation rates are falling unless research is fun, interesting and rewarding, and some of the ways agencies overcome this carry the risk of a higher skew towards “professional” research participants.

Gaining a sufficient representative sample, one that matches a company’s own customer base demographics or segments, can also be important for experimentation. This is timely for Financial Services companies who are seeking to experiment with behavioural economics and need sufficient participation in tests to see the choices made in response to “nudges”. So, here too, is a need to freshen up research with methods of delivery that better engage the consumer.
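To make “sufficient participation” a little more concrete, here is a minimal sketch (in Python, using illustrative take-up figures of my own rather than numbers from any real study) of estimating how many participants each arm of a simple two-way nudge test might need:

```python
# A rough sample-size sketch for a two-arm "nudge" test comparing take-up rates.
# The baseline rate and hoped-for uplift below are illustrative assumptions.
import math
from scipy.stats import norm

def sample_size_per_arm(p_control: float, p_nudge: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per arm to detect the difference
    between two proportions with a two-sided z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    variance = p_control * (1 - p_control) + p_nudge * (1 - p_nudge)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p_control - p_nudge) ** 2
    return math.ceil(n)

# Example: a 10% baseline take-up that we hope a nudge lifts to 12%
print(sample_size_per_arm(0.10, 0.12))  # roughly 3,800 participants per arm
```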

No doubt the hype will not be fully realised. But I hope that, as the dust settles, customer insight leaders will not give up on the idea of gamification as a research execution medium. Some pioneers, like Upfront Analytics and others, are seeing positive results. Let’s hope others get a chance to “play” with this.

Customer Effort Score: remember the NPS wars

Many commentators have recently debated the relative merits of Customer Effort Score (CES) versus Net Promoter Score (NPS). As a leader who remembers the controversy that surrounded NPS when it first came to dominance, I find the parallels concerning. I still recall the effort wasted trying to win the battle to point out the flaws in NPS and its lack of academic evidence, whilst in fact I was looking a gift horse in the mouth (I’ll explain that later). I would caution anyone currently worrying about whether or not CES is the “best metric” to remember the lessons that should have been learnt from the “NPS wars”.

For those not so close to the topic of customer experience metrics: although there are many different metrics that could be used to measure the experience your customers receive, three dominate the industry. They are Customer Satisfaction (CSat), NPS and now CES. These are not equivalent metrics, as they measure slightly different things, but all report on ratings given by customers in answer to a single question. Satisfaction captures an emotional feeling about the interaction with the organisation (usually on a 5-point scale). NPS captures an attitude following that interaction, i.e. likelihood to recommend, on a 0-10 scale, with detractors (0-6) subtracted from promoters (9-10) to give a net score. CES returns to attitude about the interaction, but rather than asking about satisfaction it seeks to capture how much effort the customer had to put in to achieve what they wanted or needed (again on a 5-point scale).
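As a minimal sketch of how those three headline scores are typically calculated from the raw ratings (the column names, and the top-two-box convention for CSat, are illustrative assumptions rather than a prescribed standard):

```python
# Illustrative calculation of CSat, NPS and CES from single-question ratings.
import pandas as pd

responses = pd.DataFrame({
    "csat_1_to_5": [5, 4, 3, 5, 2],        # satisfaction, 5-point scale
    "nps_0_to_10": [9, 10, 6, 8, 3],       # likelihood to recommend, 0-10
    "ces_1_to_5":  [2, 1, 4, 2, 5],        # effort required, 5-point scale
})

# CSat: often reported as the % of customers scoring 4 or 5 ("top two box")
csat = (responses["csat_1_to_5"] >= 4).mean() * 100

# NPS: % promoters (9-10) minus % detractors (0-6); passives (7-8) are ignored
promoters = (responses["nps_0_to_10"] >= 9).mean()
detractors = (responses["nps_0_to_10"] <= 6).mean()
nps = (promoters - detractors) * 100

# CES: commonly the mean effort rating (lower = easier)
ces = responses["ces_1_to_5"].mean()

print(f"CSat {csat:.0f}%, NPS {nps:.0f}, CES {ces:.1f}")
```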

The reality, from my experience (excuse the pun), is that none of these metrics is perfect and each carries dangers of misrepresentation or over-simplification. I agree with Prof. Moira Clark of the Henley Centre for Customer Management. When we discussed this, we agreed that ideally all three would be captured by an organisation. This is because satisfaction, likelihood-to-recommend & effort-required are different ‘lenses’ through which to study what you are getting right or wrong for your customers. However, that utopia may not be possible for every organisation, depending on the volume of transactions and your capability to randomly vary the metrics captured and the order of asking.

But my main learning point from the ‘NPS wars’ experience over a couple of years is that the metric is not the most important thing here. As the old saying goes, “it’s what you do with it that counts”. After NPS won the war and began to be a required balanced scorecard metric for most CEOs, I learnt that this was not a defeat but rather a ‘gift horse’, as I referred to earlier. Because NPS had succeeded in capturing the imagination of CEOs, there was funding available to capture learning from this metric more robustly than had previously been done for CSat. So, over a year or so, I came to really value the NPS programme we implemented. This was mainly because of its granularity (by product & touchpoint) and the “driver questions” that we captured immediately afterwards. Together these provided a richer understanding of what was good or bad in the interaction, enabled prompt responses to individual customers & targeted action to implement systemic improvements.

Now we appear to be at a similar point with CES, and I want to caution against being drawn into another ‘metric war’. There are certainly things that can be improved about the way the proposed question is framed (I have found it more useful to reword it and capture “how easy was it to…” or “how much effort did you need to put into…”). However, as I hope we all learned with NPS, I would encourage organisations to focus instead on how you implement any CES programme (or enhance your existing NPS programme) to maximise learning & actionability. That is where the real value lies.

Another tip: using learning from your existing research, including qualitative work, can help you frame additional questions to capture alongside CES. You can then use analytics to identify correlations. Having such robust, regular quantitative data capture is much more valuable than being ‘right’ about your lead metric.
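As a hedged illustration of that analytics step, the sketch below ranks some driver ratings by how strongly they correlate with CES; the driver names and figures are invented for the example:

```python
# Sketch: which hypothetical "driver" questions move together with CES?
import pandas as pd

survey = pd.DataFrame({
    "ces_1_to_5":       [1, 2, 4, 5, 2, 3, 1, 4],
    "wait_time_rating": [1, 2, 4, 5, 2, 3, 2, 4],   # hypothetical driver
    "clarity_rating":   [2, 2, 3, 5, 1, 3, 1, 5],   # hypothetical driver
    "handoffs_rating":  [1, 3, 4, 4, 2, 4, 1, 3],   # hypothetical driver
})

# Spearman rank correlation copes better than Pearson with ordinal rating scales
correlations = (
    survey.corr(method="spearman")["ces_1_to_5"]
          .drop("ces_1_to_5")
          .sort_values(ascending=False)
)
print(correlations)  # drivers most associated with high effort appear first
```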

What’s your experience with CSat, NPS or CES? Do you share my concerns?