Whether you're using the Needs Matrix, Service Scorecard or Initiative Scorecard, you will inevitably encounter situations where people can't agree on a score for something.

You're likely to find that the question you're using the Method to answer is a variation of "what should we do next?". There will almost certainly be some unknown unknowns.

This is perfectly natural. A general principle for working with the Method is that "evidence beats opinion".

So, if it's possible to refer to existing evidence, that's great. In most instances, though, the very fact that you are working through the Method with an organisation implies that evidence might not be easily accessible. For example, in a coaching situation you may be dealing with social and emotional needs rather than purely functional ones; or the organisation may be entering a new market, creating a new product or service, or the people it cares about may have changed radically.

For the Initiative Scorecard, the range of scores is kept deliberately small in order to make decision-making simpler. Participants tend not to debate scores on the Service Scorecard much. Where there is often disagreement is in assigning relative weights to "customer" types when creating a Needs Matrix.

We've seen a number of effective approaches when there's strong disagreement among participants:

  • let the debate run until some form of agreement is reached - this is most useful if one of your objectives as a facilitator is to learn about the organisation

  • encourage scenario planning - to move past deadlock, suggest that a particular set of relative values be used so the session can progress, then return to the contested element once you have a Needs Matrix and change some values to demonstrate the impact on prioritisation

  • seek evidence - agree to disagree until some form of proof can be found. Ask the participants how this particular question might be answered with evidence, then agree on a set of temporary scores in order to proceed

  • sticker voting - give participants stickers (or your preferred equivalent), let them score individually, and take the total. It's a weak option: it tends to produce bland results and is riddled with bias, but if you're really stuck it can break a deadlock

  • overrule - use as a last resort if the CEO (or someone in a similar position) hasn't been involved in the session. Ask for their interpretation of the relative values. They'll find it very interesting that there's disagreement within their senior team
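The scenario-planning approach can be sketched in a few lines. The matrix structure, need names, customer types, and all scores below are hypothetical examples, not taken from the Method's materials; the point is simply that swapping in an alternative set of relative weights can flip the prioritisation, which is often enough to unstick a debate.

```python
# Hypothetical per-need scores for two customer types
# (rows: needs, columns: customer types).
scores = {
    "fast onboarding": {"buyers": 3, "end users": 9},
    "detailed reporting": {"buyers": 8, "end users": 2},
}

def prioritise(weights):
    """Rank needs by the weighted sum of their per-type scores."""
    totals = {
        need: sum(weights[t] * s for t, s in per_type.items())
        for need, per_type in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Scenario A: weight both customer types equally.
print(prioritise({"buyers": 1, "end users": 1}))
# → ['fast onboarding', 'detailed reporting']

# Scenario B: the contested alternative, weighting buyers more heavily.
print(prioritise({"buyers": 3, "end users": 1}))
# → ['detailed reporting', 'fast onboarding']
```

Running both scenarios side by side shows participants exactly what their disagreement about weights would change, and whether it changes anything at all.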

Given the breadth of industry application of the Method, we'll keep updating these materials with the experiences of practitioners.
