Measuring NFP Measures

Arguably, after your non-profit board’s responsibility for setting direction for your organisation, the monitoring and evaluation of performance and conformance are its two next most significant roles and activities.

Central to these monitoring and evaluation activities is the use of metrics and indicators (see Locus Focus Vs Hocus Pocus for distinctions between these terms), and this is reflected in the MELD governance model which has been the subject of previous posts (see links below).

The title of this post uses both noun and verb forms of the word ‘measure’, a homograph (a word sharing the same spelling and pronunciation as another, but with a different meaning) that has the following meanings:

Measure (verb) – to assess the importance, effect, or value of something; to judge someone or something by comparison with a certain standard
Measure (noun) – a plan or course of action to achieve a particular purpose

Hence our title can be read to mean ‘assessing the importance, effect, or value of plans and courses of action related to our organisational purposes’.

Receiving and considering benchmark and status reports, analyses, and advice is a feature of every board meeting. Depending on the history of your organisation, the focus, format, and validity of the measures you are using for evaluation may not have been reviewed and affirmed for some time. Indeed, while you may use a standard set of metrics and indicators, the formulation of a Monitoring and Evaluation Framework may not have been addressed.

Measuring in the MELD Model

In the MELD Reflective Governance Model, measuring is the first key element.

We measure when we:

  • assess stakeholder and business needs which define our purposes and focus of activity
  • consider the potential impact of changing economic, political, legal, technological, environmental and social circumstances on the organisation
  • assess performance and compliance
  • check on our key metrics – solvency, growth, compliance – usually via a dashboard customised for our business model and organisational type
  • use performance indicators to take stock of how well our strategic initiatives are achieving, or have achieved, the goals and objectives we specified
  • assess risk severity and likelihood
  • complete root cause analyses as part of an incident response process
  • select new staff, contractors and suppliers
  • review the performance of staff – both in terms of their satisfying requirements of their key accountabilities (ongoing), and also the achievement of key performance indicators, usually aligned with current strategic targets (annual)
  • review the achievement of service delivery standards by third parties
  • use management reports to monitor workflows, business processes, network status, and client engagement levels
  • and so on …
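As an illustration of the dashboard-style key metrics mentioned above, the sketch below computes two common figures (a solvency ratio and year-on-year growth). The function names, figures, and the choice of metrics are hypothetical, not part of the MELD model itself:

```python
# Hypothetical board-dashboard metrics: a minimal sketch only.
def solvency_ratio(current_assets: float, current_liabilities: float) -> float:
    """Current assets divided by current liabilities (above 1.0 is generally healthier)."""
    return current_assets / current_liabilities

def revenue_growth(previous: float, current: float) -> float:
    """Year-on-year growth as a fraction, e.g. 0.08 means 8% growth."""
    return (current - previous) / previous

print(round(solvency_ratio(150_000, 100_000), 2))   # 1.5
print(round(revenue_growth(400_000, 432_000), 3))   # 0.08
```

A real dashboard would, of course, be customised to your business model and organisational type, as noted above.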

Selecting the right measuring tool

Depending on the nature of the matter or activity to be measured, different types of measuring techniques and methods will be required. Given the broad international and cross-sectoral audience for these posts, the very ‘high-level’ categories of key metrics and performance indicators have been used. There are, of course, much more specific types of measuring instruments and activities that could be identified by specialists (and governing boards) in every field.

Many non-profits also require qualitative methods alongside their quantitative measuring methods. Sometimes the distinctions between these methods can become blurred, especially where subjective opinions are converted into numeric tables and graphs. The chart below may help you to distinguish between methods suitable for addressing different governance purposes.

Measuring and Evaluation – inseparable partners

Measuring occurs all the time, but of course it does not occur in isolation: it is essentially an enabler of evaluation, the next element in the MELD model. Many organisations have recognised this integral relationship by adopting a monitoring and evaluation framework.

For some focal areas, the measuring and evaluation occur simultaneously. The instant some finite benchmark has been exceeded (or not reached, in the case of baselines), an automatic evaluation of the appropriate response is triggered. Think smoke detectors setting off alarms, notifying first responders, and initiating evacuation procedures.

For other focal areas, the trend line towards a trigger for board or management intervention develops more slowly over time. Usually, the board will have a sense of where metrics and indicators sit in the green (acceptable), amber (take precautions), or red (remedy) zones, because it has defined risk tolerance and appetite levels and established escalation thresholds. As those thresholds are approached or breached, management reporting and recommended actions are required.
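The green/amber/red zoning described above can be sketched as a simple threshold check. The threshold values here are invented for illustration; in practice they would come from your board’s own risk appetite statement:

```python
# A minimal green/amber/red classifier for a metric where higher is worse
# (e.g. staff turnover). Thresholds are hypothetical examples only.
def rag_zone(value: float, amber_threshold: float, red_threshold: float) -> str:
    """Return 'green', 'amber', or 'red' for the given metric value."""
    if value >= red_threshold:
        return "red"      # remedy: escalate with recommended actions
    if value >= amber_threshold:
        return "amber"    # take precautions: report to the board
    return "green"        # acceptable: routine monitoring

# e.g. an 18% turnover rate against hypothetical 15% / 25% thresholds
print(rag_zone(0.18, amber_threshold=0.15, red_threshold=0.25))  # amber
```

The same pattern, with the comparison reversed, handles baseline metrics where falling below a floor is the trigger.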

Future posts

A Monitoring and Evaluation (M&E) Framework establishes the ‘logic’ of the inputs, activities, indicators, targets, outcomes, and impacts of your strategic and operational work. Some of the key considerations in the development and use of such a framework will be explored in future posts.
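One way to picture the ‘logic’ chain of an M&E framework is as a single row linking inputs through to impacts. The row below is a wholly hypothetical example (a real framework would carry many indicators per outcome):

```python
# One illustrative row of an M&E framework 'logic' chain.
from dataclasses import dataclass

@dataclass
class MELine:
    inputs: str      # resources committed
    activity: str    # what is done with them
    indicator: str   # what is measured
    target: str      # the benchmark or baseline
    outcome: str     # the change sought
    impact: str      # the longer-term difference made

line = MELine(
    inputs="volunteer hours, grant funding",
    activity="weekly tutoring sessions",
    indicator="% of enrolled students attending",
    target=">= 80% average attendance",
    outcome="improved literacy test scores",
    impact="better long-term educational outcomes",
)
print(line.indicator)
```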

Other posts will explore aspects of the themes highlighted in the header image above:
M1 – Key Metrics
M2 – Performance Indicators
M3 – Data Quality
