Monitoring and evaluation: what’s the difference?

We may be familiar with the terms 'monitoring' and 'evaluation', but what's the difference? Many small charities and community organisations collect and use monitoring information to report to funders, but may be missing out on evaluation.

In this guest blog, Angela Schlenkhoff-Hus, programme director at Coalition for Efficiency, explores the differences between monitoring and evaluation through a small charity case study and suggests tips for better practice.

 

The world of impact measurement contains a lot of terminology, making it hard to know what to do when terms aren’t well defined or understood.

The difference between ‘monitoring’ and ‘evaluation’ is one such example.

 

So what’s the difference between monitoring and evaluation?

Inspiring Impact defines 'monitoring' as "the information your organisation collects routinely through delivering your services - for example application forms, registers, client management systems".

Evaluation, meanwhile, is "using information from monitoring and elsewhere to judge the performance of an organisation or project".

In tandem, they are essential management tools for tracking progress, ensuring projects are having the desired effect and enabling organisations to make decisions on present and future delivery of activities.

Monitoring is the systematic, routine and ongoing collection of delivery data.

Evaluation is the periodic (e.g. quarterly, half-yearly, annual) review of the performance of a project or organisation, including the monitoring data.

Monitoring measures whether things are on track in terms of targets; evaluation tells you how your project or organisation is performing against its aims and objectives.

Monitoring provides information on the current status of a project and therefore enables an organisation to make ‘in-flight’ adjustments if things aren’t going according to plan.

Evaluation provides lessons learned and recommendations for improvements and growth.

Usually, monitoring focuses on quantitative data (the numbers) relating to inputs, activities and outputs.

Evaluation can combine quantitative and qualitative (the stories) data to shine a light on outcomes and impact.

Monitoring looks at the details of delivery, whereas evaluation focuses on the bigger picture.


Monitoring vs evaluation at a glance:

  • Monitoring: systematic, routine and ongoing collection of delivery data. Evaluation: periodic review of the performance of a project or organisation.
  • Monitoring: measures whether things are on track and meeting targets. Evaluation: tells you how your organisation is performing against its aims and objectives.
  • Monitoring: provides information on the current status of a project, enabling 'in-flight' adjustments. Evaluation: provides lessons learned and recommendations for improvements and growth.
  • Monitoring: usually focuses on quantitative data. Evaluation: can combine quantitative and qualitative data.
  • Monitoring: focuses on the details of delivery. Evaluation: focuses on the bigger picture.

 

 

An example from an employment charity

This hypothetical charity supports adults in Camden who are experiencing long-term unemployment through a series of weekly workshops covering subjects such as job searching, CV writing and job interviews.

Its staff support 20 clients at any one time.

 

How the charity uses monitoring data

The charity collects the following monitoring data through its application form:

  • demographic information (e.g. age, gender, home postcode)
  • where clients have been referred from
  • educational attainment and qualifications
  • previous work experience and employment history
  • the type of job clients are searching for.

The charity also gathers data on clients’ engagement with the programme by monitoring weekly attendance.

What the charity draws from this data is whether it is:

  • reaching the right audience for its programme (e.g. Camden-based clients, long-term unemployed)
  • attracting a range of clients or whether some are struggling to access its services (e.g. due to childcare challenges)
  • able to retain clients for the lifespan of the programme or whether some clients are dropping out.

It can also see whether its referral partnerships are efficient.

 

How the charity evaluates its programme 

To help the charity evaluate its programme, staff collect data at the start of the programme using a combination of a questionnaire and an informal conversation with clients.

They gather quantitative and qualitative data on:

  • challenges clients face
  • their hopes and expectations for the programme
  • how they score themselves against statements about, for example, their confidence levels, their outlook on the future, and a range of employability and transferable skills.

At the end of the programme, clients are asked to complete a questionnaire scoring themselves against the same statements.

They are also asked to provide feedback on whether their expectations were met by the programme.

In the last session, staff also include a brief exercise called 'Stop, Start, Continue', asking all clients to write on post-it notes what they are going to stop doing, start doing or continue doing as a result of what they learned on the programme.

This data helps the charity evaluate the performance of the programme in relation to its mission, as well as the impact on its clients, in at least three ways:

  1. Comparing clients' scores against the statements about their skills and current mood at the beginning and end of the programme gives staff an idea of whether the programme worked for clients and what change it achieved.
  2. The Stop, Start, Continue exercise helps staff understand what changes clients plan to make beyond the lifespan of the programme.
  3. Clients' responses on whether their expectations were met help the charity evaluate whether its approach is working for the clients it supports.

Additional feedback data

Alongside this monitoring and evaluation data, staff also collect informal feedback from the clients about how they find individual workshop sessions, the workshop delivery style, the handouts, the group dynamic and the programme overall.

This, together with a suggestions and comments box in the charity's office where the workshops are held, provides valuable information for making changes as the programme is delivered.

 

What is good monitoring and evaluation practice?

At Coalition for Efficiency, we have identified a few key ingredients for building a robust monitoring and evaluation practice:

A Theory of Change: 

This is a tool to help you understand what change looks like for your organisation – which then feeds into your monitoring and evaluation framework. To get started on developing a theory of change, have a look at this blog.

A learning culture:

Planning how your organisation is going to learn from monitoring and evaluation is vital, as is being receptive to change as an organisation.

An open learning culture in which staff and volunteers feel they can make suggestions and where a project’s poor(er) performance isn’t seen as personal failure is crucial for learning and moving forward.

Reliable data collection methods:

There needs to be a clear plan on what kind of data is going to be collected, using what tools, when, how often and by whom – and it requires consistent implementation.

This should also cover how you are going to use the information and how you are going to report back, internally and externally.

Using measurement for management:

Are you making use of data to drive decisions within your organisation?

Strong leadership:

This doesn’t refer to a top-down approach; quite the contrary.

Strong leadership is needed to ensure all parts of an organisation are involved in the development and delivery of monitoring and evaluation practices and that stakeholders are also included.

Leadership plays an enabling role, overseeing the process.

Strong board governance:

The central role of the board is to ensure the organisation is achieving its social purpose.

Regular review of monitoring and evaluation data enables trustees to carry out this role with more confidence.

 

Are we learning to improve?

Increasingly, monitoring and evaluation (M&E) is becoming MEL: monitoring, evaluation and learning.

Many organisations that approach us for support want to develop or improve their monitoring and evaluation in order to demonstrate the effectiveness of their approach, for example to attract more funding or to raise awareness. But learning is also a fundamental function of monitoring and evaluation.

Following a hunch without proper analysis of what is actually happening on the ground is risky and can lead to the wrong decisions being made.

Learning is the process by which information gathered through monitoring and evaluation is reviewed and then intentionally used to continuously improve and adapt a project’s or organisation’s ability to achieve its aims and objectives.

If this learning is also shared externally, it enables the development of best practice and multiplies the impact for beneficiaries.

 

Need some help with monitoring and evaluation?

If you would like to talk to us about how CfE might be able to support your organisation with your monitoring and evaluation practice, please send an email to angela@cfefficiency.org.uk

 

Recommended resources

Have a look at Inspiring Impact’s blog on developing your impact practice.

Or New Philanthropy Capital’s blog on five types of data, which is really useful for understanding the data you are likely to be collecting and what it means for your everyday practice.

 

Follow your hunch

Got a hunch about why something is or isn't working but need some help exploring your data? Submit your challenge to us today.