Counting the benefit; making sure the benefits count

  • 23 July 2013 | Mokibelo Ntshabaleng, Monitoring and Evaluation Specialist at Tshikululu | Insight

Since 2009, the development sector has made a notable shift in the monitoring and evaluation (M&E) practices of corporate social investment (CSI) programmes.

Trialogue reported in its 2012 CSI Handbook that the monitoring of programmes has improved, from the simple tracking of expenditure to tracking output and outcome indicators and conducting site visits to funded projects.

While attention to accountability for the results of social investments continues to increase, we have yet to see significant strides in the evaluation component of M&E.

Evaluation refers to the analysis of data gathered through monitoring in order to determine the extent to which a CSI programme is effectively carrying out planned activities, fulfilling stated objectives and achieving anticipated results. Evaluation takes the raw data assembled through careful monitoring and gives them context and meaning, so that we can make reasoned judgements about what explains the results.

When CSI managers, together with development champions on the ground, tackle development issues, there is every likelihood that, for a variety of reasons, they will not achieve results exactly as initially envisaged. When evaluation results arrive, think of them as an opportunity to reflect on what works and what does not, and to learn about the factors contributing to both. Evaluation results are a diagnostic tool, used to keep strategies relevant and appropriate.

CSI managers must consciously manage the emotions that might be at play when evaluation results finally become available. Most importantly, it should be stressed that evaluation is not a verdict of “success” or “failure”. All that evaluation results determine is the nature of the next round of implementation.

Perhaps the worst use of evaluation results is to “punish” those considered ineffective in implementing projects, or even to withdraw funding from projects following unanticipated and disappointing results. In order to learn constructively from evaluation results, the following guidelines should be kept in mind:

  • Share evaluation results with the projects concerned, reflecting on both the effective and ineffective strategies that contributed to meeting, or failing to meet, objectives and goals

  • Consider variations in strategy that might need to be adopted to improve the “grey” areas and further strengthen areas that are successful

  • Identify the short-, medium- and long-term actions required to achieve the desired results

  • Identify the strengths and limitations of each partner – both the CSI manager or funder and the project – in terms of implementing the identified strategies

  • Agree on the actions required, the resources and time frames involved, and the results expected

  • Implement the revised activities and continuously reflect on them to assess whether the results change.

Be prepared to do business unusual

Evaluation results may suggest changes in implementation strategies and funding focus areas. In these instances, the CSI funder and funded organisation should be prepared to part ways with the familiar manner of operating their programme. There will often be reluctance and anxiety regarding change, as it takes everyone out of their comfort zones, so how change is managed is critically important.

Changes to programme structure, objectives or personnel can be introduced in phases instead of overhauling the whole programme at once. Setting short-, medium- and long-term alterations might prove less overwhelming for those involved in implementation. The communication of proposed changes to a project is important, as is the assessment of the strengths and willingness of each partner to adapt to these changes.

By taking stock of CSI programmes through evaluation, we learn about the factors contributing to either the ineffectiveness or effectiveness of programmes, and account for the monetary investment made by relating it to the impact of the CSI programme on its intended beneficiaries. Tshikululu defines impact as “positive and negative changes produced by a developmental intervention, directly or indirectly, intended or unintended”.

Positive impact is the higher-level objective to which the outcomes of social investment programmes are expected to contribute, and suggests whether, and by how much, an intervention has influenced a community or society over time. Impact is, for example, an indication not simply of whether an intervention helped improve learner test scores, but of whether improved test scores helped those learners earn better-paying jobs, and contribute more to the economic development of their communities. Impact takes time to reveal itself, and this means that we sometimes need to be patient in understanding what our impact has been.

It goes without saying that we’d prefer social investment impact to always be positive, producing lasting change for the direct beneficiaries of a programme, and ultimately also for their families and wider communities. Our intention is that the social investment strategies that we implement maximise the positive value to both the investor and beneficiaries.

With some careful forethought and planning, we believe that any social investor can achieve meaningful, lasting and positive impact. But development is not like setting into motion a chain of cascading dominoes – we can’t expect to make a donation and sit back as a sequence of events leads inevitably to a positive result.

Rather, Tshikululu believes that achieving impact is a cyclical process that requires funders to be proactive and engaged in the initiatives that they support, and to promote a culture of continuous learning among their development partners. This process will have some of the following characteristics:


Plan

At the outset of each programme cycle, both the funder and beneficiary organisation should undergo a thoughtful process of planning and calibration: what exactly are they trying to achieve? What are the best ways to get there? Are there other models that might work better?

Tshikululu also believes that programme plans should incorporate, or be accompanied by, monitoring and evaluation plans that set out an initiative’s goals and objectives, define the indicators that will be used to measure the programme’s success, and establish “best case” targets for what can be achieved.

Tshikululu recommends applying the “SMART” principle when developing objectives, selecting indicators and establishing targets – i.e. these should be specific, measurable, achievable, realistic and time-bound.
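To make this concrete, here is a minimal sketch in Python of what a SMART-style indicator and its “best case” target might look like in practice. The field names, the education indicator and every figure here are invented for illustration; they are not drawn from any actual Tshikululu plan.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Indicator:
    """One M&E indicator with a 'best case' target (hypothetical fields)."""
    name: str        # specific: what exactly is being measured
    unit: str        # measurable: the unit of measurement
    baseline: float  # the starting value at the outset of the cycle
    target: float    # the 'best case' target agreed between the partners
    deadline: date   # time-bound: the date by which the target should be met

def is_smart(ind: Indicator) -> bool:
    """Crude check that an indicator is specific, measurable and time-bound.
    Achievability and realism still require human judgement."""
    return (bool(ind.name and ind.unit)
            and ind.target != ind.baseline
            and isinstance(ind.deadline, date))

# An invented education indicator, purely for illustration
maths_pass_rate = Indicator(
    name="Grade 12 mathematics pass rate at partner schools",
    unit="percent",
    baseline=48.0,
    target=60.0,
    deadline=date(2015, 12, 31),
)
assert is_smart(maths_pass_rate)
```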


Implement and monitor

As a beneficiary organisation implements a programme, it should also be monitoring its work and collecting data that document its performance. While this information will be reported to the funder on a periodic basis, the results should have immediate value to the implementing organisation as a management tool to track the pace and progress of the programme’s activities.
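As a rough illustration of monitoring as a management tool, the sketch below pro-rates the target over the programme cycle to gauge whether an indicator is on pace. It reuses the hypothetical indicator above, and all dates and readings are invented:

```python
from datetime import date

def pace(actual: float, baseline: float, target: float,
         start: date, deadline: date, today: date) -> float:
    """Fraction of the pro-rated (time-adjusted) target achieved so far.
    1.0 means exactly on pace; below 1.0 means behind schedule."""
    elapsed = (today - start).days / (deadline - start).days
    expected = baseline + elapsed * (target - baseline)
    return (actual - baseline) / (expected - baseline)

# Invented mid-cycle reading: the pass rate has moved from 48% to 52%,
# halfway through a two-year cycle that aims to reach 60%.
print(pace(actual=52.0, baseline=48.0, target=60.0,
           start=date(2014, 1, 1), deadline=date(2015, 12, 31),
           today=date(2015, 1, 1)))  # about 0.67: behind the required pace
```

A reading like this does not say why a programme is behind; that is the work of evaluation. But it gives the implementing organisation an early, quantified prompt to investigate.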


Assess

At the conclusion of each programme cycle – and at the end of the programme – Tshikululu encourages all development organisations to engage in a formalised, self-critical assessment of their work. Assessments should compare outputs and outcomes against the targets that were set at the outset, as well as highlight the strengths and weaknesses of the programme and identify areas for improvement going forward.
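A minimal sketch of that comparison step, under the same assumptions as the earlier examples (invented indicator names and end-of-cycle figures):

```python
def assess(results: dict[str, tuple[float, float]]) -> dict[str, str]:
    """End-of-cycle assessment: compare each indicator's final value
    with its target and flag it as a strength or an area to improve."""
    return {name: ("strength" if actual >= target else "area for improvement")
            for name, (actual, target) in results.items()}

# Invented final figures: (actual, target) for each indicator
print(assess({
    "Grade 12 maths pass rate (%)": (61.0, 60.0),
    "learners reached": (850.0, 1000.0),
}))
```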

For longer-term initiatives, funders and beneficiary organisations should undertake rigorous, structured programme evaluations at regular, pre-determined intervals – typically every two to three years. Often implemented by an independent third party, evaluations seek to contextualise performance, identify and understand impact, and recommend corrective action where necessary.


Report

At the close of the programme cycle, both funder and recipient should document their achievements and lessons learned as a basis for improving implementation going forward.

Great reports are as forward-looking as they are retrospective, capturing key lessons and making recommendations to improve the overall programme.

An effective report should be self-critical (how did programme performance compare to programme goals?), inclusive (what do relevant constituents and beneficiaries say about the programme?), diagnostic (what inhibited programme performance, and what can be done to improve it?), and prescriptive (what changes or inputs are necessary to improve the programme?).

Reports are often treated as an end point: the end of a programme year, the fulfilment of a grant obligation, and the close of a relationship. In the case of once-off grants, this attitude may be appropriate. But generally, Tshikululu believes that multi-year, programmatic relationships demand a different kind of report.

In geometric terms, Tshikululu believes that a report should not serve as an end point, but as a node – a point at which direction changes. It is a step in a multi-year cycle of project planning, implementation, assessment and review – and, ultimately, in a process of revision and renewed planning.

The purpose of evaluation should not be to bring unnecessary or undue attention to programme shortcomings. Rather, its purpose should be to inform and improve programme planning, with the goal of improving the effectiveness and maximising the development impact of the initiative in the future.
