How can we capture and understand the true extent of the impact that aid channelled through challenge funds can have on poverty reduction? This question lies at the heart of my current job as a University of Bath ‘Knowledge Transfer Partnership (KTP) Associate’ working with the firm Triple Line Consulting on the management and evaluation of challenge funds as a way of supporting business-led innovation, job creation and poverty reduction. Last month I attended two events in Bangkok on private sector development (PSD) results measurement, and this blog shares some of my reflections on them.
The two events were a training course on PSD results measurement (hosted by Hans Posthumus Consulting) followed by a global seminar on results measurement in practice hosted by the Donor Committee for Enterprise Development (DCED).
The DCED has developed a results measurement standard, now in its sixth version, which centres on eight elements necessary for a successful monitoring and results measurement (MRM) system. The ‘Standard’ is increasingly seen as guiding best practice in assessing PSD activities. Programmes implementing it begin by developing a results chain, which describes the programme logic and the results the programme expects to achieve. The results chain has associated indicators, which form the basis of an MRM system: programmes are required to monitor their progress, expected to assess attribution claims, and encouraged to record wider systemic changes. Challenge funds such as the recently concluded AusAID Enterprise Challenge Fund and the African Enterprise Challenge Fund have been experimenting with the application of the Standard.
The training course: This provided participants with practical results measurement training in accordance with the guidelines of the DCED. Topic areas included: results chains, indicators, projections, measurement methods and tools, how to use results (including aggregation of results), the DCED Standard compliance criteria and control points, real-life applications of the Standard, and considerations in making an MRM system function efficiently. There were 20 participants from 10 countries and 14 different organisations. Participants were mostly programme field staff hoping to implement the Standard, along with some consultants.
Key learning points:
- A project is fully DCED compliant when it meets eight criteria: (1) articulating the results chain; (2) defining the indicators of change; (3) measuring changes in the indicators; (4) estimating attributable changes; (5) capturing systemic changes; (6) tracking programme costs; (7) reporting the results; and (8) managing the results measurement system.
- All of the above control points except number 5, capturing systemic changes, must be satisfied for an intervention to be DCED Standard compliant. Although it features in the list of control points, an intervention can still be Standard compliant without measuring systemic change, if doing so is beyond the capability or resources of the intervention’s management.
- The results chain presents the logic of the intervention, following its activities through market trigger and uptake, enterprise performance and sector growth, up to the intended overall impact of reaching target beneficiaries and reducing poverty. Each box of the results chain must have at least one indicator to measure change, and the chain must also tackle questions such as attribution and systemic change.
- The DCED Standard is useful to bear in mind when developing a rigorous MRM system. It is not an evaluation tool; it only guides the monitoring and results measurement process.
- Once a programme has an MRM plan in place, the plan should be shared with and understood by project staff, and updated regularly to reflect any changes in the intervention.
- The business model should form the basis of the market-trigger and market-uptake steps in the results chain. Market trigger should focus on partners’ activities; market uptake should centre on uptake by beneficiaries. Two kinds of systemic change take place here: crowding-in and copying behaviour.
- Copying can be assessed at the business performance level: enterprises at the target level copy behaviours from enterprises influenced by service providers who were in turn influenced by programme activities. Crowding-in should be measured at the market trigger level: enterprises at levels other than the target level copy behaviours from those influenced by programme activities, or enter an emerging market shaped by programme activities.
- A reliable MRM plan should consider the following questions before it is finalised: What other factors may influence the changes at the uptake, performance, sector growth and poverty levels? How will attribution be assessed? How and when should the baselines for the various levels of the results chain be developed? What changes should be measured during the intervention? How and when will copying and crowding-in be measured? How and when will the final impact be assessed?
The DCED Global Seminar
This was the second global seminar organised by the DCED, and it aimed to bring together member agencies and practitioners interested in using the Standard (as well as those already using it), in order to share and generate new thinking on results measurement in general and to examine the Standard in particular. Participants included a mix of field practitioners, consultants and representatives of six DCED member agencies. They came from 36 countries, representing 55 organisations, field programmes and governments. SIDA, the SDC and the Austrian Development Cooperation were the international donors with representatives present. The seminar was a three-day event with four daily sessions, each with two presentations followed by an opportunity for Q&A from the audience. These presentations covered the following range of topics:
- An overview of the DCED and results measurement;
- Good practice in designing new programmes and managing existing ones;
- Experiences from adopting the DCED standard during implementation;
- Experiences of applying the DCED standard in particular contexts;
- The audit process and evaluation; and
- Using the Progress out of Poverty Index in monitoring results.
Key learning points:
- There seems to be a shared understanding of what the Standard is and what compliance entails, so future seminars could perhaps centre on how projects understand and capture systemic change and attribution, as well as how to move forward on evaluating DCED Standard compliant projects.
- More qualitative research should be done into how to create “a culture of results measurement”. One of the conditions for being DCED Standard compliant is that all programme field staff become active in collecting and monitoring results. Many presenters mentioned that they had cultivated this culture in their programmes, but did not really articulate how.
- The DCED Standard is not perfect, but one of its key selling points is that it encourages development practitioners to discuss best practice in results measurement, based on their own experience and on the feedback of colleagues and development professionals worldwide.
- There is no standardised cost for implementing a DCED compliant MRM system, but the figures mentioned varied from 10% to 20%, with the explanation that the cost often depends on the size of the programme and on whether it is well-established or a pilot being assessed for scaling up.
- There are myriad software systems in use by those implementing the Standard to collect, collate and analyse results measurement data, ranging from free or low-cost online software to extremely costly, customised software.
- The latest version of the Reader on Results Measurement was also disseminated at the seminar; it provides a very comprehensive overview of the latest thinking on the DCED Standard for results measurement.
Overall, these two events taught me a lot about current practice in this field, as well as providing an opportunity to make useful contacts. At the same time, it was striking that nearly every session ended with debates over the same two issues we have been grappling with anyway: how to identify systemic changes in business ecosystems, and how to attribute these and other observed changes credibly to aid-funded inputs. The DCED Standard helps to frame these issues clearly, and that is important. But how far it is possible and necessary to address these questions seems to depend a lot on context, and so I’m not sure how far any universal standard can help with that.