Several efforts have been made in recent years to survey the landscape of adaptation M&E approaches around the world and distill lessons and guiding principles.
Along with the UKCIP/SEA Change review discussed above (Bours et al. 2013), other useful reviews include:
Monitoring and Evaluation for Adaptation: Lessons from Development Cooperation Agencies, from the OECD (Lamhauge et al. 2012), analyses the treatment of M&E in 106 adaptation project documents across six bilateral development agencies. It finds that results-based management and logical frameworks are the most common approaches used for adaptation. The analysis stresses the importance of clearly differentiating between outcomes, outputs and activities, and of combining qualitative, quantitative and binary indicators. It also notes that the baselines for these indicators should include the effects of future climate change, particularly for projects with long-term implications, but acknowledges significant challenges in setting those baselines and in attributing longer-term outcomes to interventions.
Making Adaptation Count, by GIZ and the World Resources Institute (Spearman and McGray 2011), provides an overview of M&E for adaptation, drawing links to results-based management and the Aid Effectiveness Agenda (OECD 2005). It then reviews early efforts at adaptation M&E and draws lessons about the highly contextual nature of adaptation, the value of diversity in evaluating adaptation, and the need to explicitly state, at the outset, the assumptions being made about future conditions. Spearman and McGray also identify three principles of effective adaptation M&E systems: design for learning; manage for results; and maintain flexibility in the face of uncertainty.
The UNFCCC Subsidiary Body for Scientific and Technological Advice (SBSTA) reviewed Parties' submissions about adaptation M&E best practices as well as an array of project documents and other sources. The synthesis report (UNFCCC 2010) identifies distinct roles for monitoring, which enables planners and practitioners to improve adaptation efforts by adjusting processes and targets, and for evaluation, a process for systematically and objectively determining the effectiveness of an adaptation measure in the light of its objectives. It also distinguishes two key elements of assessing effectiveness: 1) have the objectives and targets been achieved, and 2) can this be attributed to the measure taken?
The SBSTA review also provides a fairly detailed overview of progress to date in applying M&E frameworks to adaptation in different countries, including the kinds of indicators being used - with a comparison of the UK and Finland - as well as programme- and project-level applications of M&E under different funders.
One notable finding is how expensive a thorough M&E system can be: for example, the M&E budget of the four-year Pacific Adaptation to Climate Change project implemented by the United Nations Development Programme and the Secretariat of the Pacific Regional Environment Programme is USD 410,000. Given that such costs would be prohibitive for many community-based adaptation projects, UNDP has developed a simplified tool to monitor and evaluate locally driven adaptation projects.
Monitoring Adaptation to Enhance Food Security, from the CGIAR Research Program on Climate Change, Agriculture and Food Security (Chesterman and Ericksen 2013), explores how food security outcomes are being addressed in adaptation M&E. It finds that most available documents only outline frameworks and do not report specific experiences, which makes it difficult to summarize best practices or identify the most reliable indicators to use. It offers six recommendations.
This section is based on the UNEP PROVIA guidance document.
1. You want to monitor and evaluate implemented adaptation actions.
2. The purpose of the evaluation is clear.
3. As a next step, you are faced with the question of whether the underlying principles and evaluation criteria have been established.