Results-based Monitoring and Evaluation Framework
Results-Based Monitoring and Evaluation (RBM&E) will form an integral part of the management of this programme as a way of enhancing efficiency and effectiveness. Information derived from this process will enable fine-tuning of the entire programme, with the aim of establishing the extent to which the envisaged activities are completed within the set timeframes, using the allocated resources. RBM&E will help identify challenges and emerging trends in the Drought Disaster Resilience and Sustainability Initiative of the IGAD region. The RBM&E framework assumes periodic analysis and reporting, which will give member states and stakeholders time to assess results and initiate action where necessary.
Programme monitoring will provide a mechanism for early indication of progress, or lack thereof, in the attainment of results. Focused at the level of programme outputs by project (as outlined in the programme logical framework and implementation plan), it will assess efficiency, execution and compliance with procedures, and seek to reveal “what happened”: what is working, what is not, and why. Effective monitoring will enable an assessment of each project’s execution performance against the parameters defined in the baseline programme plan, thus enabling corrective action where necessary. The programme will adopt gender-sensitive monitoring practices throughout the entire programme life cycle and will consolidate the resultant monitoring information by specific project. Effective programme implementation, monitoring and reporting will yield the required quality standards and help avoid cost and schedule overruns.
The programme will be monitored at three levels:
Compliance monitoring: This will assess compliance with the set policies, procedures and standards in executing programme activities in the key areas of intervention.
Performance monitoring: This will measure progress in activity completion against the set resources, timeframes and plans towards the desired results.
Economic and Value for Money (VFM) data collection: This will ensure that the IDDRSI implementers collect accurate and complete sex-disaggregated data before, during and after an intervention.
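The sex-disaggregated data collection described above can be illustrated with a minimal sketch. The record fields, category codes and phase labels below are assumptions for illustration only, not part of the programme’s specification.

```python
from dataclasses import dataclass
from collections import Counter

# Illustrative record: field names and codes are hypothetical.
@dataclass
class BeneficiaryRecord:
    beneficiary_id: str
    sex: str    # e.g. "F" or "M"
    phase: str  # "before", "during" or "after" the intervention

def disaggregate_by_sex(records):
    """Count records per sex for each intervention phase."""
    counts = {}
    for r in records:
        counts.setdefault(r.phase, Counter())[r.sex] += 1
    return counts

records = [
    BeneficiaryRecord("b1", "F", "before"),
    BeneficiaryRecord("b2", "M", "before"),
    BeneficiaryRecord("b3", "F", "after"),
]
print(disaggregate_by_sex(records))
```

A disaggregation of this kind, run before, during and after an intervention, is one way of checking that the collected data remain complete for each sex across phases.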
Programme evaluation will involve the application of rigorous methods to assess the extent to which the programme has achieved its defined impact objectives. It will attempt to determine, as systematically and objectively as possible, the relevance, effectiveness, efficiency and impact (both intended and unintended) of the programme in the context of its stated objectives. Focused at the level of outcomes and results of the intervention, it will ask the question “why did it happen or not?”. In evaluating the programme, the issue of causality will be examined, that is, the causal relationships between outputs, purpose and goal.
The programme will be evaluated based on the project logical framework, and the evaluation will be carried out at three levels:
- Efficiency testing: This will ascertain whether the programme is on course to attain the intended goals and, if not, establish why.
- Impact testing: This will establish the project’s effect on the intended beneficiaries/stakeholders against pre-determined gender-sensitive indicators.
- Economic and Value for Money analysis: This will establish the cost of delivery and the value of the benefits accruing to the intended target communities.
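The Value for Money analysis above can be sketched with two simple measures: cost per beneficiary and a benefit-cost ratio. The figures and function names below are hypothetical examples, not programme data.

```python
def cost_per_beneficiary(total_cost, beneficiaries):
    """Cost of delivery divided by the number of people reached."""
    if beneficiaries <= 0:
        raise ValueError("beneficiary count must be positive")
    return total_cost / beneficiaries

def benefit_cost_ratio(total_benefit, total_cost):
    """Value of benefits accruing per unit of cost; > 1 means benefits exceed costs."""
    return total_benefit / total_cost

# Hypothetical intervention: 50,000 spent, 200 beneficiaries, 75,000 in valued benefits.
print(cost_per_beneficiary(50_000.0, 200))    # 250.0 per beneficiary
print(benefit_cost_ratio(75_000.0, 50_000.0)) # 1.5
```

In practice the valuation of benefits is the hard part; the arithmetic only becomes meaningful once benefits have been costed consistently across projects.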
External Mid-Term Review
There will be a programme mid-term evaluation to assess the programme’s progress in attaining set objectives at its mid-point, thus providing an opportunity to review strategies and outputs.
End-Term Evaluation
At the end of each of the three 5-year programming cycles of the Strategic Plan, an end-term evaluation will be conducted in the period following programme completion. It is at this point that the expected impact shall be measured. Its purpose will be to assess the programme’s impact using defined gender-sensitive performance indicators, and to draw conclusions for similar interventions in the future (lessons learned).
Feedback will consist of findings, conclusions, recommendations and lessons learned from programme implementation experience. This feedback will be used to improve performance, inform relevant policy formulation and decision-making, and promote a learning culture within the organization.
Information Sharing, Learning and Knowledge Generation
Evaluative knowledge distilled from lessons learned through the results-based monitoring and evaluation process will be documented and used as evidence-based good practices and promising technologies that illustrate why and how different strategies and approaches work in specific contexts. This invaluable information will be disseminated among stakeholders and academic partners at suitable forums.
Outcome Monitoring and Evaluation
The programme will adopt a systematic process of collecting and analyzing data to measure programme performance by project. Outputs will be tracked and their contributions to outcomes measured by assessing the change from baseline sex-disaggregated conditions to the desired outcomes. Baseline data will be established, performance outcome indicators selected, and mechanisms such as field visits, stakeholder meetings, qualitative and quantitative data collection, analysis and reporting put in place. This approach will enable the extraction of information on the progress made towards the outcome, the factors contributing to the outcome, and the programme’s contribution to it. An assessment of performance through analysis and comparison of indicators over time will be undertaken.
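The comparison of indicators against their baselines described above reduces to simple change calculations. The sketch below assumes a hypothetical indicator (a household water access rate) and invented values, purely for illustration.

```python
def indicator_change(baseline, current):
    """Absolute and percentage change of an outcome indicator from its baseline."""
    change = current - baseline
    pct = (change / baseline) * 100 if baseline else None
    return change, pct

# Hypothetical indicator: household water access rate (%) at baseline and in two periods.
baseline = 40.0
observations = {"2021": 45.0, "2022": 52.0}
for period, value in observations.items():
    change, pct = indicator_change(baseline, value)
    print(period, change, round(pct, 1))
```

Tracking each indicator this way over successive periods gives the time series against which performance trends can be assessed.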
Periodic Progress Reporting
Reporting will be an integral part of the results-based monitoring and evaluation process and will involve the systematic and timely collation and provision of essential information at periodic intervals. Quarterly updates, biannual reports and annual reports will be produced. The quarterly updates will provide a brief overview of key project activities.
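The periodic reporting intervals above can be turned into a concrete calendar of due dates. The start date and cycle length below are placeholders, not programme dates; the sketch assumes reports fall due on the same day of the month as the cycle start.

```python
from datetime import date

def reporting_dates(start, years, months_between):
    """Due dates at a fixed month interval over a programming cycle.

    Assumes the start day (e.g. the 1st) exists in every due month.
    """
    dates = []
    total_months = years * 12
    for offset in range(months_between, total_months + 1, months_between):
        m = start.month - 1 + offset
        dates.append(date(start.year + m // 12, m % 12 + 1, start.day))
    return dates

# Hypothetical first year of a cycle starting 1 January 2021: quarterly updates.
quarterly = reporting_dates(date(2021, 1, 1), 1, 3)
print(quarterly)  # four quarterly due dates
```

The same function with `months_between=6` or `months_between=12` yields the biannual and annual reporting dates for the cycle.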