Four things we have learned about national evaluation policies in Africa

Caitlin Blaser-Mapitsa, Takunda Chirau and Matodzi Amisi

National evaluation policies are one way for governments to demonstrate a willingness to promote the use of evidence in a systemic way. Our recently published Evidence & Policy article, ‘Policies for evidence: a comparative analysis of Africa’s national evaluation policy landscape’, explores the relationship between evaluation policies and evaluation systems. We have found that policies are one piece of the puzzle in strengthening the undertaking of evaluations, promoting evidence use, and building evaluation practice in Africa.

Several countries in Africa are in the process of developing national evaluation policies, with Benin, Zimbabwe, South Africa, Nigeria, and Uganda having formally adopted a policy, and Kenya, Ghana, and others having draft policies in the approval pipeline. A question worth asking is whether this trend represents a real cultural shift towards greater government openness, learning and accountability. Are governments developing and adopting policies because of a growing recognition of evaluation as an important source of evidence? Here are four things we have learned by looking at the national policies that are being introduced:

  1. Political leadership and bureaucratic competence are both critical to creating the space for evaluation practice to grow within government, and supporting both is an iterative process. Some governments are trying to use evaluation policies to nudge countries towards stronger evidence use. In other countries, the evaluation system is firmly in place, with an established culture of evaluation practice, but a policy needs to follow to create greater coherence and organisation among the various stakeholders in the evaluation system. For example, in South Africa several departments had an established evaluation culture before the adoption of the National Evaluation Policy Framework. What the policy did was to bring disparate activities under one system, thereby creating a shared understanding of evaluation and evaluation methods, and a systematic way to build evaluation capacity within government.
  2. In governments where evaluation is unfamiliar, or viewed as threatening, a huge amount of preparatory work is needed to ‘prove the concept’ of evaluation. It might be better not to move too quickly to policy, but instead to focus on implementing evaluations with ministries or programme managers that are open to the idea. Learning by doing will allow public sector institutions to see how evaluation can be useful in demonstrating programmatic effectiveness and generating feedback for adaptation.
  3. Widespread horizontal and vertical participation in the wider evaluation ecosystem, which includes evaluative activities beyond the government system, is critical to giving evaluation policies outlets for implementation. This means participation from broad swathes of a programme or department, as well as from evaluation capacity developers, political leadership, civil society, and others in the wider landscape of stakeholders who play a role in making change happen, whether through accountability, technical expertise or direct implementation.
  4. Sharing information about evaluation systems between governments is critical, but copying structures from another country's context does not move the process forward. When different departments or even governments share lessons learned, new ideas are often catalysed. However, if these ideas are transplanted without being sufficiently contextualised, they are less likely to succeed than if they are thoroughly internalised and made relevant to a specific organisational context.

Evaluation policy development holds considerable promise in providing space to allocate resources, focus attention, strengthen capacity (at individual and institutional levels), and create the coherence needed to sift through the complexity of all the components of a national evaluation system. However, leading by policy is unlikely to generate goodwill among those who do not see the purpose and value of evaluation. Policy development must be strategically combined with advocacy, broad participation, and meaningful, collaborative practice for the evaluation space on the continent to grow.


Caitlin Blaser-Mapitsa is a Senior Lecturer in Monitoring and Evaluation at the Wits School of Governance. She is interested in how evaluation systems can shape equity.

Takunda Chirau is a Senior Technical Monitoring and Evaluation Specialist at the Centre for Learning on Evaluation and Results at Wits University. His role is to establish and nurture partnerships for strengthening M&E capacity in English-speaking countries.

Matodzi Amisi is a research associate at CLEAR-AA and the Institute for Security Studies, with a keen interest in the use of evaluative evidence in policy and implementation.


You can read the original research in Evidence & Policy:

Chew, S. Armstrong; Chirau, Takunda J.; Blaser-Mapitsa, Caitlin and Amisi, Matodzi M. (2021) ‘Policies for evidence: a comparative analysis of Africa’s national evaluation policy landscape’, Evidence & Policy, DOI: 10.1332/174426421X16104826256918.


Image credit: Photo by Twende Mbele

