Research Article | Peer-Reviewed

Institutionalizing Evaluation as a Governance Capability: Evidence from Agriculture and Economic Policy in Africa

Received: 25 June 2025     Accepted: 11 July 2025     Published: 7 January 2026
Abstract

Public policy evaluation has gained renewed significance as African governments seek to reinforce accountability and improve development outcomes. The institutionalization of evaluation reflects a strategic effort to embed oversight within governance systems and respond to increasing demands for evidence use in Africa. The research examined how 28 countries formalized evaluation functions through legal instruments, administrative procedures, and organizational practices between 2010 and 2024. It focused on agriculture and economic policy, given their role in advancing structural transformation and governance reform. A structured documentary review applied a multidimensional framework grounded in institutional theory and political economy. Four core dimensions informed the analysis: legal mandates, normative alignment, cognitive uptake, and hybrid arrangements. The review covered 306 official documents, including development strategies, budget frameworks, and statutory texts drawn from planning and finance ministries, sectoral agencies, and recognized international repositories. Results revealed divergent national pathways. Some countries established evaluation systems anchored in statutory authority and integrated within planning or budgeting processes. Others relied on frameworks that lacked enforceable mandates or sustained institutional support, often shaped by external interventions. Regional patterns also emerged. Anglophone and Island States more frequently demonstrated operational alignment between evaluation and resource allocation. Francophone and Central African countries often emphasized legal form without consistent implementation. Hybrid systems appeared where normative intent coexisted with partial adherence or tactical resistance. The typology developed through the research identified embedded models with institutional depth, transitional frameworks with uneven alignment, and symbolic systems with limited operational traction. 
Sectoral integration and political sponsorship consistently acted as enabling conditions. Evaluation systems reinforced state capability when embedded within governance functions and aligned with domestic policy processes. African experiences challenge linear conceptions of evaluation development and reveal adaptive trajectories rooted in national priorities and evolving administrative contexts. The research contributes to a deeper understanding of evaluation institutionalization as a dynamic process shaped through interaction between state capacity, governance reform, and evidence use in Africa.

Published in Journal of Public Policy and Administration (Volume 10, Issue 1)
DOI 10.11648/j.jppa.20261001.11
Page(s) 1-17
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2026. Published by Science Publishing Group

Keywords

Evaluation Institutionalization, Governance Reform, Agriculture and Economic Policy, State Capability, Evidence Use in Africa

1. Introduction
Public policy evaluation increasingly plays a strategic role in governance across Africa. It clarifies national priorities and informs how governments allocate scarce resources. Evaluation also strengthens accountability through structured performance signals. Many countries have adopted formal frameworks, yet implementation often falls short of transforming these commitments into functional systems. Institutional gaps persist across ministries and sectors. The research analyzes how 28 African states have institutionalized evaluation in agriculture and economic growth. It examines how institutional authority and structural arrangements shape the use of evidence.
The evolution of evaluation systems across Africa signals a broader shift toward more credible governance. Institutional reforms increasingly respond to performance deficits and rising expectations for public accountability. In many contexts, distorted power relations and external influence complicate domestic reform efforts. North (1990) emphasized that institutions operate not through formal rules alone but through the incentives and behavioral norms that align actors. Legal reforms alone do not ensure institutional functionality. Institutionalization depends on embedding evaluation in authoritative practices that shape decisions and sustain reform. Peters (2019) notes that evaluation only gains legitimacy when integrated into routine governance and receives consistent political endorsement.
Agriculture and economic growth are central to Africa’s development. Agriculture remains a primary source of livelihoods, but structural weaknesses in infrastructure and policy constrain productivity and limit competitiveness. Barrett et al. (2017) emphasize that gains remain elusive when policy neglects linkages beyond production. Effective evaluation can support these gains through stronger sectoral coordination and institutional alignment.
Economic growth sectors such as roads, electricity, and manufacturing do not merely complement agriculture. They shape the motivations behind capital allocation and determine how goods move from origin to end users. In contexts of institutional fragmentation, poor coordination among these sectors weakens policy coherence and diffuses accountability. Evaluation reduces systemic risks when integrated into planning systems with mechanisms that reinforce oversight and foster adaptive reform. Its strength lies not in compliance reports but in evidence that guides public action.
The African experience with evaluation reveals both institutional potential and structural limitations. In Rwanda, the Imihigo performance system institutionalizes local accountability through structured result tracking. In South Africa, sector evaluations inform Cabinet deliberations through mechanisms anchored in the Department of Planning, Monitoring and Evaluation, especially in agriculture and agro-processing. Elsewhere, evaluation remains externally driven and institutionally marginal, with limited influence on fiscal or policy decisions.
The emergence of state-led evaluation systems rests on foundational insights from institutional and political theory. Institutions provide rule structures that shape behavior through enforcement and incentive alignment, a view central to North’s framework. Reform rarely advances without a convergence of bureaucratic capacity and political strategy, as noted in Grindle’s (1997) work. Evidence from Peters (2019) highlights how institutional performance strengthens when coordination replaces fragmentation and rule adherence becomes routine. Together, these perspectives illuminate the institutional logic behind evaluation effectiveness.
This research builds on the work of Andrews et al. (2017), who propose problem-driven iterative adaptation as a viable reform strategy. Their approach emphasizes feedback use and permits incremental adjustment. Institutionalized evaluation fosters environments that support adaptive governance through structured learning processes and iterative policy adjustment. Picciotto (2013) demonstrates that evaluation supports effective governance when credibility takes precedence over form. Institutionalized evaluation enables learning and policy innovation when embedded in authoritative frameworks.
The focus on agriculture and economic growth reflects empirical relevance and theoretical depth. These sectors engage key public institutions, shape economic outcomes, and influence poverty dynamics. Their complexity requires coordinated action across institutions. They also reveal whether evaluation systems align with national priorities and guide strategic investment. Issues such as land policy or public infrastructure provide concrete cases to assess institutional capacity.
Previous studies often focus on national systems or donor requirements. Limited research has explored how evaluation becomes embedded in state systems while shaping sectoral agendas and navigating institutional complexity. This research addresses that gap. It approaches evaluation not as a procedural function but as a core governance capability. The sectoral lens allows for a more granular analysis of institutional behavior, actor incentives, and system-level coherence. Evaluation becomes not simply an instrument of oversight, but a driver of institutional learning and policy responsiveness.
The scientific contribution of this research lies in its effort to connect evaluation systems to foundational debates about the state and development effectiveness. Evaluation is often described in technical terms focused on indicators or procedures. Yet its institutionalization involves deeper concerns, including the exercise of authority and the capacity of administrations to organize decisions coherently. Drawing from comparative public administration, the analysis explores how variation in institutional design and political sponsorship shapes evaluation uptake. Andrews (2013) offers critical insights into the challenges of embedding reforms that move beyond symbolic compliance.
This research addresses critiques of evaluation systems that replicate formal models without achieving substantive reform. DiMaggio and Powell (1983) exposed how isomorphic mimicry produces structures that appear aligned with international standards but lack contextual effectiveness. Evaluation, when detached from sector realities or political incentives, fails to support genuine transformation. The analysis explores how variation in institutional design interacts with administrative capacity. It also examines how the strength of political support influences the functionality of evaluation systems within complex governance environments.
The institutionalization of evaluation in Africa signals the extent to which states exercise authority through informed decisions and consistent policy execution. Where evaluation guides priorities and influences choices, governments reveal strategic depth and administrative maturity. Systems that lack consistent use reflect symbolic intent without operational substance. As Ba (2021) argues, an evaluation system’s effectiveness depends on how institutional purpose aligns with the use of results and contributes meaningfully to public value. Evaluation must function as a core governance capability.
The purpose of this research is to generate both empirical insights and conceptual clarity. It draws on evidence from 28 African countries to explore how institutional structures shape the application of evaluation in agriculture and economic growth. The analysis also considers the role of sector alignment and state capacity in determining uptake. Beyond empirical findings, the research contributes to a deeper theoretical understanding of how evaluation systems operate within the specific constraints and opportunities of African governance. The findings offer critical insights for policymakers, institutional leaders, and scholars aiming to strengthen evaluation as a developmental lever in Africa. They underscore the importance of embedding evaluation within the state’s governance architecture to support coherent decision-making and sustained reform.
2. Research Objective and Questions
This research examines how African states institutionalized public policy evaluation between 2010 and 2024, with particular focus on agriculture and economic growth sectors. It addresses a persistent gap in comparative public administration through an approach that treats evaluation as a core governance function, embedded within the strategic operations of the state. The inquiry investigates how national systems have evolved to generate, validate, and apply evaluative knowledge through structured institutional frameworks.
The selected timeframe marks a critical inflection point in the evolution of evaluation systems across Africa. Several countries began consolidating fragmented practices into formalized structures supported by national policies and oversight mechanisms. This shift signals an emerging institutional commitment to state-led learning and evidence-informed governance. Global frameworks influenced this momentum. The 2011 Busan Partnership advanced the principle of nationally driven results frameworks, while Agenda 2063 positioned evaluation as an instrument of credibility and transformation rooted in sovereign policy processes.
Governments also faced major governance disruptions during this period. The COVID-19 pandemic exposed weaknesses in public systems and increased demand for timely, credible evidence. States with stronger evaluative capacity adapted more quickly and revised policies in response to complex shocks. The context enables a rigorous examination of how evaluation contributes to institutional learning and enhances the credibility of public decision-making.
Institutionalization requires anchoring evaluation in enforceable legal provisions while ensuring its integration into administrative systems and alignment with core governance priorities. It acquires depth when evaluation becomes routine in governance and grounded in enforceable norms that uphold autonomy and accountability. This research examines how institutionalized evaluation may influence policy orientation. It highlights how evaluation can strengthen institutional authority and reshape coordination dynamics across governance levels. When embedded effectively, evaluation operates as a strategic function of the state rather than a procedural formality.
Three questions guide the research:
1) What legal and institutional frameworks have governments adopted, and how do these vary?
2) How have evaluation systems shaped coordination and decision-making in agriculture and economic growth?
3) What patterns of institutionalization reveal broader state capabilities and policy learning?
The research distinguishes between systems where evaluation is functionally integrated and those where it remains largely symbolic. It explores how evaluation reflects institutional legitimacy and signals the state’s evolving role in shaping and executing development priorities.
3. Literature Review
Institutionalizing public policy evaluation in Africa requires legal foundations backed by political authority and embedded within functioning administrative systems. This research applies a composite framework that draws from institutional theory, the logic of public reform, and cross-national administrative analysis. These lenses clarify the pathways through which evaluation gains legitimacy and becomes a functional part of statecraft. Figure 1 outlines the analytical foundation supporting this inquiry.
Figure 1. Multidimensional Framework for Institutionalizing Public Policy Evaluation – Authors.
Scott’s (2001) institutional theory identifies three foundational pillars: the regulative, the normative, and the cognitive. Legal mandates establish the formal basis for evaluation. Still, a legal framework alone does not ensure institutionalization. Evaluation systems acquire depth when internal norms and professional standards reinforce their legitimacy within public institutions. When public officials perceive evaluation as a central function of governance rather than a compliance requirement, the system gains operational relevance. In contrast, systems based solely on legal codes without corresponding values or internal commitment rarely shape policy direction.
Political incentives frame the operational space within which evaluation influences public choices. Grindle (1997) shows that reform often reflects calculations of political risk. Stated support for evaluation may conceal reluctance to embrace its full implications. Andrews (2013) points out that institutionalization collapses when political dynamics block reform momentum. Effective evaluation systems take root when reformers create coalitions, manage vested interests, and integrate evaluation into state structures that support credible decision-making.
Comparative public administration offers an additional perspective. Painter and Peters (2010) emphasize how institutional context shapes the response of public systems to reform efforts. African administrations frequently operate within hybrid environments. In such contexts, formal rules intersect with informal routines. Bierschenk and de Sardan (2014) demonstrate how informal authority, discretionary enforcement, and political networks often condition whether evidence influences outcomes. Institutional design, when detached from daily administrative realities, provides only a partial explanation for system performance.
This research frames evaluation as a relational function that reflects how institutions function rather than as a standalone procedure. It explores the interactions among legal frameworks, bureaucratic cultures, and political incentives. Rather than relying on rigid indicators, the framework emphasizes coherence across systems and the behavioral integration of evaluation into core governance functions.
The analysis focuses on five core domains: legal mandates that create the formal foundation; structures of administrative authority; standards that define practice quality; mechanisms that channel evidence into decisions; and political contexts that shape use. These domains reflect how theory and practice converge. Consistent with the African Evaluation Association (AfrEA) and the Organisation for Economic Co-operation and Development (OECD), this research positions evaluation as a capability that strengthens governance through reform, institutional learning, and public-sector responsiveness.
Academic interest in public policy evaluation in Africa intensified after the early 2000s, as scholars began to examine how governance arrangements, evidence use, and institutional configurations interact. This emerging body of work coincides with a global shift that recognizes evaluation as a core instrument of state legitimacy and strategic learning. However, the literature remains fragmented. Many contributions remain descriptive, with limited conceptual clarity or empirical depth. They often document reforms but fail to analyze how evaluation becomes anchored within institutional processes or influences political behavior.
A substantial portion of this research emphasizes formal features: national evaluation policies, legislative frameworks, and oversight bodies. Analysts commonly cite examples such as South Africa, Uganda, Benin, Ghana, and Kenya, where dedicated evaluation units and public budgeting provisions signal institutional intent. Goldman et al. (2018) describe such initiatives as deliberate state-led investments in evaluation infrastructure. Peer learning platforms like CLEAR-AA and Twende Mbele further illustrate policy diffusion. Yet many analyses assess institutionalization through narrow indicators, such as the existence of a decree or the number of evaluations conducted. This approach risks conflating formal adoption with functional integration.
Scott’s (2001) institutional theory offers a sharper lens to interrogate this limitation. While legal codification marks the regulative pillar of institutions, sustained institutionalization depends on normative alignment and cognitive internalization. Meyer and Rowan (1977) introduced the concept of decoupling to explain how organizations adopt formal structures to project legitimacy, even when actual practice remains unchanged. In numerous African contexts, this pattern persists. Evaluation frameworks exist on paper but often fail to shift decision-making, influence resource allocation, or shape accountability.
Assuming that evaluation operates as a neutral, technical instrument obscures the power relations that structure its use. Picciotto (2020) critiques the technocratic framing prevalent in donor-driven models and urges greater political sensitivity. Similarly, Scartascini and Tommasi (2020) argue that the institutionalization of evaluation cannot be reduced to technical capacity. It requires political backing and coalitions capable of sustaining reform. Evaluation gains relevance only when it aligns with the strategic calculus of actors who hold authority. Without such alignment, evaluation remains symbolic, or contested.
Political economy insights deepen this critique. Grindle (1997) shows that reform trajectories reflect domestic incentive structures and elite configurations. Andrews (2013) demonstrates that externally imposed systems often unravel where bureaucratic ownership is weak or contested. In centralized political systems, ministers may suppress evaluations that threaten their discretionary autonomy. In decentralized or fragmented contexts, overlapping institutional mandates and coordination failures obstruct evaluation uptake. These dynamics are especially salient in the agriculture and economic growth sectors, where institutional rivalries and distributional conflicts are widespread.
Research tends to emphasize health and education, where international funding structures, metric-based accountability, and global reporting systems support evaluation efforts. These sectors benefit from sustained donor engagement and harmonized performance frameworks. In contrast, agriculture and economic governance receive limited scholarly attention, despite their critical role in structural transformation. Chinsinga and Poulton (2014) showed that agricultural evaluations in Malawi often align with donor funding cycles rather than national policy imperatives. In Senegal, Sipoaka and Cabral (2022) identified weak influence of evaluations on investment decisions in economic initiatives. Sambo (2022) noted similar disconnections between evaluation outputs and planning processes in Nigeria’s economic reforms. These examples indicate that institutionalization varies across sectors and reflects divergent administrative traditions and configurations of stakeholder interests.
Another overlooked area concerns the role of knowledge systems and epistemic legitimacy. Evidence does not operate in a vacuum. It gains acceptance when it aligns with established ideational frames and policy narratives. Fischer (1998) and Stone (2002) emphasize that institutional cognition plays a decisive role in how decision-makers interpret, validate, or disregard evaluative knowledge. Within many African governance contexts, bureaucrats and senior officials may favor certain forms of evidence over others based on ideological alignment or perceived threats to institutional authority. Few studies have examined how evaluative findings circulate within national policy communities, or how conflicting interpretations contest their relevance and legitimacy. This omission leaves unresolved the question of how evidence acquires traction within political decision arenas.
A significant portion of the literature depends on donor-generated documents, advocacy summaries, and interview-based accounts. While these sources offer some insights, they tend to highlight operational success and downplay deeper structural impediments. Danhoundo et al. (2018) caution that reliance on such sources often promotes donor learning agendas rather than institutional reflection at the national level. Gugerty (2008) critiques the preoccupation with procedural conformity, which masks the absence of adaptive institutional change. Tusubira and Kasigwa (2020) illustrate how donor-driven evaluation systems in Uganda fostered fragmented priorities rather than coherent national planning. These limitations raise concerns about the interpretive validity of findings grounded solely in externally curated materials.
To address these concerns, this research departs from standard methodological practices. Rather than privileging self-reported data or interview narratives, it draws upon primary sources such as legislative acts, planning frameworks, budget laws, and administrative circulars. These documents offer a more stable foundation for tracing institutional evolution over time. Riedelbauch et al. (2025) argue that documentary analysis yields stronger analytical traction by anchoring claims in verified policy artifacts rather than perceptions or donor assessments. This study aligns with that recommendation by examining official texts from 28 African countries, produced between 2010 and 2024, to assess how evaluation became part of formal governance systems.
An enduring gap in the literature concerns the absence of a coherent typology capable of explaining divergent pathways of institutionalization. Donor frameworks tend to impose linear classifications, such as nascent, emerging, or mature, that prioritize programmatic outputs over institutional realities. These categories rarely account for discrepancies in functionality, internal coherence, or normative legitimacy. Drawing from Scott’s (2001) conceptual architecture of institutions, this research introduces an alternative typology. It distinguishes between systems with legal anchoring but low normative integration, others where formal structures coexist with informal appropriation, and those where cognitive uptake of evaluation has occurred. This approach provides a more nuanced view of institutional depth and challenges static models derived from donor programming cycles.
Despite the growth of scholarly interest in the institutionalization of evaluation in Africa, analytical gaps persist. Most studies prioritize legal provisions and organizational presence, with limited attention to political logic, sectoral differentiation, or epistemic tensions. Evaluation practices often reflect external mandates rather than endogenous institutional evolution. Descriptive mapping continues to prevail, while integrative analyses that link policy design, bureaucratic response, and political calculus remain scarce. This research responds to these shortcomings through a multidimensional framework that draws on institutional theory, reform political economy, and administrative systems analysis. It emphasizes how legal mandates, policy routines, and strategic incentives combine to determine the uptake and institutional durability of evaluation. The typology proposed repositions evaluation as a negotiated and adaptive function of governance, rather than a checklist of compliance with global norms.
4. Materials and Methods
4.1. Sample Countries Considered and Data Collection Approach
The research adopted a documentary analysis approach to investigate the institutionalization of public policy evaluation across African countries, with a primary focus on agriculture and economic development. The review covered official government documents issued between 2010 and 2024. Twenty-eight countries met the inclusion criteria: availability of public evaluation-related documents, evidence of institutional commitment to evaluation within the targeted sectors, and administrative conditions that allowed meaningful comparison. These criteria enabled purposive selection across a broad spectrum of governance contexts and sectoral experiences.
The analysis covered 168 official documents retrieved from ministries in charge of planning, finance, economy, and public administration, as well as national statistical agencies and dedicated evaluation entities. Key records included national development plans, budget implementation reviews, sectoral policy frameworks, and legal mandates defining institutional responsibilities for evaluation. These sources revealed the formal architecture through which states structured and operationalized their evaluation functions. The corpus excluded advocacy publications, opinion pieces, and third-party reports, focusing exclusively on government-issued records. A full inventory of countries, institutional sources, document types, and access portals appears in Table 1. Document totals by region were as follows: 58 from West Africa, 33 from East Africa, 31 from Southern Africa, 18 from Central Africa, 16 from North Africa, and 12 from Island States.
Table 1. Overview of National Sources, Documents, and Web Portals by Region.

West Africa (58 documents)
Countries: Senegal, Ghana, Nigeria, Benin, Burkina Faso, Côte d’Ivoire, Togo, Mali
Documents retrieved: NEPs, sector strategies, budget documents, development plans, evaluation unit mandates, M&E reports
Portals: Senegal https://www.finances.gouv.sn | Ghana https://ndpc.gov.gh | Nigeria https://budgetoffice.gov.ng | Benin https://www.gouv.bj/ministere/mdc | Burkina Faso http://www.insd.bf | Côte d’Ivoire https://www.economie.gouv.ci | Togo https://plan.gouv.tg | Mali https://www.finances.gouv.ml

East Africa (33 documents)
Countries: Uganda, Kenya, Rwanda, Ethiopia, Tanzania
Documents retrieved: Performance reports, development plans, policy reviews, evaluation strategies
Portals: Uganda https://opm.go.ug | Kenya https://www.knbs.or.ke | Rwanda https://www.minecofin.gov.rw | Ethiopia https://www.planning.gov.et | Tanzania https://www.mof.go.tz

Southern Africa (31 documents)
Countries: South Africa, Zambia, Zimbabwe, Malawi, Mozambique
Documents retrieved: Evaluation reports, reform documentation, strategic plans, M&E frameworks
Portals: South Africa https://www.dpme.gov.za | Zambia https://www.mofnp.gov.zm | Zimbabwe https://www.zimtreasury.gov.zw | Malawi https://www.npc.mw | Mozambique https://www.mef.gov.mz

Central Africa (18 documents)
Countries: Cameroon, DRC, Chad
Documents retrieved: Development strategies, evaluation records, sector frameworks, budget evaluations
Portals: Cameroon https://www.minepat.gov.cm | DRC https://www.plan.gouv.cd | Chad https://www.finances.gouv.td

North Africa (16 documents)
Countries: Egypt, Tunisia, Morocco
Documents retrieved: Evaluation strategies, statistical reports, national planning frameworks
Portals: Egypt https://mped.gov.eg | Tunisia http://www.ins.tn | Morocco https://www.hcp.ma

Island States (12 documents)
Countries: Cabo Verde, Mauritius, Seychelles
Documents retrieved: Strategic development plans, evaluation reports, budget frameworks
Portals: Cabo Verde https://www.mf.gov.cv | Mauritius https://mof.govmu.org | Seychelles https://www.nbs.gov.sc

A total of 138 documents were retrieved from global and regional repositories to strengthen triangulation and reinforce empirical consistency. The dataset included 20 national evaluation profiles and policy synthesis reports from the African Evaluation Association (AfrEA), alongside 25 institutional diagnostics, system reviews, and readiness assessments drawn from the Centre for Learning on Evaluation and Results–Anglophone Africa (CLEAR-AA). Thirty additional records related to national strategy reviews and evaluation capacity assessments were extracted from the United Nations Development Programme’s Evaluation Resource Centre. An additional 42 documents covering peer reviews, comparative diagnostics, and institutional case studies were sourced from the repositories of the OECD Development Assistance Committee (DAC) EvalNet, the African Development Bank, and the World Bank’s Independent Evaluation Group. The final subset consisted of 21 documents retrieved from EvalSDGs, International Development Evaluation Association (IDEAS), and selected national audit institutions. These materials included professional guidelines, evaluator training resources, and public audit reviews relevant to evaluation system development. The full breakdown of institutional platforms, documents retrieved, and source portals is presented in Table 2.
Table 2. Regional and Global Repositories of Evaluation System Documentation.

Institution or Platform | Type of Documents Retrieved | Number of Documents | Web Link
AfrEA (African Evaluation Association) | Country evaluation profiles, National Evaluation Policy (NEP) syntheses, regional conference proceedings | 20 | https://www.afrea.org
CLEAR-AA (Centre for Learning on Evaluation and Results – Anglophone Africa) | Evaluation system diagnostics, readiness assessments, institutional reviews | 25 | https://www.wits.ac.za/clear-aa
UNDP Evaluation Resource Centre (ERC) | National M&E strategy reviews, country programme evaluations, evaluation capacity assessments | 30 | https://erc.undp.org
OECD-DAC EvalNet | Peer reviews of evaluation systems, comparative assessments, policy briefs | 10 | https://www.oecd.org/dac/evaluation
African Development Bank (AfDB) | Country strategy papers, independent evaluations, sectoral policy reviews | 12 | https://www.afdb.org/en/documents
EvalPartners / EvalSDGs | Global evaluation guidelines, SDG-focused evaluations, voluntary national reviews | 10 | https://evalsdgs.org
IDEAS (International Development Evaluation Association) | Position papers, professional standards, cross-country comparisons, evaluator training materials | 5 | https://ideas-global.org
National Audit Institutions (selective) | Performance audits, implementation reviews, M&E quality audits (e.g., Uganda, Nigeria, South Africa) | 18 | Country-specific; e.g., https://www.oag.go.ug
World Bank IEG (Independent Evaluation Group) | Public sector evaluation diagnostics, meta-evaluations, capacity-building tools | 10 | https://ieg.worldbankgroup.org
African Union / NEPAD / APRM | Governance and performance monitoring reports, peer review mechanisms, country self-assessments | 8 | https://au.int, https://www.aprm-au.org

The integration of national documentation with regional and global sources enabled robust cross-verification and minimized information gaps. Country-level records revealed the legal and organizational evolution of evaluation systems. Continental and international platforms supported comparative analysis and contextual alignment. This combination ensured analytical rigor, coherence, and a solid empirical foundation to assess how evaluation systems have emerged and consolidated across African public governance contexts.
4.2. Data Consolidation
The consolidation process applied strict inclusion parameters to ensure methodological consistency and uphold empirical integrity. Only documents issued between 2010 and 2024 by national institutions vested with statutory authority in planning, finance, statistics, or evaluation oversight qualified for inclusion. Records lacking institutional attribution or formal validation were systematically excluded. Materials produced through external consultancies, bilateral donors, or informal advocacy channels did not meet credibility thresholds and were omitted from the corpus.
The validated corpus was structured into four analytical categories to enable systematic coding and cross-country comparison (Table 3). The first category, Legal Mandates, comprised documents that formalized evaluation functions within constitutional texts, regulatory statutes, or decrees issued by the executive. The second, Normative Alignment, focused on institutional consistency with national administrative procedures and professional standards recognized regionally or globally. The third, Cognitive Uptake, referred to the extent to which evaluation informed budgeting, program adjustment, and organizational learning. The fourth, Hybrid Systems, captured institutional arrangements that combined formal rules with informal practices, often reflecting selective compliance, tacit norms, or transitional configurations. The typology allowed for detailed differentiation between formal adoption and embedded functionality.
Table 3. Data Codification.

Coding Category | Description | Analytical Dimensions | Document Sources | Examples of Evidence
Legal Mandates | Formal statutes, policies, and legal instruments establishing evaluation authority | Presence, clarity, enforceability, and scope of evaluation mandates | National legislation, government decrees, regulations | Evaluation Acts, policy frameworks, institutional directives
Normative Alignment | Alignment of evaluation practices with administrative standards and professional norms | Institutional coherence, consistency with international or regional evaluation standards, adherence to professional ethics | Strategy documents, national planning guidelines, professional standards issued by AfrEA or CLEAR-AA | Evaluation guidelines, ethics protocols, regional evaluation frameworks
Cognitive Uptake | Evidence of how evaluation results integrate into decision processes and influence organizational practices | Degree of incorporation in strategic plans, budgeting, policy revisions, and internal learning | Strategic and development plans, policy revisions, planning frameworks, institutional reports | Evidence-based policy adjustments, budgetary allocations influenced by evaluation findings
Hybrid Systems | Interaction between formal evaluation requirements and informal administrative practices or partial adherence | Indicators of selective compliance, informal norms supplementing or undermining formal rules, tactical use of evaluation outcomes | Program audits, evaluations, institutional assessments, reports from UNDP, AfrEA, and CLEAR-AA repositories | Selective adherence to evaluation cycles, informal performance reporting, strategic non-compliance

The coding strategy followed a well-established tradition in institutional research that treats public records as reliable indicators of governance architecture. As Mogalakwe notes, documentary methods offer a systematic approach to reconstructing institutional arrangements through the review of authenticated public texts. The structural framework proposed by Cloete underscored the importance of legal authority, administrative integration, and functional clarity in identifying mature evaluation systems. The interpretive insights of Chirau et al. added further precision by linking shifts in policy discourse to recalibrated political intent and evolving bureaucratic priorities.
The coding matrix enabled rigorous pattern identification across the 28-country dataset. States that institutionalized evaluation as a tool for public oversight anchored their systems in legislation, incorporated them into planning and budgeting processes, and assigned operational responsibilities to accountable institutions. In contrast, states with declarative or symbolic systems lacked the procedural mechanisms and institutional support required for sustained implementation. This structured classification strengthened analytical validity through systematic interrogation of authoritative documentary sources. Consequently, it clarified distinct trajectories of evaluation system development across diverse African governance contexts.
4.3. Data Analysis and Interpretation Strategy
The analytical process applied a multi-layered content analysis grounded in institutional theory, comparative public administration, and political economy. The interpretive lens focused on distinct institutional features that influence how evaluation becomes embedded within state systems. These features included legal authority, organizational configuration, procedural integration, epistemic framing, and sectoral application. Each dimension contributed to an understanding of how evaluation practices align with formal mandates and administrative coherence.
Theoretical guidance drew on Scott’s framework of institutional pillars, where regulative, normative, and cognitive structures shape organizational behavior and legitimacy. The analysis also incorporated reform dynamics reflected in the work of Andrews on institutional persistence and Grindle’s perspective on political incentives in bureaucratic reform. Together, these frameworks allowed structured interpretation of state documents while accounting for institutional variability and policy context.
The analytical structure enabled systematic interrogation of formal institutional arrangements, administrative standards, and evidence utilization pathways. MAXQDA software supported the primary analysis, supplemented with manual verification to ensure contextual sensitivity and thematic consistency. Documents were interpreted in relation to their capacity to operationalize evaluation functions within planning, budgeting, and implementation systems. Attention focused on formal references to evaluation mechanisms and institutional roles, as well as discursive markers that conveyed organizational positioning toward evidence use. Documents were assessed individually, then compared across countries to trace commonalities and divergence in institutional pathways.
Mogalakwe emphasized the utility of documentary analysis for tracing governance systems, where public records reflect underlying regulatory logic and institutional intent. Cloete introduced a structural model in which legal provisions, functional assignments, and coherence across administrative units signal system maturity. Chirau et al. further demonstrated how discursive shifts embedded in public documents often reflect recalibrated bureaucratic objectives and political realignment. These contributions informed the deployment of a coding matrix that mapped the analytical dimensions onto the country-level dataset.
Each coded category contributed to a structured typology of evaluation system configurations (Table 3). The analytical system facilitated structured comparisons across country cases without reliance on rigid benchmarks. It enabled identification of institutional forms through the lens of state capacity. It also considered administrative intent and the degree of alignment with policy relevance. The strategy ensured that interpretation remained empirically grounded while aligned with theoretical expectations across diverse governance environments.
4.4. Limitations
This research uses formal documentation to analyze how evaluation systems evolve across African states. Official texts provide clarity on institutional mandates and signal reform intentions. They reveal how governments frame evaluation within development strategies. However, documents alone offer limited access to informal decision structures and tacit operational behavior. Mogalakwe (2006) recognizes that documents can clarify institutional architecture but do not expose the full spectrum of administrative dynamics.
Access to government records remains uneven across the 28 countries. Some countries, such as Kenya and South Africa, disclose detailed archives. Others release only intermittent policy statements. Variation in availability does not always reflect institutional effectiveness. Cloete (2009) notes that coherence in institutional logic may carry more weight than the number of documents issued.
Documents often reflect donor influence in content and scope. Reports may present alignment with international frameworks without describing implementation gaps. Ebrahim (2016) warns that such materials can prioritize accountability to funders rather than internal learning. Meyer and Rowan (1977) add that formal rules may conceal functional divergence. Documents should therefore be treated as structured representations, not objective accounts. Future research would benefit from integrating internal reviews and sectoral performance audits to trace how evaluation becomes embedded within real governance practice.
5. Results
Among the 28 African countries examined, only five (Uganda, South Africa, Rwanda, Benin, and Kenya) demonstrate strong evaluation systems with legal enforcement and sectoral integration. Most others show limited use despite formal policies. Uptake remains low in agriculture and economic growth. Political incentives support institutional uptake where evaluation aligns with sector priorities and integrates within planning frameworks. In contrast, weak authority and fragmented structures restrict effective system use.
5.1. Legal and Policy Foundations of Evaluation in Agriculture and Economic Growth
Between 2010 and 2024, 21 of 28 African countries enacted national evaluation policies, planning decrees, or performance instruments that explicitly mention evaluation. Legal visibility, however, does not equate to institutionalization. The analysis reveals a wide spectrum of practices, from effective integration to symbolic adoption (Table 4). Disparities in legal codification and institutional practice are particularly pronounced in agriculture and economic growth, two domains central to national development.
Table 4. Typologies of Evaluation Institutionalization in 28 African Countries (2010–2024).

Typology | Representative Countries | Key Features
Deep Institutionalization | Uganda, South Africa, Rwanda, Benin, Kenya | Legal mandates enforced; evaluation integrated into sector planning and policy.
Formal Commitments | Nigeria, Senegal, Côte d’Ivoire, Mali, Gambia | Evaluation featured in national plans without legal enforcement; weak traction.
Administrative Integration | Ghana, Ethiopia, Zambia, Egypt, Mauritius | Embedded in planning routines; lacks statutory grounding and authority.
Fragmented Frameworks | Tanzania, Mozambique, Malawi, Burkina Faso, Morocco, Seychelles | Evaluation mentioned in policies; no institutional coherence or sectoral link.
Dormant Legal Structures | Togo, Chad, Cameroon, DRC, Zimbabwe, Tunisia | Evaluation laws issued; no follow-through in sector implementation.

Deep Institutionalization with Sectoral Traction. Only five countries (Uganda, South Africa, Rwanda, Benin, and Kenya) reflect deep institutionalization with sectoral traction. In these cases, legal mandates translate into operational systems embedded in core governance. Uganda’s policy requires sector evaluations under the authority of the Prime Minister’s Office. South Africa links cabinet decisions to evidence produced through ministerial evaluations. Kenya enforces integration through national M&E guidelines. In Benin, a presidential decree assigns evaluation responsibilities across all ministries. Rwanda uses Imihigo contracts to link subnational performance with national planning. Agriculture, infrastructure, and industry ministries help define evaluation priorities and apply results in planning and budgeting. These countries move beyond symbolic adoption. Legal provisions function as both enforcement instruments and strategic mechanisms for coordination and resource allocation.
Formal Commitments without Enforcement. Nigeria, Senegal, Côte d’Ivoire, Mali, and The Gambia exhibit visible but shallow commitment. Evaluation features in national strategies or results frameworks, yet there are no binding legal mandates or enforceable obligations. For example, Nigeria’s Economic Recovery and Growth Plan contains performance indicators but lacks institutional directives for evaluation implementation. Ministries operate independently of any central coordination mechanism. In these contexts, evaluation serves political signaling more than policy correction.
Administrative Integration without Legal Codification. Ghana, Ethiopia, Zambia, Egypt, and Mauritius adopt evaluation through planning tools but fall short of formal legalization. Ghana’s Medium Term Agriculture Sector Investment Plan (METASIP) and Zambia’s Sixth National Development Plan (SNDP) include review mechanisms, but these remain internal exercises, rarely linked to legal frameworks or budgetary leverage. Evaluation becomes a routine administrative activity rather than a binding governance function.
Institutional Disconnection and Sectoral Isolation. Six countries (Tanzania, Mozambique, Malawi, Burkina Faso, Morocco, and Seychelles) show fragmented institutional frameworks. Evaluation is referenced in sectoral policies, but legal texts lack coordination mandates or enforcement provisions. Ministries operate in silos, and evaluation serves mostly project compliance. Official plans reflect rhetorical support, not institutional discipline.
Legal Adoption without Operationalization. Togo, Chad, Cameroon, the DRC, Zimbabwe, and Tunisia have issued decrees or evaluation policies. However, institutional structures remain dormant. Sectoral ministries are excluded from planning or implementation, and no evidence indicates follow-up or systematic use. Legal texts exist on paper, but no operational mechanisms accompany them.
These findings confirm that legal instruments alone do not ensure institutionalization. Systems succeed when formal rules align with sectoral dynamics and receive sustained support from both political leadership and capable institutions. As Scott (2001) argues, meaningful institutionalization requires coherence between regulative norms and organizational behavior. Without this alignment, evaluation remains symbolic.
5.2. Evaluation Use in Sector Coordination and Decision-Making
The analysis reveals that despite the expansion of national evaluation frameworks, actual use of evaluation to inform coordination and decision-making in agriculture and economic growth remains inconsistent and often superficial (Table 5). Evaluation systems function effectively only where institutional structures align with sectoral mandates and political leadership supports evidence use. Coordination must also be structured across ministries, and planning or budgeting frameworks need to incorporate evaluation findings. Without these elements, evaluation remains either ceremonial or externally driven, disconnected from national policy cycles.
Table 5. Constraints and Enablers of Evaluation Use in Agriculture and Economic Growth.

Category | Constraint | Countries (Constraint) | Enabler | Countries (Enabler)
Institutional Capacity | No sector-based units, weak staffing, no evaluation budget in sector ministries | Malawi, Burkina Faso, Côte d’Ivoire, Chad, Ethiopia, Mozambique, Nigeria, Mali, Ghana, DRC, Guinea, Senegal, Sierra Leone, Togo, Liberia, Cameroon, Zambia | Dedicated sector units, operational budgets, trained personnel | Rwanda, Uganda, South Africa, Kenya, Benin
Political Economy | Lack of demand from leaders, fear of exposure, no political sponsor | Nigeria, Ethiopia, Côte d’Ivoire, Ghana, Mozambique, Chad, Guinea, Cameroon, Mali, DRC, Zambia | Alignment with national reforms, performance contracts, executive support | Rwanda, Uganda, Kenya
Sectoral Coordination | Ministries isolated from central systems, parallel donor evaluations | Malawi, Mozambique, Guinea, Mali, Liberia, Chad, Ghana, Sierra Leone, Ethiopia, Zambia, Cameroon, Togo, DRC, Senegal, Côte d’Ivoire, Nigeria, Burkina Faso, Benin, Tanzania | Inter-ministerial committees, sector focal points, joint evaluation planning | Rwanda, South Africa, Uganda, Kenya
Utilization Mechanisms | No links to planning or budget cycles, evaluation findings not used in reports | Ghana, Nigeria, Mozambique, Liberia, Chad, Mali, DRC, Sierra Leone, Ethiopia, Zambia, Senegal, Côte d’Ivoire, Cameroon, Togo, Malawi, Guinea, Burkina Faso, Tanzania, Benin, Gambia | Budget integration, performance reviews, mandated use in sector reports | Kenya, Rwanda, South Africa, Uganda

Institutional Capacity. Most African countries host evaluation units within planning ministries or Cabinet offices, yet these units often fail to shape decisions in agriculture or economic governance. In 17 systems, agricultural ministries lack evaluation staff, budget allocations, and links to central structures. Malawi and Burkina Faso illustrate this disconnect, where central units exist without enforcement authority. Rwanda, Uganda, South Africa, Kenya, and Benin provide stronger examples. Each country has embedded evaluation within sectoral governance. These cases show that institutional capacity depends on operational resources, technical expertise, and consistent use within sector planning and oversight.
Political Economy. Political incentives determine whether evaluation becomes a tool for reform or remains symbolic. Rwanda, Uganda, and Kenya illustrate how leadership commitment transforms evaluation into a mechanism for performance oversight. Their national strategies incorporate scorecards, contractual accountability, and Treasury oversight. In contrast, 11 countries, including Nigeria, Ethiopia, and Côte d’Ivoire, avoid using evaluation despite formal policies. Leaders often resist exposing failures or reversing decisions. As political economy theory suggests, institutional change depends on how well evaluation aligns with leadership priorities and mobilizes reform coalitions.
Sectoral Coordination. Agriculture and economic growth ministries often operate outside formal evaluation frameworks. In 19 countries, sector plans include output indicators but lack institutional links to central evaluation structures. Evaluations of agriculture or industrial strategies are often isolated, programmatic, or donor-driven. The data confirm that coordination mechanisms, such as inter-ministerial evaluation committees or sectoral evaluation focal points, are either absent or non-functional in most countries. Rwanda and South Africa stand out for institutionalizing inter-ministerial coordination in evaluation planning and reporting. Their approach reinforces cross-sector learning and supports alignment of sector strategies with national priorities.
Utilization Mechanisms. The data show that in 20 of the 28 countries, evaluation findings are not integrated into mid-term reviews, budget allocations, or policy revisions. While evaluations are conducted, they are rarely cited in sector progress reports, planning frameworks, or resource negotiations. However, countries like Kenya and Rwanda provide structured examples where evaluation findings are formally considered in annual policy and fiscal reviews. In these cases, performance budgeting systems, legal mandates, and accountability scorecards compel ministries to use findings in adjusting sector targets and resource allocations. This supports Picciotto’s (2013) proposition that evaluation must be embedded in governance routines to achieve operational traction.
5.3. Institutional Patterns and State Capacity for Evaluation Use
Institutional patterns of evaluation in Africa reflect broader dynamics of state capacity, reform incentives, and policy learning. Legal frameworks and national evaluation policies (NEPs) are now widespread, but their practical impact depends on political ownership, administrative design, and the degree to which evaluation is embedded in sectoral governance. The variation observed across 28 countries reveals five trajectories, each representing distinct levels of institutional maturity and political intent (Table 6).
Table 6. Typology of Evaluation Systems in Africa by Institutional Features and Country.

Typology | Defining Features | Representative Countries
Embedded Strategists | Legal mandates, executive support, sector uptake, budget and policy alignment | Uganda, South Africa, Rwanda, Benin, Kenya
Normative Mimics | Donor-shaped NEPs, weak enforcement, symbolic sector presence | Nigeria, Mali, Côte d’Ivoire, Senegal, Gambia
Performance Anchored | Evaluation in budget or planning processes, low strategic leverage | Ghana, Ethiopia, Zambia, Egypt, Mauritius
Procedural Monitors | Mention in strategies, weak structures, low utilization | Tanzania, Mozambique, Malawi, Morocco, Seychelles, Burkina Faso
Symbolic Compliers | Formal references without implementation or policy impact | Togo, Cameroon, Chad, DRC, Zimbabwe, Tunisia

Embedded Strategists. Uganda, South Africa, Rwanda, Kenya, and Benin demonstrate the most advanced institutionalization. Evaluation functions are integrated into executive oversight, budgeting, and sectoral planning. Rwanda’s Imihigo framework links evaluation to performance-based accountability across administrative levels. The mechanism channels results into national planning cycles through structured feedback loops. Uganda’s Office of the Prime Minister coordinates sector evaluations and links them to cabinet decisions, which strengthens institutional authority and ensures policy coherence. South Africa’s Department of Planning, Monitoring and Evaluation uses sector reviews to adjust cabinet-approved plans. These countries exhibit “third-order” institutional change, where evaluation redefines governance routines and reinforces state capability through learning and adaptation.
Normative Mimics. In Nigeria, Mali, Côte d’Ivoire, Senegal, and the Gambia, evaluation frameworks respond more to external expectations than internal demand. NEPs and M&E guidelines exist, often developed with donor support, but lack enforcement or sectoral traction. Ministries use evaluation for diplomatic signaling rather than reform. Nigeria’s evaluation system, despite legal backing, remains disconnected from agricultural or fiscal decision-making. Evaluation is marginal in policy debates and rarely informs budget cycles. This pattern mimics global norms without functional integration, consistent with DiMaggio and Powell’s (1983) concept of institutional isomorphism.
Performance Anchored. Countries like Ghana, Ethiopia, Zambia, Egypt, and Mauritius show partial uptake through integration with performance budgeting or planning frameworks. Ghana’s METASIP II and Ethiopia’s Agricultural Growth Program include periodic evaluations, but findings do not inform structural policy change. These systems emphasize technical compliance, often managed by finance or planning ministries, while sectoral ministries lack ownership or strategic guidance. Evaluation functions more as a procedural instrument than a catalyst for innovation.
Procedural Monitors. Tanzania, Mozambique, Malawi, Morocco, Seychelles, and Burkina Faso include evaluation in strategies but fail to operationalize it. Evaluations occur sporadically, often externally driven, and lack follow-up mechanisms. Ministries view evaluation as a reporting task rather than a decision support system. Resource limitations, fragmentation, and unclear mandates undermine uptake. These countries illustrate Booth and Cammack’s (2013) “islands of effectiveness,” where efforts remain isolated from governance systems.
Symbolic Compliers. In Togo, Cameroon, Chad, the Democratic Republic of Congo, Zimbabwe, and Tunisia, legal or policy instruments mention evaluation, but institutional action is absent. M&E units lack staff, resources, or coordination. Evaluations are rare, often disconnected from national strategies, and rarely used. Political transitions, administrative weakness, and fiscal stress contribute to symbolic adoption. Evaluation exists in form but not in function. It operates as a reputational device rather than a mechanism of accountability.
6. Discussion
The discussion reveals that evaluation institutionalization in Africa follows diverse, non-linear trajectories shaped more by political incentives and institutional dynamics than formal policies. While some countries show strong use of evaluation in agriculture and economic growth, most face weak uptake due to fragmented structures and limited strategic integration. The analysis draws from institutional theory and political economy to highlight how legal frameworks alone do not guarantee use. Strategic embedding, sectoral coherence, elite sponsorship, and incentive alignment emerge as key enablers. African experiences challenge linear models from the Global North and suggest a reframing of evaluation as a governance capability. This discussion proposes a context-driven understanding of how evaluation systems evolve, gain traction, and contribute to policy reform and state effectiveness.
6.1. Thematic Discussion: Interpreting Institutionalization Trajectories
Only five countries show strong evaluation use with sector-level application and legal traction. Most others lack political backing or institutional pathways for uptake. Despite formal policies, evaluation remains underused in agriculture and economic growth due to fragmented structures and weak alignment with national planning and decision systems.
The typology underscores that institutionalization extends beyond legal codification or national frameworks. Many African evaluation systems exhibit formal attributes without routine application. This disconnect supports Meyer and Rowan’s (1977) concept of institutional decoupling, where symbolic reforms fail to alter core practices. Traditional models often equate structure with function, yet the African experience reveals a more contingent dynamic. Institutionalization evolves through the interaction of authority, sectoral relevance, and organizational incentives. Countries that succeed embed evaluation in statecraft, not through formal presence alone but through strategic integration. This redefinition shifts focus from static frameworks to the conditions that sustain evaluation within governance systems.
Mahoney and Thelen’s (2010) framework explains divergent institutional trajectories. Rwanda and South Africa embedded evaluation within governance structures and secured political support. Nigeria and Côte d’Ivoire adopted policies without bureaucratic alignment. Tunisia and Chad face stalled implementation due to limited capacity. Evaluation systems follow non-linear paths. States absorb reforms based on political calculus and institutional context. Pritchett et al. (2010) warn that convergence often masks structural gaps. Functionality depends on domestic authority and feedback systems rather than mimicry or formal compliance.
African trajectories reframe evaluation as a governance instrument shaped through institutional dynamics. In Kenya and Uganda, evaluation acquires relevance when situated within policy cycles and reinforced through executive authority. It orients decision processes and influences how institutions pursue performance. This strategic use contrasts with donor models that treat evaluation as a neutral technique. Fischer (1998) argues that evidence always reflects political mediation. The African experience confirms that evaluation achieves influence through institutional authority and alignment with state objectives. Its traction results not from methodological precision but from integration into the core functions of governance.
A major contribution to global evaluation theory lies in elevating political incentives as the core determinant of system uptake. Existing frameworks often associate use with capacity or dissemination. African evidence points to political sponsorship as the decisive factor. Rwanda’s Imihigo and South Africa’s Cabinet reviews reveal how leadership embeds evaluation in governance. Grindle (2007) underscores that reform outcomes reflect alignment between institutional ambition and political authority. In many African contexts, evaluation serves either to reinforce regime legitimacy or to structure elite coordination, depending on institutional anchoring and strategic utility.
The research underscores the centrality of institutional coherence in enabling effective evaluation. Fragmented ties between national systems and sector ministries reduce the credibility and consistency of evidence use. Evaluation fails to shape decisions when ministries operate in isolation or bypass coordination structures. Ostrom (2005) identifies intermediate institutions as vital for linking central mandates with sector realities. Benin and Kenya provide evidence of how focal points, review councils, and inter-ministerial platforms secure policy relevance. Coherence arises through shared routines and iterative engagement, not formal decree. Where alignment occurs, evaluation supports planning processes and reinforces authority within the broader governance framework.
The diversity of institutional paths calls for a contextual theory of evaluation cultures. Standard typologies often impose universal models rooted in OECD or DAC principles. African cases diverge. Uganda and Rwanda reflect adaptive cultures linked to performance reforms. Tunisia and DRC exhibit symbolic systems shaped by external expectations. Evaluation cultures reflect institutional legacies shaped through administrative practice and reinforced through patterns of authority and state engagement. In post-authoritarian regimes, evaluation may serve to display performance. In developmentalist states, it supports planning. In fragile contexts, it often reinforces administrative control. Evaluation systems function within specific epistemic and institutional environments and must be understood through their embedded political and historical context.
The findings challenge assumptions embedded in Global North models and call for theory grounded in African institutional and political realities. Evaluation in Africa does not follow linear transitions. It evolves through negotiation, reinterpretation of norms, and strategic adaptation to local governance demands. Governments reshape evaluation into an instrument of statecraft, aligning it with executive authority and sectoral policy priorities (Peters, 2019). Pritchett et al. (2013) caution against equating institutional form with functional use. Cases such as Rwanda and Uganda demonstrate that political incentives, not technocratic design, sustain evaluation systems. Rather than viewing African systems as incomplete, global theory must acknowledge their endogenous reform processes and their contribution to public sector capability.
6.2. Evaluation as State Capability: Pathways for Africa and Global Theory
6.2.1. Evaluation as a Core State Function
Evaluation, when rooted in institutional frameworks, reinforces the strategic functions of the state. In agriculture and economic growth sectors, its role becomes visible where governments require credible feedback to adjust priorities or respond to shifting conditions. States facing volatility often rely on evaluation to verify performance, test assumptions, and recalibrate institutional action. When embedded into public systems, evaluation no longer reflects donor compliance. It becomes a mechanism for deliberate governance. Evidence drawn from such systems supports both policy adjustment and reform credibility. Evaluation can influence how decisions unfold, how public institutions align with national goals, and how leadership sustains legitimacy.
Institutional designs that integrate evaluation into planning structures strengthen state capacity. As Pritchett et al. (2013) argue, adaptation depends not only on structure but on the space to revisit and reframe decisions. Ba (2021) confirms that states increase their effectiveness when evaluation informs executive strategic decisions. In countries such as Rwanda or Kenya, evaluation informs fiscal review, guides sectoral choices, and supports cross-ministerial coordination. These examples show that institutionalization gains substance when evaluation operates within legitimate structures and connects with political authority. The result is a system that reinforces governance from within rather than replicating external templates.
6.2.2. Institutional Pathways to Sector Integration
Fragmentation between national evaluation bodies and sectoral ministries continues to undermine the effectiveness of evaluation systems in Africa. Ministries responsible for agriculture and infrastructure often operate in silos, disconnected from central planning frameworks. This gap limits coordination and weakens the policy value of evaluative work. Effective institutionalization requires tailored designs that assign clear roles to sectoral actors and embed evaluation within planning routines. Thynne and Peters (2015) emphasize that horizontal coordination across government units supports institutional coherence, especially where overlapping mandates obscure accountability.
Some national systems provide useful lessons. Kenya uses performance contracts to align ministerial outputs with national goals. Uganda institutionalized sector working groups that structure evaluation agendas across key ministries. These mechanisms enable regular oversight and sustain evidence-informed dialogue. Ba (2021) highlights that institutional alignment enhances evaluation’s practical relevance. To improve uptake, governments must link evaluation to flagship initiatives such as agricultural modernization and infrastructure expansion. When embedded in sector strategies, evaluation contributes to timely review and structured feedback. Institutional designs that foster coherence elevate evaluation from procedural formality to a tool for reform navigation and policy adjustment. This shift strengthens governance capacity and improves alignment between state ambition and delivery.
6.2.3. Building Evaluation-Driven Learning
Many African governments have sought to promote an evaluation culture, yet this ambition remains abstract without operational systems that support collective learning. Culture should reflect an institutional ecosystem where actors, incentives, routines, and evidence interact to support adaptation and critical reflection. Learning does not arise through rhetorical commitments but through deliberate processes that revise assumptions and recalibrate strategies. Argyris (2004) and Schön and Argyris (1996) emphasize the value of double-loop learning in strengthening institutional responsiveness. This mode of learning requires mechanisms that interrogate not just outcomes but the logic behind decisions.
Institutional learning ecosystems rely on recurring practices such as sector reviews, inter-ministerial reflection sessions, and structured policy dialogues. South Africa and Ghana have formalized such mechanisms to support iterative reform. Patton (2011) introduced developmental evaluation as a tool that aligns with this approach. It permits institutional flexibility and facilitates sense-making under conditions of uncertainty. Twende Mbele exemplifies how cross-country platforms sustain learning ecosystems. It fosters peer exchange and translates emergent knowledge into practice. Governments that invest in these systems elevate evaluation beyond reporting. They reposition it as a core function for public problem-solving. Learning ecosystems make evaluation politically relevant and strategically embedded within governance processes.
6.2.4. Repositioning Africa in Global Evaluation Theory
Global models often present evaluation systems as moving through standardized stages. African experiences challenge this progression. Institutionalization does not unfold evenly but responds to political incentives, administrative capacity, and historical legacies. In Benin, Nigeria, and Mozambique, evaluation systems have evolved through responses to fiscal constraints or shifting demands. These responses produce hybrid arrangements that do not mirror conventional blueprints but reflect context-driven adjustments.
Theoretical perspectives must evolve. OECD-derived models often overlook local institutional behavior. Olivier de Sardan (2014) stresses that practical norms and informal rules shape African governance. Goldman et al. (2018) highlight contributions from national policies and regional collaboration to global evaluation thinking. One conceptual advance is the “performance–legitimacy–evaluation triangle,” which links evaluation to political credibility. Another is “adaptive embedding,” where systems develop through incremental alignment with planning and budget functions. Ba (2021) demonstrates that evaluation plays a strategic role only when institutions treat it as integral to decision-making. These insights suggest a shift from procedural templates toward frameworks grounded in political economy and sectoral realities. African cases do not replicate external models but redefine evaluation through local ownership and evolving governance logics.
6.2.5. Strategic Coalitions and Incentive Structures
Institutionalization of evaluation systems rarely occurs through decree. It evolves through coalitions of reformers, institutional entrepreneurs, and technical allies who champion evidence-informed governance. Grindle and Thomas (1991) demonstrated that reform sustainability hinges on coalitions that cut across bureaucracies and political elites. Countries such as South Africa and Uganda, which have institutionalized evaluation most effectively, demonstrate strong alliances among planning commissions, finance ministries, and external development partners.
Equally important are the incentives that reinforce evidence use. Performance-based budgeting, ministerial scorecards, and public disclosure norms help establish a motivational architecture that rewards accountability. In Rwanda, the Imihigo system ties ministerial performance to public reporting, while in Kenya, media coverage of results frameworks increases reputational stakes. Such incentive systems enhance transparency and bureaucratic competition, and elevate evaluation as a strategic resource.
Incentives must also align with institutional mission. Where ministries perceive evaluation as punitive or irrelevant, resistance ensues. The articulation of shared objectives, structured feedback processes, and peer recognition mechanisms transforms evaluation from a tool of oversight into an instrument of institutional value creation. This repositioning enhances trust and deepens uptake across technical and political actors.
6.2.6. Reinforcing Economic Transformation Through Evaluation
Agriculture and economic growth sectors remain pivotal in Africa’s development agenda. The institutionalization of evaluation within these domains strengthens transformation by deepening strategic alignment and enabling coordination rooted in evidence. Ba (2021) emphasizes that evaluation must serve not merely as a verification tool but as a driver of policy relevance, sectoral synergies, and economic transformation.
Effective integration allows governments to track input–output linkages across agricultural subsidies, trade infrastructure, and financial incentives. It enables adaptive reallocation where outcomes diverge from targets. Moreover, when evaluation evidence informs cross-sectoral investments such as rural roads, storage, and access to finance, it fosters the conditions for inclusive growth and productivity.
At a strategic level, institutionalizing evaluation enhances development effectiveness through reform guidance and strengthened policy credibility. It empowers public actors with the capacity to learn, adjust, and scale what works. In fragile and low-capacity states, this capability becomes a critical asset for transformation.
7. Conclusions
The institutionalization of evaluation in Africa reflects a dynamic process of contestation, adaptation, and strategic choice, rather than a linear progression toward best practice. From 2010 to 2024, African states have increasingly recognized evaluation as a governance capability central to policy learning, institutional performance, and public accountability, especially in agriculture and economic growth. Yet, this recognition has not always yielded systemic functionality. Many evaluation frameworks remain decoupled from implementation, and political traction varies widely across countries.
This research repositions evaluation from a procedural obligation to a strategic lever of state capability and economic transformation. The analysis reveals that durable institutionalization hinges on sectoral coherence, elite incentives, and the integration of evaluation into national planning and budgeting processes. Countries that have progressed beyond symbolic adoption, such as Uganda, South Africa, and Kenya, demonstrate that context-aware reforms and performance-based incentives can embed evaluation within the fabric of governance.
Institutional theory and political economy perspectives help explain the divergence between formal frameworks and actual use. Legal codification alone is insufficient; it must be accompanied by strategic feedback loops and sector-driven experimentation. Evaluation systems mature not through blueprint replication but through iterative learning, embedded adaptation, and negotiated implementation.
Going forward, repositioning evaluation as a state capability requires African governments and partners to move beyond technocratic fixes. The challenge lies in crafting institutions that not only generate evidence but also cultivate learning ecosystems, reinforce reform credibility, and enable responsive policymaking. In fragile and complex policy environments, robust evaluation systems will be essential for steering national development toward inclusive, accountable, and transformative outcomes.
Abbreviations
AfrEA: African Evaluation Association
DAC: Development Assistance Committee
M&E: Monitoring and Evaluation
NEP: National Evaluation Plan
OECD: Organisation for Economic Co-operation and Development

Author Contributions
Abdourahmane Ba is the sole author. The author read and approved the final manuscript.
Data Availability Statement
The data is available from the author upon reasonable request.
Conflicts of Interest
The author declares no conflicts of interest.
References
[1] North, D. C. (1990). Institutions, institutional change and economic performance. Cambridge University Press.
[2] Peters, B. G. (2019). Institutional theory in political science: The new institutionalism. Edward Elgar Publishing.
[3] Barrett, C. B., Christiaensen, L., Sheahan, M., & Shimeles, A. (2017). On the structural transformation of rural Africa. Journal of African Economies, 26(suppl_1), i11-i35.
[4] Meuleman, L. (2021). Public administration and governance for the SDGs: Navigating between change and stability. Sustainability, 13(11), 5914.
[5] Goldman, I., Byamugisha, A., Gounou, A., Smith, L. R., Ntakumba, S., Lubanga, T.,... & Rot-Munstermann, K. (2018). The emergence of government evaluation systems in Africa: The case of Benin, Uganda and South Africa. African Evaluation Journal, 6(1), 1-11.
[6] Grindle, M. S. (1997). Divergent cultures? When public organizations perform well in developing countries. World Development, 25(4), 481-495.
[7] Andrews, M., Pritchett, L., & Woolcock, M. (2017). Building state capability: Evidence, analysis, action (p. 288). Oxford University Press.
[8] Picciotto, R. (2013). Evaluation independence in organizations. Journal of MultiDisciplinary Evaluation, 9(20), 18-32.
[9] DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147-160.
[10] Ba, A. (2021). How to measure monitoring and evaluation system effectiveness. African Evaluation Journal, 9(1), a553.
[11] Koppell, J. G. (2010). World rule: Accountability, legitimacy, and the design of global governance. University of Chicago Press.
[12] Scott, C. (2001). Analysing regulatory space: fragmented resources and institutional design.
[13] Painter, M., & Peters, B. G. (2010). The analysis of administrative traditions. In Tradition and public administration (pp. 3-16). London: Palgrave Macmillan UK.
[14] Bierschenk, T., & Olivier de Sardan, J. P. (2014). States at work: dynamics of African bureaucracies (p. 456). Brill.
[15] Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structures as myth and ceremony. American Journal of Sociology, 83(2).
[16] Picciotto, R. (2020). From disenchantment to renewal. Evaluation, 26(1), 49-60.
[17] Scartascini, C., & Tommasi, M. (2012). The making of policy: institutionalized or not? American Journal of Political Science, 56(4), 787-801.
[18] Andrews, R. (2013). Representative bureaucracy in the United Kingdom. In Representative bureaucracy in action (pp. 156-167). Edward Elgar Publishing.
[19] Chinsinga, B., & Poulton, C. (2014). Beyond technocratic debates: the significance and transience of political incentives in the Malawi farm input subsidy programme (FISP). Development Policy Review, 32(s2), s123-s150.
[20] Sipoaka, A. L., & Cabral, F. J. (2022). Impact of the RDIA and the building of the Blaise Diagne International Airport on tourism demand and economic growth in Senegal. African Review of Economics and Finance, 14(1), 176-202.
[21] Sambo, U. (2022). Executive immunity clause and its effects on the fight against corruption in Nigeria. African Social Science and Humanities Journal, 3(4), 107-120.
[22] Fischer, F. (1998). Beyond empiricism: policy inquiry in post positivist perspective. Policy Studies Journal, 26(1), 129-146.
[23] Stone, D. (2002). Using knowledge: the dilemmas of ‘bridging research and policy’. Compare: A Journal of Comparative and International Education, 32(3), 285-296.
[24] Danhoundo, G., Nasiri, K., & Wiktorowicz, M. E. (2018). Improving social accountability processes in the health sector in sub-Saharan Africa: a systematic review. BMC Public Health, 18, 1-8.
[25] Gugerty, M. K. (2008). The effectiveness of NGO self‐regulation: theory and evidence from Africa. Public Administration and Development: The International Journal of Management Research and Practice, 28(2), 105-118.
[26] Tusubira, M. K. F. N., & Kasigwa, G. (2020). Ethical Behaviour and Compliance with Donor Reporting Requirements by Non-Governmental Organisations in Uganda: A Proposition.
[27] Riedelbauch, D., Höllerich, N., & Henrich, D. (2023). Benchmarking teamwork of humans and cobots—an overview of metrics, strategies, and tasks. IEEE Access, 11, 43648-43674.
[28] Mogalakwe, M. (2009). The documentary research method–using documentary sources in social research. Eastern Africa Social Science Research Review, 25(1), 43-58.
[29] Cloete, F. (2009). Evidence-based policy analysis in South Africa: Critical assessment of the emerging government-wide monitoring and evaluation system. Journal of Public Administration, 44(2), 293-311.
[30] Chirau, T. J., Blaser-Mapitsa, C., & Amisi, M. M. (2021). Policies for evidence: a comparative analysis of Africa’s national evaluation policy landscape. Evidence & Policy, 17(3), 535-548.
[31] Scott, J. (2014). A matter of record: Documentary sources in social research. John Wiley & Sons.
[32] Andrews, M. (2013). The limits of institutional reform in development: Changing rules for realistic solutions. Cambridge University Press.
[33] Mogalakwe, M. (2006). The use of documentary research methods in social research. African Sociological Review/Revue Africaine de Sociologie, 10(1), 221-230.
[34] Ebrahim, A. (2016). The many faces of nonprofit accountability. The Jossey‐Bass handbook of nonprofit leadership and management, 102-123.
[35] Booth, D., & Cammack, D. (2013). Governance for development in Africa: Solving collective action problems. Bloomsbury Publishing.
[36] Hall, P. A. (1993). Policy paradigms, social learning, and the state: the case of economic policymaking in Britain. Comparative politics, 275-296.
[37] Porter, S., & Goldman, I. (2013). A growing demand for monitoring and evaluation in Africa. African Evaluation Journal, 1(1), 9.
[38] Mahoney, J., & Thelen, K. (2010). A theory of gradual institutional change. Explaining institutional change: Ambiguity, agency, and power, 1(1).
[39] Pritchett, L., Woolcock, M., & Andrews, M. (2010). Capability traps? The mechanisms of persistent implementation failure. Center for Global Development working paper, (234).
[40] Ostrom, E. (2005). Policies that crowd out reciprocity and collective action. Moral sentiments and material interests: The foundations of cooperation in economic life, 253-275.
[41] Pritchett, L., Woolcock, M., & Andrews, M. (2013). Looking like a state: techniques of persistent failure in state capability for implementation. The Journal of Development Studies, 49(1), 1-18.
[42] Thynne, I., & Peters, B. G. (2015). Addressing the present and the future in government and governance: Three approaches to organising public action. Public Administration and Development, 35(2), 73-85.
[43] Argyris, C. (2004). Double‐loop learning and organizational change: facilitating transformational change. Dynamics of organizational change and learning, 389-402.
[44] Schön, D. A., & Argyris, C. (1996). Organizational learning II: Theory, method and practice. Reading, MA: Addison-Wesley.
[45] Patton, M. Q. (2011). Essentials of utilization-focused evaluation. Sage Publications.
[46] Olivier de Sardan, J. P. (2014). La manne, les normes et les soupçons: Les contradictions de l’aide vue d’en bas. Revue Tiers Monde, (3), 197-215.
[47] Grindle, M. S., & Thomas, J. W. (1991). Public choices and policy change. Johns Hopkins University Press.

Author Information
Business Science Institute, Iaelyon School of Management, Lyon, France

Biography: Abdourahmane Ba, Statistician Engineer (ESEA-Dakar) and Doctor of Business Administration (BSI–IAE Lyon 3 Jean Moulin), has over 20 years of experience in public policy, evaluation, MEL systems, and development program management. He has led major programs and studies across Africa. A published researcher, he has authored peer-reviewed articles and books on MEL effectiveness, data quality, and policy evaluation. His expertise combines advanced analytics with institutional insight to inform decision-making, support reform, and advance inclusive development. Dr. Ba is widely recognized for strengthening learning systems and driving evidence-based public policy. He lives in Dakar, Senegal, at Villa 789, Grand Mbao. https://www.linkedin.com/in/dr-abdourahmane-b-83275715a/

Research Fields: Monitoring and evaluation systems, knowledge management and evidence-based decision making, development program evaluations, public policy evaluation, third-party monitoring in constrained settings, data quality management, economic growth, education.
