Article Information

Author:
Mahesh Patel

Affiliation:
AfrEA Board Member, UNICEF Regional Adviser

How to cite this article: Patel, M., 2013, ‘African Evaluation Guidelines’, African Evaluation Journal 1(1), Art. #51, 5 pages. http://dx.doi.org/10.4102/aej.v1i1.51

Copyright Notice:
© 2013. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
African Evaluation Guidelines
Background and History

The African Evaluation Guidelines (AEG) are a checklist of 30 aspects of good-quality evaluation, which can be used to assess and improve the quality of evaluations. They are based on the Programme Evaluation Standards (PES)1 used by the American Evaluation Association (AEA). Many of the items in the PES reflect North American practices and culture, and needed to be adapted to produce a checklist for quality evaluation suited to African conditions and culture.

A committee of 14 African evaluation associations was formed in 1998 to review and adapt the PES. Each of these associations held at least one meeting to discuss the PES and propose adaptations to the secretariat of the African Evaluation Association (AfrEA), based in my office in UNICEF in Nairobi. All the proposed modifications were compiled and then presented to a plenary session of the first AfrEA Conference in Nairobi in 1999. This Conference suggested further modifications and recommended that the AEG be field tested.

From 2000 to 2002, the African Evaluation Guidelines (AEG) were field tested, initially in Zambia and Kenya, and later in eight other countries. A consolidated fourth version of the AEG was presented to a plenary session of the second AfrEA Conference in Nairobi in 2002. This version was adopted by the AfrEA plenary as a ‘working document’ to be continuously developed and improved. It was published in a special issue of the academic journal Evaluation and Program Planning in 2002,2 with co-authorship of the ten networks of evaluators in the field-test countries. A meta-evaluation of 14 evaluations conducted in Africa was included in the same journal issue, showing that work of excellent quality was being produced in Africa.3 The AEG were also translated into French by the Burundi network.

Structure of the AEG

The AEG are structured in four broad sections, essentially the same structure as the Programme Evaluation Standards:

• Utility: the utility guidelines are intended to help to ensure that an evaluation will serve the information needs of intended users and be owned by stakeholders.

• Feasibility: the feasibility guidelines are intended to help to ensure that an evaluation will be realistic, prudent, diplomatic and frugal.

• Propriety: the propriety guidelines are intended to help to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.

• Accuracy: the accuracy guidelines are intended to help to ensure that an evaluation will reveal and convey technically adequate information about the features that determine worth or merit of the programme being evaluated.

Examples of Necessary Modifications

Examples of modifications adopted by the AfrEA are included below with the modifications presented in italics.

Utility Guideline 4: Values Identification
The perspectives, procedures, and rationale used to interpret the findings should be carefully described, so that the bases for value judgments are clear. The possibility of allowing multiple interpretations of findings should be transparently preserved, provided that these interpretations respond to stakeholders’ concerns and needs for utilisation purposes.

Utility Guideline 6: Report timeliness and Dissemination
Significant interim findings and evaluation reports should be disseminated to intended users, so that they can be used in a reasonably timely fashion, to the extent that this is useful, feasible and allowed. Comments and feedback of intended users on interim findings should be taken into consideration prior to the production of the final report.

Feasibility Guideline 2: Political Viability
The evaluation should be planned and conducted with anticipation of the different positions of various interest groups, so that their cooperation may be obtained, and so that possible attempts by any of these groups to curtail evaluation operations or to prejudice or misapply the results can be averted or counteracted to the extent that this is feasible in the given institutional and national situation.

Propriety Guideline 2: Formal Agreements
Obligations of the formal parties to an evaluation (what is to be done, how, by whom, when) should be agreed to through dialogue and in writing, to the extent that this is feasible and appropriate, so that these parties have a common understanding of all the conditions of the agreement and hence are in a position to formally renegotiate it if necessary. Specific attention should be paid to informal and implicit aspects of expectations of all parties to the contract.

Propriety Guideline 4: Human Interaction
Evaluators should respect human dignity and worth in their interactions with other persons associated with an evaluation, so that participants are not threatened or harmed or their cultural or religious values compromised.

Propriety Guideline 6: Disclosure of Findings
The formal parties to an evaluation should ensure that the full set of evaluation findings along with pertinent limitations are made accessible to the persons affected by the evaluation, and any others with expressed legal rights to receive the results as far as possible. The evaluation team and the evaluating institution will determine what is deemed possible, to ensure that the needs for confidentiality of national or governmental entities and of the contracting agents are respected, and that the evaluators are not exposed to potential harm.

Accuracy Guideline 1: Programme Documentation
The programme being evaluated should be described clearly and accurately, so that the programme is clearly identified, with attention paid to personal and verbal communications as well as written records.

Accuracy Guideline 2: Context Analysis
The context in which the programme exists should be examined in enough detail, including political, social, cultural and environmental aspects, so that its likely influences on the programme can be identified and assessed.

Accuracy Guideline 5: Valid Information
The information gathering procedures should be chosen or developed and then implemented so that they will assure that the interpretation arrived at is valid for the intended use. Information that is likely to be susceptible to biased reporting should be checked using a range of methods and from a variety of sources.

1. Joint Committee on Standards for Educational Evaluation, 1994, The Program Evaluation Standards, Sage, Thousand Oaks, CA.

2. Patel, M., 2002, ‘AfrEA Evaluation Networks. The African Evaluation Guidelines’, Evaluation and Program Planning, Special Issue, 25(4), 481–492.

3. Patel, M., 2002, ‘A meta-evaluation, or quality assessment, of the evaluations in this issue, based on the African Evaluation Guidelines’, Evaluation and Program Planning, 25(4), 329–332.

Discussion: How much was changed, and why?

Of the original 30 United States Programme Evaluation Standards (PES), 13 have so far been revised and 17 remain unchanged. Political and cultural considerations emerged as major driving forces behind the necessary modifications to make the AEG as relevant as possible for Africa.

Guidelines on ‘Political Viability’ (see Annex F2) and ‘Disclosure of Findings’ (see Annex P6) were both considered politically sensitive in some African countries – but not necessarily in all. The wording of the guidelines is a compromise between the proposals of countries with relatively open governments, freedom of press and generally participative political processes, and those with relatively autocratic governments or military dictatorships.

Cultural considerations were important in the wording of several guidelines, especially those relating to propriety. ‘Formal Agreements’ (see Annex P2), ‘Rights of Human Participants’ (see Annex P3) and ‘Human Interaction’ (see Annex P4) all required modification, as did the guideline on ‘Defensible Information Sources’ (see Annex A4).

The guideline on ‘Valid Information’ (see Annex A5) was adjusted in consideration of cultural sensitivities relating to male–female interactions and to queries on topics such as sexual behaviour.

Another example is the guideline on ‘Stakeholder Identification’ (see Annex U1), which was extended to pay explicit attention to the sometimes-ignored beneficiaries at community level.

Finally, the English language itself is often considered to contain implicit cultural concepts and assumptions. The African Evaluation Guidelines have been translated into French in Burundi. It is anticipated that further translations into local languages will yield additional insights and modifications.

Conclusions and next steps

The current version of the AEG dates back to 2002. The development process over the period 1998 to 2002 involved a large number of evaluators all over Africa, as well as a formal process mediated by national evaluation networks and the AfrEA Conference. The period 2002 to 2013 has seen huge changes in the development profile of Africa, and even greater advances in the professionalism of African evaluators.

It seems likely that, if revisited in our current context, a number of further modifications of the 2002 version of the AEG would be required. Now that we have the African Evaluation Journal, and now that the AfrEA Conferences seem to have become a regular event, we have both a forum for discussing modifications, and a vehicle for communicating them across the continent. My proposal, to the membership of the AfrEA, the rest of the AfrEA Board and the editors of this journal, is that we embark on a process of updating the AEG, with formal involvement of national associations, mediated by the AfrEA Conference, with progress reports communicated in this journal.

Annex 1: The African Evaluation Guidelines: 2002

The AEG checklist should be used to assist in planning evaluations, negotiating clear contracts, reviewing progress and ensuring adequate completion of an evaluation.

Utility
The utility guidelines are intended to ensure that an evaluation will serve the information needs of intended users and be owned by stakeholders.

U1 Stakeholder identification (modified)
Persons and organisations involved in or affected by the evaluation (with special attention to beneficiaries at community level) should be identified and included in the evaluation process, so that their needs can be addressed and so that the evaluation findings are utilisable and owned by stakeholders, to the extent this is useful, feasible and allowed.

U2 Evaluator credibility
The persons conducting the evaluation should be both trustworthy and competent to perform the evaluation, so that the evaluation findings achieve maximum credibility and acceptance.

U3 Information scope and selection
Information collected should be broadly selected to address pertinent questions about the programme and be responsive to the needs and interests of clients and other specified stakeholders.

U4 Values identification (modified)
The perspectives, procedures, and rationale used to interpret the findings should be carefully described, so that the bases for value judgments are clear. The possibility of allowing multiple interpretations of findings should be transparently preserved, provided that these interpretations respond to stakeholders’ concerns and needs for utilisation purposes.

U5 Report clarity
Evaluation reports should clearly describe the programme being evaluated, including its context, and the purposes, procedures and findings of the evaluation, so that essential information is provided and easily understood.

U6 Report timeliness and dissemination (modified)
Significant interim findings and evaluation reports should be disseminated to intended users, so that they can be used in a reasonably timely fashion, to the extent that this is useful, feasible and allowed. Comments and feedback of intended users on interim findings should be taken into consideration prior to the production of the final report.

U7 Evaluation impact
Evaluations should be planned, conducted and reported in ways that encourage follow through by stakeholders, so that the likelihood that the evaluation will be used is increased.

Feasibility
The feasibility guidelines are intended to ensure that an evaluation will be realistic, prudent, diplomatic, and frugal.

F1 Practical procedures
The evaluation procedures should be practical to keep disruption to a minimum while needed information is obtained.

F2 Political viability (modified)
The evaluation should be planned and conducted with anticipation of the different positions of various interest groups, so that their cooperation may be obtained, and so that possible attempts by any of these groups to curtail evaluation operations or to prejudice or misapply the results can be averted or counteracted to the extent that this is feasible in the given institutional and national situation.

F3 Cost effectiveness (modified)
The evaluation should be efficient and produce information of sufficient value, so that the resources expended can be justified. It should keep within its budget and account for its own expenditures.

Propriety
The propriety guidelines are intended to ensure that an evaluation will be conducted legally, ethically and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.

P1 Service orientation
Evaluation should be designed to assist organisations to address and effectively serve the needs of the full range of targeted participants.

P2 Formal agreements (modified)
Obligations of the formal parties to an evaluation (what is to be done, how, by whom, when) should be agreed to through dialogue and in writing, to the extent that this is feasible and appropriate, so that these parties have a common understanding of all the conditions of the agreement and hence are in a position to formally renegotiate it if necessary. Specific attention should be paid to informal and implicit aspects of expectations of all parties to the contract.

P3 Rights of human participants (modified)
Evaluation should be designed and conducted to respect and protect the rights and welfare of human subjects and the communities of which they are members. The confidentiality of personal information collected from various sources must be strictly protected.

P4 Human interaction (modified)
Evaluators should respect human dignity and worth in their interactions with other persons associated with an evaluation, so that participants are not threatened or harmed or their cultural or religious values compromised.

P5 Complete and fair assessment
The evaluation should be complete and fair in its examination and recording of strengths and weaknesses of the programme being evaluated, so that strengths can be built upon and problem areas addressed.

P6 Disclosure of findings (modified)
The formal parties to an evaluation should ensure that the full set of evaluation findings along with pertinent limitations are made accessible to the persons affected by the evaluation, and any others with expressed legal rights to receive the results as far as possible. The evaluation team and the evaluating institution will determine what is deemed possible, to ensure that the needs for confidentiality of national or governmental entities and of the contracting agents are respected, and that the evaluators are not exposed to potential harm.

P7 Conflict of interest
Conflict of interest should be dealt with openly and honestly, so that it does not compromise the evaluation processes and results.

P8 Fiscal responsibility
The evaluator’s allocation and expenditure of resources should reflect sound accountability procedures and otherwise be prudent and ethically responsible, so that expenditures are accounted for and appropriate.

Accuracy
The accuracy guidelines are intended to ensure that an evaluation will reveal and convey technically adequate information about the features that determine the worth or merit of the programme being evaluated.

A1 Programme documentation (modified)
The programme being evaluated should be described clearly and accurately, so that the programme is clearly identified, with attention paid to personal and verbal communications as well as written records.

A2 Context analysis (modified)
The context in which the programme exists should be examined in enough detail, including political, social, cultural and environmental aspects, so that its likely influences on the programme can be identified and assessed.

A3 Described purposes and procedures
The purposes and procedures of the evaluation should be monitored and described in enough detail, so that they can be identified and assessed.

A4 Defensible information sources (modified)
The sources of information used in a programme evaluation should be described in enough detail, so that the adequacy of the information can be assessed, without compromising any necessary anonymity or cultural or individual sensitivities of respondents.

A5 Valid information (modified)
The information gathering procedures should be chosen or developed and then implemented so that they will assure that the interpretation arrived at is valid for the intended use. Information that is likely to be susceptible to biased reporting should be checked using a range of methods and from a variety of sources.

A6 Reliable information
The information gathering procedures should be chosen or developed and then implemented so that they will assure that the information obtained is sufficiently reliable for the intended use.

A7 Systematic information
The information collected, processed and reported in an evaluation should be systematically reviewed and any errors found should be corrected.

A8 Analysis of quantitative information
Quantitative information in an evaluation should be appropriately and systematically analysed so that evaluation questions are effectively answered.

A9 Analysis of qualitative information
Qualitative information in an evaluation should be appropriately and systematically analysed so that evaluation questions are effectively answered.

A10 Justified conclusions
The conclusions reached in an evaluation should be explicitly justified, so that stakeholders can assess them.

A11 Impartial reporting
Reporting procedures should guard against distortion caused by personal feelings and biases of any party to the evaluation, so that evaluation reports fairly reflect the evaluation findings.

A12 Meta-evaluation
The evaluation itself should be evaluated in a formative and summative manner against these and other pertinent guidelines, so that its conduct is appropriately guided and, on completion, stakeholders can closely examine its strengths and weaknesses.

