Quality Assessment of Evaluation Reports
In a decentralized, use-oriented evaluation system there is a natural tension between evaluation’s role in supporting program learning and its role in demonstrating accountability for achieving results. To maintain the balance between learning and accountability in such a system, it is neither feasible nor desirable to standardize quality by creating a single format to which all evaluation reports must adhere. Instead, the Evaluation Unit judges the quality of an evaluation report by the degree to which it demonstrates that the evaluation has fulfilled the purpose for which it was conducted, using four internationally recognized program evaluation standards: utility, feasibility, propriety, and accuracy.
How is Quality Assessed?
According to the African Evaluation Association, these four quality enhancement standards are intended to help ensure that an evaluation will:
serve the information needs of intended users and be owned by stakeholders (utility);
be realistic, prudent, diplomatic, and frugal (feasibility);
be conducted legally, ethically and with due regard to the welfare of those involved in the evaluation as well as those affected by its results (propriety); and,
reveal and convey technically adequate information about the features that determine worth or merit of the program being evaluated (accuracy).
For more information on the program evaluation standards, see http://www.afrea.org/keydoc.htm, http://www.eval.org/EvaluationDocuments/progeval.html, and http://www.wmich.edu/evalctr/jc/
Table 1. Questions Guiding the Quality Review of Evaluation Reports

1. UTILITY
1.1 Were the users identified? (Yes / No)
    Who were the identified users? Comments?
1.2 Were the uses identified? (Yes / No)
    What was the planned use? Comments?
1.3 Did the report describe how users participated in the evaluation process? (Yes / No)
    How did users participate? Comments?

2. FEASIBILITY
2.1 Were the evaluation issues/questions identified? (Yes / No)
    What were the evaluation issues? Comments?
2.2 Given what could have been done in the evaluation, was the design of the evaluation adequate to address those issues/questions (e.g. resources allotted, timing, perspectives represented, information sources consulted)? (Yes / No / Insufficient detail to assess)
    If no, in what way was the design inadequate? Comments?

3. ACCURACY
3.1 Given what was actually done in the evaluation, did the evaluation use appropriate tools and methods? (Yes / No / Insufficient detail to assess)
    If no, in what ways were the tools and methods inappropriate? Comments?
3.2 Did it apply the tools and methods well? (Yes / No / Insufficient detail to assess)
    If no, how were the tools and methods inappropriately applied? Comments?
3.3 Is the evidence presented in the report? (Yes / No)
    Comments?
3.4 Overall, does the evidence substantiate the conclusions/recommendations? (Yes / No)
    Comments?

4. PROPRIETY
4.1 Was there an expressed intent to enhance the evaluative capacity of the user(s) of the evaluation as a result of this evaluation? (Yes / No)
    What was the intent? What was the result? Comments?
4.2 Was there an expressed intent to enhance the evaluative capacity of those being evaluated as a result of this evaluation? (Yes / No)
    What was the intent? What was the result? Comments?
4.3 Did any of the content of the evaluation report raise ethical concerns? (Yes / No)
    If yes, what are those concerns? Comments?
4.4 Was this evaluation a part of the PI, Secretariat, or Corporate Project’s evaluation plan? (Yes / No)
    Why? Why not?
The reviewer of an evaluation is guided by two sets of related questions designed to elicit information about each of the four dimensions of evaluation quality (see Table 1 above). One set of questions asks for a yes or no response as to whether the report contains elements considered essential parts of a good evaluation. A corresponding set of questions directs the reviewer to consider and record precisely how those elements are addressed in the report.
These two complementary sets of questions generate two different kinds of data about a particular report: the first set records the presence or absence of elements essential to a quality evaluation; the second directs the reviewer to consider carefully the reasons for a yes or no answer, generating descriptive information about how those elements are, or are not, addressed in a given report. Together, they provide data that are useful for identifying and analysing issues that may be affecting the quality of evaluation throughout the Centre.
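The pairing described above could be captured as simple structured review data. The sketch below is purely illustrative (the class names `ReviewItem` and `QualityReview` and the sample answers are hypothetical, not part of the Evaluation Unit's actual tooling): each item pairs a yes/no (or "insufficient detail") judgement with the reviewer's descriptive comment, so that tallies and descriptive analysis can be drawn from the same record.

```python
# Hypothetical sketch: recording the paired question sets from Table 1.
# Each ReviewItem holds a binary judgement plus the descriptive follow-up.
from dataclasses import dataclass, field


@dataclass
class ReviewItem:
    question: str   # e.g. "1.1 Were the users identified?"
    answer: str     # "yes", "no", or "insufficient detail"
    comment: str = ""  # descriptive information: how the element is addressed


@dataclass
class QualityReview:
    report_title: str
    items: list = field(default_factory=list)

    def summary(self):
        """Tally the binary answers across all review items."""
        counts = {}
        for item in self.items:
            counts[item.answer] = counts.get(item.answer, 0) + 1
        return counts


review = QualityReview("Example evaluation report")
review.items.append(ReviewItem(
    "1.1 Were the users identified?", "yes",
    "Programme staff named as the primary users."))
review.items.append(ReviewItem(
    "3.3 Is the evidence presented in the report?", "no",
    "Findings asserted without supporting data."))
print(review.summary())  # -> {'yes': 1, 'no': 1}
```

Keeping the judgement and the comment in one record mirrors how the two question sets are meant to be read together: the tally shows where reports fall short, and the comments explain why.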
How is Information about the Quality of Evaluation Used?
To provide feedback to programming units on their evaluation activities
In the Evaluation Unit’s annual reporting to ...
What other information about evaluation reports does the Evaluation Unit collect?
Other resources
DAC: evaluation standards