At the end of the planning and implementation process of each project, it is necessary to evaluate the overall impact of the intervention in order to assess its degree of success or failure. The final, or summative, evaluation serves this purpose: it measures the level of intended change brought about as a result of the project activities.
Once all the activities of the communication programme have been implemented, there is the need to see what their practical effect has been in the field. What change was brought about by the communication strategy? The aim of the final evaluation is to measure the impact of the project intervention in relation to the set objectives. This differs from monitoring in that an evaluation is conducted mainly to measure the final results of the entire process, rather than the progress of the project. A project that does not properly evaluate the results of its activities is of little use when it comes to replicating the experience elsewhere. Evaluation is useful not only for assessing how well the strategy worked but also for assessing how it has benefited the community. It is a valuable instrument for assessing the effectiveness of the strategy implemented only if that strategy can eventually be improved, adapted and utilised in other projects and programmes.
In this handbook the summative evaluation has been divided into two types, as was done for monitoring: quantitative and qualitative. The former is concerned with objective, verifiable measurement related to the project objectives. The latter measures the degree of success of the project activities as perceived by the community. The two should ideally coincide, but this is not always the case. If there are sharp differences between the two evaluations, you should investigate why. If the quantitative evaluation, in the form of a baseline study, shows that the project successfully reached its objectives while the participatory assessment indicates that people do not perceive any benefit from the project, you need to look into the matter. There could be a number of reasons for the disparity, e.g., the objectives of the project were not the right ones for the expected solutions, or insiders and outsiders perceived the problems in radically different ways. Whatever the reason, the final evaluation should give you a comprehensive and consistent picture of the results of the project intervention.
Change cannot be measured in absolute terms. That is to say, if you want to measure accurately the impact of your project, you need to measure the situation before and after your intervention. If you want to know how far you have walked, you need to know where you started: the difference between the point of arrival and the starting point gives you the distance covered. Similarly, in your communication programme you need to measure the level of awareness or knowledge before implementing the strategy. After having implemented the activities of your communication strategy, you measure the level of awareness or knowledge again. The difference between the two levels will give you a clear indication of the degree of change brought about by the communication activities (assuming there are no significant external factors).
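The before-and-after comparison described above can be expressed as a simple difference. The figures below are purely illustrative, not drawn from any actual survey:

```latex
\[
\Delta = L_{\text{after}} - L_{\text{before}}
\]
% For example, if a baseline survey finds that 32\% of respondents are aware
% of a recommended practice, and the post-implementation survey finds 68\%,
% then the measured change is
\[
\Delta = 68\% - 32\% = 36 \text{ percentage points}
\]
```

This difference is attributable to the communication activities only to the extent that no significant external factors intervened between the two measurements.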
By now it should be clear that in order to assess the degree of change brought about by the communication intervention, you need to have a starting point against which to measure any eventual change. The baseline survey mentioned at the beginning fulfils this function. As the word baseline suggests, it provides objectively verifiable data necessary to show the quantitative dimension of the problem to be addressed, thus providing the needed term of reference. Traditionally baseline surveys are conducted before any other activity of the programme has started in order not to bias the results.
In the Action Programme, however, the baseline survey takes place after the PRCA. This innovation has been adopted because the area measured by a baseline conducted beforehand very often differs from the priority areas identified with the community. A baseline survey carried out before a PRCA would, for instance, try to measure the AKAP on building VIP latrines when in actual fact the real problem was that people did not see the need to have VIP latrines. In such a case the baseline should really be measuring factors affecting the AKAP concerning health and hygiene. The baseline is therefore more useful after the PRCA, even at the risk of some data contamination. In this way the baseline is more likely to measure exactly the priority areas of specific relevance. Furthermore, it can also be used to validate and confirm the PRCA findings, besides quantifying them.
In chapter 5 of the PRCA handbook there is a guide on how to design a baseline survey. At this point in the strategy you should remember that you have to evaluate the impact of the project activities through a post-implementation baseline survey compatible with the baseline carried out during the field research. Even though the baseline is only part of the overall summative evaluation (Participatory Impact Assessment is the other major component), it is a very important part, since project management, donors and international organisations are usually very sensitive to accountable, sound figures. The baseline survey should provide scientific, tangible and verifiable hard evidence showing that the communication intervention has brought about some significant improvement.
Quantitative evaluation may be objective and scientific, but in some cases it may overlook the most important issue in development: the human factor. The degree of satisfaction of the community is just as important as the rate of adoption of a certain innovation, even if it is not as easy to assess. Participatory Impact Assessment (PIA) is meant to measure the community's perceptions of the results of the communication intervention and its degree of satisfaction. PIA, unlike the baseline survey, is not concerned with measuring objective scientific results, but with the impact of the project as perceived by the community. Ideally the two should be consistent with each other.
The impact assessment is carried out through a series of participatory techniques and tools similar to those used in the PRCA. In evaluating the project impact you have to make sure that the community identifies the indicators for the problems to be addressed (usually originating from the problem tree) in advance, jointly with the project staff. In this way you can be sure that the objectives are appropriate and relevant for insiders and outsiders, i.e., the community and the project staff. Using participatory techniques and tools, your team and the community have to go through the following steps:
Based on the above questions, you could also make a plan to ensure that the evaluation activities are carried out properly. The purpose of PIA is to make sure that the evaluation is not a theoretical exercise for a few experts but a comprehensive measurement that includes the community's perceptions and concerns. Once the "what are you going to evaluate" has been defined, you and the other members of the evaluation team need to decide how. Go to the PRCA toolbox in the PRCA Handbook and use the most appropriate techniques and tools designed to involve people in the whole process, from choosing appropriate indicators to assessing the final result.
Once the quantitative and qualitative evaluations have been carried out, the results should be combined into a comprehensive study assessing the results and the change brought about by the communication intervention. The major point you should keep in mind when you present the results of the evaluation is to show the direct benefit or improvement brought about by communication. If you write a specific report on the evaluation of the communication component (or even of the project) you could follow a number of formats. The one usually used in the Action Programme is divided into six major areas, as outlined below:
Needless to say, you can adopt whatever format you feel confident with when presenting the findings of the evaluation. The important thing to remember is that you must always consider who your audience is. When you present findings, consider the most important points you want to put across. Here again the SAF, as given below, can assist you in organising your findings.
Worksheet 4
SAF in the Evaluation of Communication Impact.
Topics/Results to be Measured | Indicators | Means of Verification (for each indicator) | External Factors
Quantitative Evaluation (of the Impact in relation to the Objectives) | | |
Participatory Evaluation (of the Impact in relation to the Objectives) | | |
Relevant Inputs | | |
The project and the communication objectives are not the only elements that can be evaluated. You might also be interested in evaluating a specific technology, social processes or the level of participation enjoyed by the project, even if these may not be directly considered in your objectives. You should therefore also be aware that the evaluation, even if it is done at the end of the whole process, is not necessarily the very last activity of the project. Based on its findings and recommendations, the project could be extended in order to take corrective measures to further improve the final outcome. Further corrections, modifications or adaptations suggested in the evaluation could be considered in order to improve the effectiveness of the strategy implemented when, and if, a similar project is to be replicated in other circumstances.