Outcome/impact evaluation: Impact Assessment
How do we Evaluate Programmes?
See also the definition of impact, How shall we have an impact?, and examples of impact-level indicators.
A good programme will have defined an expected impact and the modality to achieve it. An impact assessment examines whether the programme logic is (or was) sound, how far that impact was achieved, and whether the project/programme activities and their results have generated other unforeseen consequences producing a positive or negative impact.
Definition of impact assessment
· Assessing the changes that have occurred in the lives of the intended beneficiaries, and the different forces and influences that have contributed to bringing about these changes. These may be project-related or wider forces and influences. Impact on other people should also be considered. The changes occurring may be positive or negative, intended or unintended. The impact may differ for women and men, people of different ages, different ethnic groups and other social groupings, and the analysis should consider these groups separately.
· Environmental impact - an assessment of how the natural environment and resources have been affected (both positively and negatively) as a result of the project or programme intervention.
Link between Evaluation Criteria and the Logframe
In programme cycle management, evaluation is the last stage of each ending cycle and the first one of each new cycle.
Impact: judges the effect of the project on the wider context (environment) and its contribution to sector policies and to other programming activities (as defined in the overall objectives).
· What changes has the project produced?
· What further changes have occurred after the end of the project as a consequence of the results it achieved?
· etc.
Relevance: the conformity of the project objectives to the problems it sets out to solve, the pertinence of the project to sector policies, and its suitability to the environment in which the project is carried out.
An Impact Assessment evaluates programme effectiveness in terms of end-results and compares them with what the programme expected to achieve, so there must be an ex-ante assessment of the "wish list" and of the logical soundness of expectations before the ex-post assessment is conducted. The Impact Assessment report must include:
Report scope and information needs of readers. What is this report meant for; who are the expected readers (primary and secondary targets); what are their information needs; how were these needs assessed; how have the report editors tried to satisfy these needs; which information needs has the report succeeded in satisfying, and why.
Defining impact. Definition of what an "impact" is and how it correlates to the outcomes and results of project activities; attribution of the chain of primary responsibilities within the organization structure to ensure that the chain of results leads from project outputs to outcomes and impacts.
Information sources. Who are the editors of this report; how they collected and edited the information; how data was collected and organized.
Standardising indicators. Description of the reasons why the organization is trying to define a common set of impact indicators and reporting templates to be used throughout the organization's programme cycle management activities in the various countries and sectors.
Ex-ante projections: programme expectations. What programmes were implemented during the time frame considered in this report; what impact was the organization expected to achieve through them; how were projects supposed to be designed/chosen within the programme frameworks; what was the scale of activities that programme designers were expected to mobilize (in terms of countries, number of projects, number of internal stakeholders, number of direct beneficiaries, financial resources raised and utilized).
Ex-post evaluations
The report can then describe what was done and achieved, comparing it with what was expected:
Indicators identified and targets achieved. Analysis of the presence or absence of impact indicators in the regional/country programmes, and of how far it was possible to combine the various programme evaluations into a single coordinated impact assessment;
Impacts achieved. What was achieved with the programmes compared with what we were planning to achieve; how much of the expected impact was achieved; what other changes have the programmes brought about in poor people's lives, knowledge, attitudes, behaviours and practices? To what extent have these programmes been given the possibility of alliances and partnerships that were coherent with the organization's mandate? Which products/services delivered by projects have demonstrated stronger links in contributing to outcomes and impacts, and what are these strong links dependent upon? Which activities, even if performed well and having produced the expected outputs, were not successful in contributing to outcomes and impacts?
Lessons learnt. Have the programmes been evaluated in such a manner as to reveal lessons learnt concerning the efficacy and efficiency of project/programme management? How are these lessons learnt used? Are they leading to corrective measures? Are they leading to training and employee development actions? How is the organization organized so that the tools and other knowledge resources produced in one region can be made available to other regions, so as to capitalize on and share the knowledge produced through lessons learnt? Do lessons learnt reveal the functional links between beneficiary needs and the organization's responses; between target group rights and programme impact; between actions done and project deliverables; between project deliverables and expected programme outcomes; and between programme outcomes and the expected impact? Are we learning how to measure each of these categories of objectives, and are we able to improve the measurement of the factors correlating them?
Collection and utilisation of information. Is the collection and utilization of such resources shared with the partners? What is the role of the organization's partners in contributing to achieving the expected impact? Does the organization publicly recognise their specific role? Do the partners feel gratified by the way the organization proposes and manages its partnership with them?
How can Impact be measured?
This is an issue that belongs more to "planning" than to "reporting". A plan of programmes meant to produce an expected impact should specify what the "need" or the "demand" for that impact is; so it should state how it measured (or perhaps better, "perceived" or "acknowledged") the demand for such an impact. When evaluating the impact achieved, the best approach is always to replicate the process by which the need was documented ex ante and see whether there are changes ex post. Planners should also identify the indicators ex ante, so that implementers at country level can monitor accordingly and the editors of this report can sum up the various country evaluations into a coherent whole. Through lessons learnt, however, evaluators can verify the adequacy of the chosen indicators and suggest new indicators to planners.
Planners, while setting the expected impact, should consider ex ante the need for ex-post evaluations. Quality of life cannot be measured directly in quantitative terms; that is why we use "indicators". These indicators should be:
logically related to the expected quality;
quantifiable;
measurable;
related to factors that the organization's action can influence.
For instance, the number of people trained and the days of training do not really say how much the community benefited from that training, but they do say a lot about the capacity of the programme to generate training activities.
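To make the ex-ante/ex-post comparison concrete, the sketch below (Python, with hypothetical indicator names and placeholder figures) shows one way to replicate the ex-ante measurement ex post and express the change for each indicator. It is an illustration of the principle only, not a prescribed tool.

```python
# Illustrative sketch only: indicator names and figures are hypothetical placeholders.
# The idea is to repeat ex post the same measurement made ex ante and
# express the change for each indicator.

ex_ante = {  # measurements taken when the need/demand was documented
    "households_with_safe_water_pct": 42.0,
    "children_completing_primary_school_pct": 61.0,
}

ex_post = {  # the same measurements repeated after the programme
    "households_with_safe_water_pct": 58.0,
    "children_completing_primary_school_pct": 66.0,
}

def change_report(before: dict, after: dict) -> dict:
    """Return absolute and relative change for every indicator measured both times."""
    report = {}
    for name, baseline in before.items():
        if name not in after:
            continue  # indicator dropped ex post: worth flagging as a lesson learnt
        endline = after[name]
        report[name] = {
            "baseline": baseline,
            "endline": endline,
            "absolute_change": endline - baseline,
            "relative_change_pct": (endline - baseline) / baseline * 100 if baseline else None,
        }
    return report

for name, row in change_report(ex_ante, ex_post).items():
    print(name, row)
```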
In the case of impact one needs to be careful, because the relationship between the organization's programme management and the impacts achieved is contributive, not attributive. Many external stakeholders take an active role in producing these impacts, often in a much more strategic manner than the organization. So, in the case of impact, we should look at how "influential" or "educative" the projects were in improving the way the various stakeholders cooperate in reducing the ignorance and conflicts that generate poverty and exclusion.
Besides the efficacy indicators related to project outputs and programme outcomes, more specific impact indicators may include:
Knowledge resources produced, shared, used;
Number of persons empowered in development programme management (including internal stakeholders belonging to the target region);
Development networks, round tables, committees, etc. generated or participated in that deal with the areas tackled by strategic change objectives;
Contribution given (directly or through the created networks) to policy design, implementation and evaluation;
Contribution given to collecting data and other forms of documentation regarding the needs and rights of target populations;
Contribution given to the empowerment of partners in designing, implementing and evaluating projects and programmes in the focus areas targeted by the programmes.
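One way to support the standardisation described above (a common set of impact indicators and reporting templates across countries and sectors) is to agree on a shared record format for each indicator, so that country evaluations can be aggregated centrally. The Python sketch below is a hypothetical illustration; the field names and the aggregation rule are assumptions, not an existing organizational template.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical record format for one indicator reported by one country programme.
# Field names are illustrative assumptions, not an existing organizational template.
@dataclass
class IndicatorRecord:
    programme: str             # country or regional programme reporting the value
    name: str                  # e.g. "knowledge resources produced and shared"
    level: str                 # "process", "outcome" or "impact"
    unit: str                  # unit of measure, so values can be aggregated
    baseline: Optional[float]  # ex-ante value, if documented
    target: Optional[float]    # ex-ante expectation
    achieved: Optional[float]  # ex-post value
    attribution: str = "contribution"  # impacts are usually contributive, not attributive

def aggregate(records: List[IndicatorRecord], name: str) -> float:
    """Sum achieved values for one indicator across programmes (same name and unit assumed)."""
    return sum(r.achieved or 0.0 for r in records if r.name == name)
```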
a good effort to tackle the thorny issue of “how to evaluate the impacts”
a critical assessment of programmes
through case stories, it gives a living image of what happens on the ground
it is descriptive and brings across the voices of the poor
it makes a good effort to demonstrate how specific on-the-ground programmes have contributed to advocacy
describes the various components of advocacy activities conducted by the organization
sincere in making an effort to reveal some issues and linkages in why things did or didn’t work
communicates the “star” programmes- those which have been successful
it is based on beneficiary feedback and reveals their point of view
tries to put forward both the perspective of external stakeholders and the point of view of the organization teams
statement of the report scope could be clearer
does not reveal sufficient information on the authorship and provenance of the documents on which the report was based
the definition of how "impact" is conceived, and how it relates to what already exists, could be better explained
the boundaries that separate objectives, outcomes, outputs and results are blurred
it communicates the value of the work done by the organization, but it sounds self-justificatory and leaves a doubt about the distinction between those who evaluated and those whose work was evaluated
it does not compare situations before and after programme implementation
the information needs of the reader could have been better analysed and considered
it is more "revealing" of "who we are" and "what we have done" than communicating what our partnership with donors, sponsors, etc. has achieved
it could have focused better on the follow up of lessons learnt from previous impact reports
it tries to show an attribution relationship where the organization has probably only contributed to an impact
lack of quantitative information across programmes to measure results achieved.
data is not comparable across programmes because the change parameters and the evaluation questions asked differ across programmes, and within single programmes under an SCO (strategic change objective)
data is not comparable with what is available beyond programmes
it could have captured better the extent and depth of change that has happened in people’s lives and how these people have acted as catalysts in the lives of the wider community
there is no mention of indicators against which impact has been evaluated – to give quantitative evidence of qualitative changes
reveals a variety of good work done, but is unable to document the efforts made towards inter-linkages and contributions between SCOs
it does not collect and communicate products and tools that others can use, which would also help others achieve effectiveness and efficiency
it does not produce templates, guidelines, checklists, etc. that can be used by others in facilitating and improving programmes.
In depth. Operational objectives and outcome objectives: a fundamental distinction for institutional evaluation.
In planning we distinguish two classes of objectives:
1. Operational objectives: they indicate the quality to be achieved in the organization of the service.
2. Outcome objectives: they indicate the goals to be achieved through the service.
With respect to these two classes of objectives, planning plays two different roles:
1. With respect to operational objectives, the planner plays a guiding role: he or she decides the strategies and provides management guidance; the planner addresses entities that receive this guidance, interpret it and treat it as a model to which to conform their own operations.
2. With respect to outcome objectives, planning plays an instrumental role: the planner tries to conform to a set of indications received so as to create value for the beneficiaries; in this respect the planner seeks to "interpret", not to "direct", the needs of the beneficiaries.
The two classes of objectives require two different types of evaluation:
1. To verify the achievement of operational objectives, one must rely on precise management parameters, anchored in operations, which give precise information about the degree of efficiency and effectiveness of the processes and make it possible to identify the factors responsible for any failure to achieve operational objectives (inefficiency/ineffectiveness of the operational structures, or lack of realism in the definition of the planned objectives).
2. To verify the achievement of outcome objectives, an impact evaluation is needed: the results obtained must be analysed together with the "beneficiaries" of the action and with the other stakeholders, i.e. those who are directly or indirectly affected by the impact of the changes brought about by the achievement or non-achievement of the objectives.
Evaluation in business planning. In the activities of for-profit companies, the beneficiaries of planning are typically the shareholders, and the benefit provided can be quantified objectively against financial parameters, which can be derived from a comparative reading of the management accounts and of the company's share value.
Evaluation in institutional planning. Management control in public and private organizations can be based on similar models; the fundamental difference concerns instead the evaluation of outcomes. In the activities of bodies aimed at the common good, the evaluation of outcomes cannot be reduced to an objective financial parameter, because the citizen is at the same time a "shareholder" of the body and a "client" of the service. In this case cost/benefit data must be cross-referenced: costs can be quantified numerically, but benefits must be evaluated in terms of "health". To verify the impact of a health policy it is therefore necessary to have a system able to:
• provide an analysis of "health indicators",
• show the state of "health resources",
• provide a snapshot of the community's "health capital".
This "snapshot" is needed by planners to interpret the needs of the beneficiaries and to reformulate the operational objectives of the health system on the basis of the community's health objectives.
See also Subsidiarity
EU impact assessment guidelines
Learning Lessons, Asian Development Bank
To evaluate a programme, that is, to say what value it has had for the target communities, we need to be able to produce sound evidence of exactly what impact the programme has had on their lives. This may be the impact we planned it would have, but there may be other impacts, positive or negative.
The term commonly used to talk about this evidence is "impact indicators". These are signs (facts, figures, statistics, collections of personal testimonies etc) that indicate what effect, or impact the programme is having.
However, a common mistake is to measure only whether the planned activities took place, or their immediate outcomes. But measuring activities actually only indicates what processes we have used to try to achieve a longer term impact - these are process indicators. They don't tell us if the programme has had the desired impact.
Activities have outcomes, which may or may not lead to the desired impact. For example, an outcome of activities like training community members in how to reduce the risk of becoming infected with HIV may be that people start buying more condoms. The sales figures for condoms in the area, and people's reports in surveys and questionnaires that they are buying and using them, indicate that the activities have had this outcome - these are outcome indicators. They don't tell us if the programme has had the desired impact either.
The impact is only achieved if the condoms are used correctly and consistently and this leads to a measurable reduction in HIV infections. Statistics at STI clinics, surveys and questionnaires may provide the evidence that will indicate to what extent this impact has been achieved - these are impact indicators.
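The distinction between the three levels can be summarised in a small sketch. The Python example below uses purely hypothetical figures to label each piece of evidence from the HIV example as a process, outcome or impact indicator, and to compute the change between baseline and endline.

```python
# Illustrative sketch of the three indicator levels from the HIV example above.
# All figures are hypothetical placeholders, not real survey data.

process_indicators = {
    "training_sessions_held": 24,        # did the planned activities take place?
    "community_members_trained": 600,
}

outcome_indicators = {
    "condom_sales_baseline": 1_000,      # monthly sales before the programme
    "condom_sales_endline": 1_800,       # monthly sales after the programme
}

impact_indicators = {
    "hiv_incidence_baseline_pct": 2.4,   # e.g. from STI clinic statistics and surveys
    "hiv_incidence_endline_pct": 1.9,
}

def relative_change(before: float, after: float) -> float:
    """Percentage change from baseline to endline."""
    return (after - before) / before * 100

print("Outcome: condom sales changed by",
      round(relative_change(outcome_indicators["condom_sales_baseline"],
                            outcome_indicators["condom_sales_endline"]), 1), "%")
print("Impact: HIV incidence changed by",
      round(relative_change(impact_indicators["hiv_incidence_baseline_pct"],
                            impact_indicators["hiv_incidence_endline_pct"]), 1), "%")
```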