Surveys
How do we Evaluate Programmes?
Statistical surveys collect quantitative information about items in a population. Surveys of human populations and institutions are common in political polling and in government, health, social science and marketing research. A survey may focus on opinions or factual information depending on its purpose, and many surveys involve administering questions to individuals. When a researcher administers the questions, the survey is called a structured interview or a researcher-administered survey. When respondents complete the questions themselves, the survey is referred to as a questionnaire or a self-administered survey.
A survey carried out before project activities start is called a baseline survey.
What is a survey?
Surveys (sometimes referred to as questionnaires) are often used in evaluation to measure attitudes, opinions, behaviour and life circumstances such as income, family size, housing conditions etc. Most surveys solicit relatively structured responses and are typically analysed statistically.
When to use a survey
Surveys are most suited to answering questions such as ‘what’, ‘how many’ and ‘how often’. Although they can be used to answer ‘why’ questions, they tend to be less useful in doing so. They are therefore often used to gather basic data about a group of people. Surveys generally use a questionnaire and seek standard, quantifiable data from a representative population.
Types of survey
1. Structured
Structured surveys are precisely worded with a range of pre-determined responses that the respondent can select. Everyone is asked exactly the same questions in exactly the same way and is given exactly the same choices to answer the questions. They are hard to develop as you have to be certain you have covered all possible pieces of information, but are easier to complete, easier to analyse and more efficient when working with large numbers of people.
Example of a structured question: To what extent, if at all, has this workshop been useful in helping you to learn how to evaluate your programme? [Answers: little or no extent; some extent; great extent; very great extent; no opinion; not relevant]
2. Semi-structured
Semi-structured surveys ask the same general set of questions, but leave many, if not all, the answers open-ended. They are a little easier to develop and can provide a rich source of data, but can be labour intensive to conduct, harder to analyse, burdensome to complete and subject to bias in the interpretation.
Example of a semi-structured question: What are the three things you learned from the programme evaluation workshop that you have used on the job?
How to develop a survey
1. What to include
Try to keep the survey as short as possible, but ensure that all areas of interest have been covered. Start by defining the overall objectives of the study. Next, identify which sectors and issues are most important, and finally, write questions to study specific issues or programmes.
Example
Level | Description
Objective | E.g. To understand the effects of government policies on households
Which sectors are most important? | E.g. The incidence of food price subsidies or the effect of changes in the accessibility of government education services
Which issues are most important? | E.g. The levels of enrolment in schools or poor attendance in schools
Questions | E.g. Are textbooks readily available to students, or how many girls have enrolled in school this term?
Try to include questions that allow comparison of issues before and after the programme intervention. Simple rankings of people's perceptions (for example, of a change in their quality of life) allow comparisons to be made over time and between groups, and lend themselves to simple statistical analysis.
Consider how to measure the responses to each question:
Attitudes can be measured using a Likert scale, with responses made on a 'strongly agree' to 'strongly disagree' continuum.
Behaviours are best measured with multiple-choice items (in which respondents select a behaviour) or with adverbs (e.g. always, frequently etc.).
Opinions can be obtained by using adjectives (e.g. excellent to poor).
Information about life circumstances (number of children etc.) can be measured with multiple-choice items (e.g. presenting numeric ranges) or short answer, open-ended items.
Open-ended items should be used sparingly unless the audience is quite motivated, as requiring much writing can greatly decrease the response rate. Nevertheless, open-ended items can provide useful additional information and give the respondents an opportunity to voice alternative views.
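As a minimal illustration of how structured responses lend themselves to simple analysis, the sketch below tallies Likert-scale answers to a single item. The scores are invented for the example and coded 1 (strongly disagree) to 5 (strongly agree):

```python
from collections import Counter

# Illustrative (invented) responses to one Likert-scale item,
# coded 1 = strongly disagree ... 5 = strongly agree
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

counts = Counter(responses)
mean_score = sum(responses) / len(responses)

# Frequency and percentage for each point on the scale
for score in range(1, 6):
    n = counts.get(score, 0)
    print(f"{score}: {n} ({100 * n / len(responses):.0f}%)")
print(f"Mean score: {mean_score:.1f}")
```

The same tally, broken down by respondent group, supports the before/after and between-group comparisons described above.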
The survey should be accompanied by a set of clear instructions and a cover letter that explains why the survey is being undertaken, how the participants were selected, why their participation is important and what will be done with the results.
2. Sequencing questions
To help respondents feel comfortable about participating in the interview, ask factual questions before asking about more controversial matters, and ask about the present before asking about the past or future, as it is usually easier for people to talk about the present and then work backwards or forwards. Try to ensure that the questionnaire meets the following criteria regarding sequencing:
· Do early questions avoid biasing later responses?
· Does the questionnaire begin with easy, unthreatening but pertinent questions?
· Are leading questions avoided?
· Is there a logical, efficient sequencing of questions (from general to specific)?
· Are the major issues covered thoroughly while minor issues are passed over quickly?
3. Wording questions
The questionnaire should also meet the following criteria:
· Are the questions stated precisely (who, what, when, where, why, how?)
· Does the questionnaire avoid assuming too much knowledge on the part of the respondent?
· Does each item only ask one question?
· Is the respondent in a position to answer the question, or will they have to guess?
· Are definitions clear?
· Are emotionally tinged words avoided?
· Are the methods for responding appropriate, clear and consistent?
4. Translations
Where data must be collected in more than one language, the survey must be translated so that the target respondents can participate. Survey questions should always be worded in simple terms in the language most commonly spoken by the respondents.
Survey approach
1. Telephone
Telephone surveys are preferable to mailed surveys when (a) there is a need for speed, (b) respondents may be reluctant or unable to complete written surveys but can be reached by telephone, and (c) the questions lend themselves to being answered over the phone. More open-ended questions can be used in telephone interviews, as respondents are more willing to speak a sentence or even a paragraph than to write one. However, unlike a personal interviewer, the telephone interviewer has difficulty establishing rapport with the respondent because of the lack of eye contact and other non-verbal cues. Compared with mailed surveys, telephone interviews are also more likely to produce socially acceptable answers, because the respondent is answering to a person rather than enjoying the anonymity of paper.
2. Electronic and mailed surveys (self-administered surveys)
All self-administered surveys (i.e. those completed by the respondent) should be short, taking no longer than 30 minutes to complete. Since it is easier to tick a box than to write out answers (and because some handwriting can be hard to decipher), closed questions are best. Research shows that, of all the approaches to carrying out a survey, people are most likely to give honest responses to sensitive questions when using a self-administered questionnaire. However, this approach can only be taken when the respondents have a reasonable degree of literacy, are motivated to complete the survey, and where there is good Internet access or a reliable postal service. As more and more people have access to e-mail, online surveys are becoming increasingly common. However, care must be taken to ensure that those who complete the survey are not systematically different from non-respondents because they are more likely to use computers for communication.
3. Face-to-face
Surveys are administered by an interviewer face-to-face for a variety of reasons: to gather information from clients who have literacy problems or may have difficulty understanding the questions, to stimulate or motivate responses, or to permit occasional probing by the interviewer to increase the quality of the response. However, carrying out surveys face-to-face is more costly than self-administered questionnaires. It is important to try to match interviewers to critical characteristics of interviewees (gender, race, age, ethnicity etc.). A drawback of face-to-face interviews can be the tendency of respondents to provide socially desirable answers because they are answering to a person.
Choosing respondents
Give careful thought to the number and type of respondents, and in particular to how representative the sample is of the population of interest. A combination of project sites should be chosen to understand the particular impact of a programme intervention. However, bear in mind that widely scattered respondents can make the exercise very expensive and place additional stress on field survey teams. Some respondents will drop out or be unavailable during the survey period, so it is often worth 'over-sampling' to compensate. Guidance on how to draw a representative sample of your population can be found in the MEL guidance sheet on sampling.
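As a rough sketch of how sample size and over-sampling interact, the widely used Cochran formula with a finite-population correction can be applied and then inflated for expected dropout. The population size, margin of error, confidence level and dropout rate below are illustrative assumptions, not figures from this guidance sheet:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5, dropout=0.10):
    """Estimate the number of respondents needed for a given population
    and margin of error, using the Cochran formula with a
    finite-population correction, then inflating for expected dropout
    ('over-sampling'). All default values are illustrative assumptions."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n / (1 - dropout))         # compensate for non-response

# E.g. a target population of 2,000 households at a 5% margin of error
print(sample_size(2000))
```

The MEL guidance sheet on sampling mentioned above remains the authoritative reference for choosing a sampling strategy; this sketch only shows the arithmetic.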
Field (pilot) testing
Once the survey is agreed upon, it should go through a field or pilot test with a small number of respondents to ensure that the questions are understandable, that they are put in a suitable way, that the length of the survey is appropriate and that the respondents are available. The questionnaire can then be modified before it is administered with a wider group of people. Addressing problems and mistakes at this stage is not only much easier and cheaper, but can make the difference between a successful or unsuccessful exercise. A good field test will look at the survey at three levels:
· As a whole (Are all the parts of the survey consistent; are there areas that ask the same question etc.?)
· Each section (Are the questions relevant for this section; do the questions together provide the intended information?)
· Individual questions (Is the wording clear; does the question allow for ambiguous responses?)
Training enumerators
Undertaking large-scale surveys using face-to-face interviews normally means having to hire interviewers or enumerators. The number of enumerators required will depend on the scope of the survey and the geographical area it covers. Typically, enumerators can carry out around five interviews per day. If survey interviews are to be carried out by several people, careful consideration should be given to training. Interviewers need to be trained in standardised methods for delivering the questions, including using probes, pauses and prompts, methods for recording the information and means for establishing and maintaining rapport.
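The staffing arithmetic implied above (around five interviews per enumerator per day) can be sketched as follows; the interview totals and field-day figures are invented for illustration:

```python
import math

def enumerators_needed(total_interviews, field_days, per_day=5):
    """Rough staffing estimate for a face-to-face survey.
    The default of five interviews per enumerator per day follows the
    rule of thumb in the text; adjust it for travel time and
    questionnaire length."""
    return math.ceil(total_interviews / (per_day * field_days))

# E.g. 600 interviews to be completed in 10 field days
print(enumerators_needed(600, 10))
```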
Analysis
Simple statistical techniques, such as calculating average results for various categories of respondent, can be used to analyse quantitative data and present it as before-and-after comparisons. Depending on the sample sizes, further analysis may be undertaken to assess the statistical significance of the results (i.e. how likely it is that they were obtained by chance). In the survey report, present aggregate rather than individual responses to keep them confidential.
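Where before-and-after results take the form of proportions (for example, enrolment rates), one common significance check is a two-proportion z-test. The sketch below uses invented counts and the standard normal distribution; in practice a statistics package or a statistician would normally be used:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test comparing two proportions, e.g. the share of
    girls enrolled before and after an intervention.
    Returns (z statistic, two-sided p-value)."""
    p1, p2 = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative figures: 120 of 300 enrolled at baseline, 165 of 300 afterwards
z, p = two_proportion_z(120, 300, 165, 300)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the before/after difference is unlikely to have arisen by chance, though it says nothing about whether the programme caused it.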
Further information
· Roche C. (1999) Impact Assessment for Development Agencies: Learning to Value Change. Oxford: Oxfam
· Fitzpatrick et al. (2004) Program Evaluation: Alternative Approaches and Practical Guidelines. Pearson Education, Inc.