Promise and practice: participatory evaluation of humanitarian assistance

Donors, the UN and other international organizations, and NGOs are increasingly interested in using participatory and beneficiary-based methodologies in their evaluation processes.

This article is based on analysis of recent evaluation reports and consultation with evaluators and agency staff. It indicates that, although many agencies have prepared best-practice evaluation guidelines, their use has not yet become common practice. The article is intended to contribute to the wider objective of generating recommendations for the field-testing of relevant and truly beneficiary-based evaluation methodologies.

Rethinking evaluation objectives

Following lessons learned from development studies, humanitarian actors are beginning to recognize that assessing the actual impact of their work is more meaningful than simply measuring output in material terms. Linked to this is a recognition not only that current evaluation practices do not always provide information useful to practitioners but also that the way in which evaluations are conducted may pre-determine the kind of information gathered. By implication, beneficiary perspectives cannot, and should not, be incorporated into evaluation processes without a broader rethinking of the objectives of evaluation.

While many of the lessons from development projects are relevant for humanitarian approaches to evaluation, there are clearly points of divergence. Some relate to the conventional modes of delivery of humanitarian assistance. Organizations like UNHCR are almost necessarily centralized and bureaucratic: a function of the political and economic framework within which they are obliged to operate, as well as their organizational culture. Alistair Hallam has noted that humanitarian assistance remains an essentially ‘top down’ process: “Humanitarian agencies are often poor at consulting or involving members of the affected population and beneficiaries ... there can be considerable discrepancy between the agency’s perception of its performance and the perceptions of the affected population and beneficiaries”.(1)

The objectives of humanitarian evaluations have hitherto related predominantly to institutional priorities. There has been no consideration of the possibility that beneficiaries might have a role other than as recipients of improved assistance, or that there might be value in the evaluation process for beneficiary populations themselves.

Accountability has usually been conceived as flowing upwards: to donors, trustees and other northern stakeholders. The need for downward accountability, or accountability to those receiving assistance, has emerged only in recent years. It is not clear that this is achievable unless more attention is paid to beneficiary views at every stage of programme management. In Planning and Organising Useful Evaluations (1998), however, UNHCR appears to shift the emphasis away from accountability as an objective, a move which risks losing the opportunity for downward accountability.

Institutional objectives are generally understood to be grouped around lesson learning and accountability. In respect of lesson learning within a programme, the timing of the evaluation is critical: at mid-term, changes to the programme can still be made, while an end-of-programme evaluation offers only the prospect of lessons for the future. It is a truism that there is a relationship between the kind of information sought in an evaluation and the methods used to gather it. The OECD has noted that “if lesson-learning is emphasized then it opens up the possibility for the extensive use of participatory methods. If accountability is emphasized then it implies structuring the evaluation so that its findings are independent and respected”.(2) Such a view encapsulates the widespread mistrust of the results of participatory research and reflects the assumption that evaluation should lead to the learning of a single truth.

From output to impact

Conventional evaluations have tended to employ a technical idiom which relies on establishing the extent to which fixed objectives have been achieved by implementers. A ‘scientific’ approach has been common, with evaluation teams mandated to investigate outputs in terms of resources controlled by the programme. Quantitative methods have generally been employed to do this, and have been preferred by donors and agency desk staff on the grounds of their assumed reliability and verifiability. This approach implies the desirability and possibility of establishing ‘facts’ and an objective ‘truth’.

Borrowing from evaluation criteria used in development studies, a new emphasis has been placed in some quarters on the assessment of impact of programmes. This implies a much more wide-ranging and inclusive focus and may represent the best forum for methodological innovation, including the increased participation of beneficiaries and others in evaluation processes.

Involving beneficiaries in research will entail addressing the fears that programme staff may have about evaluations. Concerns about what evaluation results might mean for their work or careers may make staff reluctant to relinquish the control they have over decision making and evaluation. Recognizing the validity of staff fears of judgmental evaluations, organizations such as MSF Holland are explicitly attempting to re-orient evaluation to place greater emphasis on learning rather than on internal accountability. It is being suggested that both field staff and evaluators should be obliged to take responsibility for their work and that accountability and transparency should go hand in hand.

What the purpose of an evaluation is understood to be has implications for the extent to which beneficiaries are invited to participate. Evaluation is a political process which means different things to different actors. Involving beneficiaries in the evaluation of humanitarian assistance programmes implies that the evaluation objectives are wider than a straightforward attempt to measure programme outputs.

Any meaningful evaluation of assistance programmes requires analysis of both the socio-political economy inhabited by those affected by complex emergencies and the survival strategies they employ. Without beneficiary input, evaluation becomes counterproductive. If it is accepted that impact assessment is desirable, beneficiaries must be involved in the process. Attempts to incorporate beneficiary voices have been frustrated when they operate within a framework which does not accept this. As Johan Pottier, anthropologist and evaluator on the Joint Evaluation of Emergency Assistance to Rwanda, asks:

How can I make them move beyond what they expect me to do, which is to have nice neat (apolitical) questions and bring back neat (apolitical) answers? The methodological challenge … is not how we can use shortcuts in research (e.g. by applying PRA [participatory rural appraisal] techniques) but how we can improve on the questions we ask in the highly charged setting of complex political emergencies … Sitting down for as long as it takes, and knowing what questions to ask and how, must remain the principal strategy.(3)

 

Prescriptions for action: guidelines and manuals

The donor and agency guidelines and manuals currently available on how to organize evaluations of humanitarian assistance explicitly recognize the need for more participatory evaluation processes than have existed in the past. Equally, the Code of Conduct for the International Red Cross and Red Crescent Movement and NGOs in disaster relief states that “ways shall be found to involve programme beneficiaries in the management of relief aid”. The commitment to inclusive and participatory approaches visible in the developmental world since at least the early 1990s is reflected in OECD recognition that “interviews with beneficiaries can be one of the richest sources of information in evaluations of humanitarian assistance”.(4) In Introducing UNHCR’s Evaluation and Policy Analysis Unit (1999), UNHCR undertakes that “EPAU will make particular efforts to work in collaboration with its operational partners and to ensure that beneficiary views are taken into account in the analysis and assessment of UNHCR activities.”

UNDP notes that “in a participatory evaluation, the role and purpose of the evaluation change dramatically. Such an evaluation places as much (if not more) emphasis on the process as on the final output, ie the report … the process is the product … the purpose of evaluation is not only to fulfill a bureaucratic requirement but also to develop the capacity of stakeholders to assess their environment and take action.”(5) Participatory evaluation gives a voice to those who have lost their usual communication channels and encourages community members to voice their views, gather information, analyze data themselves and plan actions to improve their situation. It recognizes that project stakeholders and beneficiaries are the key actors of the evaluation process, not the mere objects of the evaluation.

The prescriptions of the guidelines generally involve a move towards assessment and evaluation as a coherent process. This is linked to greater involvement of beneficiaries and other stakeholders in terms of both the methodology and the substantive content of evaluation. It represents a process of negotiation and mediation which involves not only including beneficiaries as sources of information but also defining entirely new roles for them.

Beneficiary-based evaluation is most usefully conceived as specifically focused social research, aiming not exclusively to ascertain cause and effect relationships, but also to understand the nature of the situation experienced by various social actors within it. Qualitative, and conceivably also anthropological, research methods and analysis may be the most productive strategies.

There are also practical issues to consider. An evaluation can be neither consultative nor participatory unless participation is both planned and documented. Half-hearted attempts, or those which are not fully transparent, do not assist those attempting to win credibility for the strategy. All stakeholders should be aware of the kind of evaluation which is planned. A beneficiary-based evaluation may not cover the same ground as an audit of the same programme, and should not be criticized for this. It is crucial that evaluation terms of reference specify that participatory approaches are to be used and that the additional time these require is factored into the timeframe.

There is a major question about the extent to which it is feasible to include beneficiary views in the evaluation of programmes which have failed to include them during the planning, implementation and monitoring stages. Not only will there be a lack of baseline data for evaluators to use, but such an approach also raises questions about how much assistance providers really know about the affected populations with whom they work.

Is the participatory message getting through?

A review of some 250 evaluation reports in the ALNAP database found that “only a few of these evaluations comment on issues of consultation, and few are themselves participatory.”(6) Clearly there is a wide gap between theory and practice. While almost all NGOs speak of the importance of participation, there is a paucity of evidence of participation in NGO evaluations.

Evidence that beneficiary-based methods are actually being employed is generally anecdotal rather than to be found in agency documents. Where some degree of informal, opportunistic consultation is used, it tends to depend on the personal interest of the evaluator and the availability of time to conduct interviews. This may well contribute to the overall effectiveness and interest of a subsequent report but, without proper documentation, the qualitative methods which have been used are liable to be condemned as ‘unscientific’, ‘impressionistic’ or ‘subjective’.

A study of evaluations supported by the UK Department for International Development described efforts by evaluators to interview members of affected populations as “inadequate”.(7) Tellingly, although the terms of reference of the UNHCR EPAU evaluation report on Kosovo called for the views of refugees and former refugees to be solicited, the main body of the report makes only passing reference to interviews with refugees and gives no description of the data collection methods employed.(8) Similarly, despite criticizing the absence of beneficiary community participation in rehabilitation activities in the Great Lakes region, the UNHCR review of this work itself appears not to have included beneficiary perspectives.(9)

Examination of UNHCR reports indicates inconsistency in recent years with regard to the extent to which beneficiary voices have been solicited or heard. It appears that the participation of refugees in UNHCR studies has depended on a number of changing criteria: the subject matter of the report, the perspective of the evaluation team and questions of access and timing. The same judgment can be made of recent WFP evaluation reports.

At times, reports mention beneficiary views without describing how those views were identified or who expressed them. Although the inclusion of refugee voices is to be desired, when views are not disaggregated and specific sources of information are not provided, such representations must be treated with caution. The absence of information about the nature and structure of affected populations affects the way assistance providers make decisions about the kind of assistance required and can be the cause of major tensions within the beneficiary population. A common complaint is that while donors demand such information, they rarely provide the kind of support required to gather it.

Some NGOs have proactively recruited social researchers to spend significant periods of time in field situations in order to generate learning about the populations with whom they were working. Clearly there is advantage to be derived from linking participatory evaluation processes with a better understanding of the socio-economic profile of the beneficiary population and with a greater degree of beneficiary involvement throughout the project cycle. Some of these evaluators have produced papers discussing methodologies and experiences. Such documents, while fascinating, demonstrate the uniqueness of each case, and indicate the difficulty of transposing lessons learned in any degree of detail between programmes.

In the chapter dealing with emergencies in Oxfam’s 1999 publication on impact assessment, Chris Roche discusses the particular methodological and ethical requirements of working in emergency situations. He notes that constraints imposed by politics and security routinely mean that key groups, particularly women, older people and children, are not involved in either programme design or implementation.(10)

The evaluation literature makes such scant mention of canvassing and representing beneficiary views that learning the views of disaggregated beneficiary populations is nearly impossible. The reality that different sections of a beneficiary population are certain to experience and perceive emergency assistance programmes differently is not apparent in evaluations.

Constraints on participation

The experience of the team which carried out the groundbreaking evaluation of the international response to the genocide in Rwanda indicates the complexity of the constraints on participation. The head of the evaluation team reported that it proved difficult to research events, as beneficiary recall was generally too hazy to allow retrospective assessment. Agencies whose programmes were being evaluated generally had a very poor understanding of the pre-flight social structure of refugee societies. The refugees consulted had an extremely undifferentiated view of the assistance-providing agencies, often talking generally of ‘the Red Cross’ rather than of the constituent agencies of the Red Cross Movement and those who worked with them.

A meeting convened by ALNAP in November 1998 to examine why beneficiary-based methods are not more comprehensively used in the humanitarian community noted that the approach is regarded as time-consuming and difficult to implement in conflict situations, and is not required by donors, who remain preoccupied with upward accountability.(11) Other explanations have also been offered: host governments are often hostile to such approaches; informants might be put at risk in situations of political tension or conflict; beneficiary populations cannot be trusted to answer honestly for fear of losing assistance; methodological know-how is lacking; no baseline data exist against which to measure change; and logistical constraints rule out the possibility of involving beneficiaries in evaluation.

Conclusion

A number of agencies are keen to improve their practice and are interested in a rights-based approach, social learning and development of methods for greater beneficiary involvement in evaluation and other stages of humanitarian assistance programmes. The humanitarian community’s greater interest in stakeholder participation and downward accountability is manifest in the new emphasis on standards in such initiatives as the Red Cross/NGO Code of Conduct(12), the Sphere Project(13), the Humanitarian Ombudsman project(14) and the ALNAP network.(15)

Is it routinely the case that assistance providers truly want to know what beneficiaries think, and that they are prepared to work to overcome constraints in order to hear their voices? The answer will almost certainly be “yes” if beneficiaries endorse the work they are doing. It may not be the case if beneficiaries disagree in principle with what the organizations are doing, or with the way they are doing it. Organizations have vested interests and their own agendas: donor approval of programmes, institutional control and policy coherence. It remains to be seen whether donors find it in their interests to empower the world’s most vulnerable groups.

 

Tania Kaiser completed her doctorate in anthropology at Oxford University in December 1999. She is currently working as a freelance research consultant for UNHCR. Email: tan_kaiser@yahoo.co.uk.

This article is extracted from a longer paper entitled ‘Participatory and beneficiary-based approaches to the evaluation of humanitarian programmes,’ commissioned by UNHCR and soon to be available from their Evaluation and Policy Analysis Unit at www.unhcr.ch/evaluate/main.htm

Notes

  1. Hallam A, Evaluating Humanitarian Assistance Programmes in Complex Emergencies, RRN Good Practice Review, Overseas Development Institute, London, 1998, p13
  2. Organisation for Economic Cooperation and Development (OECD), Guidance for Evaluating Humanitarian Assistance in Complex Emergencies, 1999, p17
  3. Pottier J, ‘In retrospect: beneficiary surveys in Rwandan refugee camps, 1995: reflections 1999’, in Wageningen Disaster Studies, Evaluating Humanitarian Aid: Politics, Perspectives and Practices, Rural Development Sociology Group, Wageningen University, Netherlands, 1999, p124
  4. OECD, 1999, p25
  5. United Nations Development Programme, Who Are the Question-Makers? A Participatory Evaluation Handbook, 1997
  6. Apthorpe R and Atkinson P, A Synthesis Study: Towards Shared Social Learning for Humanitarian Programmes, ALNAP, 1999, p8
  7. Borton J and Macrae J, Evaluation Synthesis of Emergency Aid, DFID Evaluation Report EV 613, ODI, December 1997, p2
  8. The Kosovo Refugee Crisis: An Independent Evaluation of UNHCR’s Emergency Preparedness and Response, UNHCR, EPAU/2000/001
  9. Review of UNHCR’s Rehabilitation Activities in the Great Lakes, IES EVAL/01/99, UNHCR, 1999
  10. Roche C, Impact Assessment for Development Agencies: Learning to Value Change, Oxfam, 1999, p181
  11. ALNAP, Record of October Meeting, ODI, London, 1998 (www.oneworld.org/odi/alnap)
  12. See www.ifrc.org/pubs/code
  13. See www.sphereproject.org
  14. See www.oneworld.org/ombudsman
  15. See www.oneworld.org/odi/alnap/index.html

 
