The value of transparency, replicability and representativeness

We are troubled by Graeme Rodgers’ criticisms (FMR21 pp48-49) of what he calls positive social research, i.e., attempts to make ‘value-free’ descriptive and causal inferences about an existing reality.

The kinds of participatory ‘hanging out’ techniques Rodgers advocates have played an important part in the development of knowledge about forced migration. Our purpose is not, therefore, to dismiss qualitative or exploratory approaches, or to demand an unmitigated shift away from qualitative methods. Rather, we call on policy-oriented researchers to develop and refine a set of standards that apply equally to qualitative and quantitative methods. The research community must work together to articulate standards and to recognise that what we can and should do with our research is fundamentally dependent on how we conduct it.

We are not advocating a single approach to data collection in forced migration research but do call for researchers to abide by a set of general research standards. Data collected for the purpose of influencing policy should be representative of the affected population and, ideally, allow for comparisons across groups and sites. Doing otherwise heightens the risk that policy recommendations will result in strategies that are ineffective or harmful to beneficiaries or other affected groups.

Standards of transparency, replicability and representativeness need to be combined with an awareness of ethical standards that promote the security and dignity of both researchers and those with whom we engage.

Transparency allows other researchers to critique indicators, data collection techniques and logical assumptions. Replicability enables others to use a study’s methods and confirm or challenge its findings and allows for comparative analysis, theory building and the search for generalised patterns of cause and effect. Representativeness not only improves the quality of description and analysis but also helps ensure that recommendations lead to policies that will be universally beneficial – rather than exclusively benefiting a studied sub-group.

The risk that Rodgers ascribes to quantitative or macro-comparative research – that questions shaped by arrogant researchers will favour the interests of governments, aid agencies and western academics over those of forced migrants – is real but can be avoided. There is no reason that ‘objective’ data need serve the needs of the powerful. Methods such as representative sampling techniques do not require uncritically importing variables, questions or interview techniques. Even large-scale surveys can, and often do, employ participatory approaches to generate hypotheses, questions and analytic categories.

The approach we advocate, and employ in our own research, calls for extensive review of existing ‘local knowledge’ (whether verbal or in print) and field-testing of concepts, questions and interview instruments. Using focus groups, key informants and pilot studies to identify the communities’ concerns can lead to locally relevant instruments. Field and community testing helps expose inappropriate concepts, untenable questions and ineffective interview techniques. Working with local groups to help explain survey results further improves the findings’ validity.

Those who spend months or years ‘hanging out’ are effectively unassailable because they claim a ‘deep knowledge’ that is inaccessible to outsiders. A claim to having been adopted by a family, village or other group is an extreme technique for promoting one’s exclusive right to speak on their behalf. Such deep, personal involvement may also encourage researchers to employ data collection practices that are themselves illegal, that expose networks and practices in ways that heighten forced migrants’ existing vulnerabilities, or that unduly value the experiences of one group over another. These risks can be averted if forced migration researchers adopt a practice already common in the social sciences: making data sets public and making data collection methods explicit.

Aid agencies, policy makers and academics all use knowledge in ways which provide strong incentives for presenting research as definitive and the researcher as ‘expert’. Doing so is often a prerequisite for winning policy influence, research grants, consultancies and professorships. There are also incentives for hiding faulty data or making claims that are relatively unsubstantiated. In order to ensure the field’s continued academic viability and its ability to influence policy ethically, we must insist on rigorous research methodologies. Ultimately, it will only be by recognising the politics and processes of producing and consuming knowledge that we can conduct more effective and ethical policy-oriented research.

 

Loren Landau is Acting Director, Forced Migration Studies Programme, University of the Witwatersrand (http://migration.wits.ac.za). Email: landaul@migration.wits.ac.za. Karen Jacobsen directs the Refugees and Forced Migration Program at the Feinstein International Famine Center, Tufts University, Boston (http://famine.tufts.edu/work/refugees.html). Email: karen.jacobsen@tufts.edu.

 

This is an extract from a longer article available here.

 
