Measuring social cohesion: lessons from Kakuma Camp

With the increased attention paid to social cohesion in refugee policy, there is greater need for robust methods of measuring cohesion among displaced and displacement-affected communities. At the project level, organisations that have adopted social cohesion goals into their programming require indicators for project evaluation. At the national and sub-national levels, monitoring mechanisms such as UNDP’s Regular Perception Surveys in Lebanon are gathering data on cohesion and tension to improve conflict sensitivity among aid actors.[1] And at the broadest level, funding bodies such as the World Bank are investing in research to generate evidence on the factors that influence cohesion in contexts of displacement, which could be used to develop best practices for programme design.[2]

In Kenya, the World Bank has played an important role in supporting the socio-economic integration agenda pursued by the government and UNHCR. This includes research on social cohesion in urban and camp contexts. Questions on cohesion have been incorporated into various surveys conducted by the Bank and its partners,[3] including large-scale socio-economic assessments of the refugee populations in the Kakuma camps and Kalobeyei Settlement.[4]

Research instruments to study cohesion must be designed with attention to the particular institutional landscapes and policy priorities in any given context. For example, in the 1990s, social cohesion in Canada, the EU and other high-income countries was defined with a strong emphasis on equality. But in Kenya, refugees have a subordinated legal status and are subject to strict encampment policies. The integration agenda is restricted to socio-economic dimensions, including the promotion of self-reliance for refugees and the merging of humanitarian and national service provision into joint systems. As such, a survey question asking refugees in Kenya about their sense of ‘equality’ would seem out of touch. ‘Cohesion’ only really makes sense in relation to the expectations that people have for their place in a community, and those expectations are shaped by unequal legal statuses and the policy environments in which people find themselves. These factors, among others, complicate the ways that people interpret and respond to survey questions about social cohesion.

In 2022, the ‘Social Cohesion as a Humanitarian Objective’[5] research team developed a strategy for assessing the social cohesion research instruments used in Kakuma. We conducted a standard survey with a small but diverse sample of 30 respondents, following each survey immediately with an open-ended interview with the same respondent. We then evaluated the validity of common survey questions by comparing people’s survey responses with how they described refugee–host relations in their own words.
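As a rough illustration of what such a comparison can involve, the sketch below pairs each respondent’s closed-ended survey answer with a summary code assigned to their interview and flags cases where the two point in different directions. This is a minimal sketch only: the field names, the five-point Likert coding and the three-way interview codes are hypothetical illustrations, not the study’s actual instruments.

```python
# Minimal sketch of a survey-interview consistency check.
# All field names and coding categories here are hypothetical.

from dataclasses import dataclass

@dataclass
class Respondent:
    respondent_id: str
    survey_trust: int      # Likert answer, 1 = strongly disagree ... 5 = strongly agree
    interview_code: str    # analyst's summary code: "positive", "negative" or "ambivalent"

def is_consistent(r: Respondent) -> bool:
    """Check whether the closed-ended survey answer points in the same
    direction as the qualitative reading of the interview."""
    if r.interview_code == "ambivalent":
        # Ambivalence has no single scale point; treat only the midpoint as a match.
        return r.survey_trust == 3
    if r.interview_code == "positive":
        return r.survey_trust >= 4
    if r.interview_code == "negative":
        return r.survey_trust <= 2
    return False

sample = [
    Respondent("R01", survey_trust=2, interview_code="positive"),   # mismatch
    Respondent("R02", survey_trust=4, interview_code="negative"),   # mismatch
    Respondent("R03", survey_trust=5, interview_code="positive"),   # consistent
]

mismatches = [r.respondent_id for r in sample if not is_consistent(r)]
print(f"{len(mismatches)}/{len(sample)} respondents inconsistent: {mismatches}")
```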

In many cases, we found that an individual’s survey responses were inconsistent with their interview comments. For example, in the survey, one South Sudanese respondent disagreed with a statement that the host community is trustworthy. But in the interview, he provided an optimistic image of “peace and unity among the refugees and Kenyans”. Conversely, when asked about the trustworthiness of refugees, one Kenyan man responded positively. But in the interview, he signalled caution: “[Refugees] ask us to join them [on the football pitch], but we know that they are problematic people. So we refuse.”

These observations highlight one pervasive problem with how social cohesion data is gathered: the closed-ended survey format. Respondents are often required to choose between binary options (yes or no) or to rate their sentiments on a scale (such as from ‘strongly agree’ to ‘strongly disagree’). But people’s perspectives on their social environments are often too complex or context-dependent to be captured in this way. As one South Sudanese woman explained when asked about relations between refugees and the local community:

There are some good things about the way people stay together here, but sometimes conflicts arise. God created people differently. Some are criminals, while others say people should live in peace. A criminal or a drunkard will bring chaos and disagreement between people. It is not all of them, but this is the problem.

Such ambiguity is flattened when responses are restricted to linear scales or reduced to a simple position like ‘high trust’ or ‘low trust’. Similarly, broad categories like ‘refugees’ and ‘host community’ sometimes encompass too much diversity to elicit a meaningful response on a perception survey. In our interviews in Kakuma, assessments of the ‘trustworthiness’ of refugees varied drastically depending on which demographic groups were specified. Likewise, when asked about their own community, local Kenyan respondents highlighted the different motivations and lifestyles of those living near the camp and those living further away across the river.

Pending a full analysis, several key lessons emerge from a preliminary review of our findings:

 

  • Metrics for social cohesion must be adapted to each context. Questions that seem obvious may be interpreted differently by various groups. For example, some surveys ask if the respondent ever shares meals with people from other communities, an act assumed to indicate intimacy. But in Kakuma, refugees often exchange meals for firewood and charcoal sold by locals. These interactions are more transactional and less intimate than imagined during survey design. Qualitative research is crucial to developing social cohesion indicators relevant to each context. This includes both preliminary ethnographic research to inform survey design and post-design qualitative validation to understand how the questions are interpreted.

 

  • Analysis of perception surveys should focus on extreme answers. In our study, those who provided moderate responses to survey questions about the trustworthiness of other communities often conveyed ambiguity or ambivalence during the interviews. But those who provided more extreme answers had stronger alignment between their survey and interview responses.

 

  • Perception surveys are a very limited measure of cohesion. Consider a survey that asks about the trustworthiness of refugees: even if 90% of the responses are very negative, this does not provide a reliable guide to actual practices of trust and cooperation in everyday life, such as lending money or sharing personal information. Responses to questions about abstract categories of people are shaped by contemporary stereotypes and popular narratives, and they tend to differ when questions ask instead about specific individuals, such as neighbours, co-workers or friends. Perception indicators should therefore be accompanied by more specific measures of cohesion, such as the extension of credit or marital ties across communal lines. However, such measures require a concrete vision for how a more cohesive refugee-hosting society should look, which is often lacking in programme design and policy-making.

 

Stephen Hunt stephen.hunt@ucl.ac.uk

Research Officer, Refugee Studies Centre, University of Oxford

 

Cory Rodgers cory.rodgers@qeh.ox.ac.uk @CoryJRodgers

Senior Research Fellow, Refugee Studies Centre, University of Oxford

 

[1] Survey results can be viewed on the UNDP and ARK Interactive Dashboard, available at: https://bit.ly/communal-relations-lebanon

[2] See the recently launched working paper series on Forced Displacement and Social Cohesion, implemented by the World Bank, UNHCR and the FCDO. https://bit.ly/WB-social-cohesion

[3] See Vemuru et al. (2016) ‘Refugee Impacts on Turkana Hosts: A Social Impact Analysis for Kakuma Town and Refugee Camp’ https://bit.ly/vemuru-turkana and Betts et al. (2021) ‘Social Cohesion and Refugee-Host Interactions: Evidence from East Africa’ https://bit.ly/betts-east-africa

[4] https://bit.ly/kalobeyei-2018

[5] https://bit.ly/social-cohesion-socho
