Digital refugee resistance, power, representation and algorithmic censorship
  • Amanda Wells
  • May 2024
Photo from Calais CC courtesy of malachybrowne on flickr.com

Refugees that attempt to use digital media for resistance face barriers, including algorithmic censorship and harassment, that solidify their position in the political margins. This demonstrates the need for greater transparency, accountability and democracy in digital governance.

Refugees and migrants’ issues have often been embroiled in digital political action with varying degrees of success. The photos of Alan Kurdi, a two-year-old Syrian refugee who drowned while crossing the Mediterranean, are credited with fostering public empathy in the 2015 European ‘refugee crisis’ after their rapid spread through digital news and media platforms. Conversely, the use of social media to expose the conditions of Nauru’s refugee detention centre in 2015 led to the eventual expulsion of oversight bodies like Save the Children from the grounds.

Here, I examine how social media is used by refugees seeking to garner public attention through visual outputs such as photographs and videos. Is social media an effective tool for refugee resistance? Drawing on the case studies of visual, embodied refugee resistance in Calais and Amsterdam, I demonstrate that current trends in digital governance push refugees further into the public margins and reduce their ability to weaponise digital media as a political tool.

Power and representation

Social media’s particular strength is that it allows marginalised groups to represent their political movements themselves, rather than through the lens of third parties such as news reporters and media outlets.

The accessibility of mobile phones and social media accounts quite literally places the power of representation into the hands of otherwise marginalised groups. They are thus free to conduct their political movements on their own terms. In the case of refugees, this is meaningful, as it offers an alternative to the traditional narratives that depict refugees as apolitical, passive subjects who are dependent on influential actors. From a purely visual standpoint, the proliferation of images depicting refugees in protest is a marked contrast to photos of refugees in the media – which often fail to show refugees’ agency and instead emphasise vulnerability and precarity.

The 2016 protests in Calais’ informal refugee settlement nicknamed ‘the Jungle’[i] are an example of how refugees can use digital media to posit themselves as political actors outside of institutionalised political fora. In February 2016, eight men who had been forcibly removed from their makeshift accommodation in the Jungle as part of a planned demolition undertook lip-sewing to draw attention to the camp’s resistance movement. The public-facing nature of the camp, coupled with the mobile technology of camp residents and NGO staff, resulted in a wide variety of visual outputs that remain relatively easily available online.

All eight protesters donned face coverings, hoods and scarves to emphasise the collective nature of their protest. They held signs about the conditions of the camp that directly addressed their audience (‘Representatives of the United Nations’, one sign read) and referred to international human rights obligations. In doing so, the protesters demonstrated an understanding of the critical visual element of their resistance and attempted to shape the direction that their protests would take online.

Despite the protesters’ active efforts to shape the media resulting from their acts, the photos were nonetheless altered by media outlets and photographers. In one commonly circulated professional photo, a protester’s sign reading “Representatives of the United Nations and human rights come and bare witness; we are humans” was reduced to simply “We are humans.” In framing the subject this way and editing the protesters’ message, the photographer made a highly political decision: the substantive thrust of the protest was thinned, and the photographer became a co-author of its public constitution.

The example of the Calais lip-sewing protests demonstrates that although refugees are able to use social media for narrative change, they are ultimately subject to the interpretation and co-option of other actors. Even when protests may use digital media to bypass third parties or a lack of access to public political spaces, they remain highly subject to outside forces.

Algorithmic censorship and digital harassment

Social media censorship can occur through a literal deletion of content or the under-promotion of undesirable materials, thus limiting their audience and spread. There is a lack of publicly available information on the parameters and conditions by which social media algorithms operate, but they are broadly understood to censor or, at minimum, under-promote graphic and offensive content. This would include whistle-blowing photos that report the conditions of refugee camps and detention centres, first-hand accounts of genocide and war, and protests that are centred within the body like lip-sewing and self-immolation.

Very little is known about how machine learning systems are trained for content moderation, but it is clear that algorithmic censorship is not nuanced. In an article in Philosophy & Technology, Jennifer Cobbe writes that “marginalised groups reclaiming abusive terms may seem to be abuse to the uninitiated”, and so subversive material is censored alongside its target. Furthermore, reporting by Koebler and Cox found that algorithms are generally better at targeting and removing violent content than hate speech. This allows harassment surrounding refugee topics to proliferate while the voices at the centre of the issue are further excluded.

Content moderation models are trained on datasets that encode pre-existing, real-world biases and inequalities. As a result, they are poorly equipped to handle content from racial and ethnic minorities, non-English materials, and non-dominant political leanings – materials that may be illegitimately censored or under-promoted.

In some cases, systemic algorithmic censorship and exclusion exposes refugees to further digital harassment. In the case of Kambiz Roustayi, an Iranian refugee who self-immolated in Amsterdam’s Dam Square in 2011, censorship of the graphic images of his protest means that its public record now exists largely on extremist websites and blogs. The only place I located visual evidence of the event was a small-scale website called ‘Documenting Reality’, where the images were met with cruel comments. “We can all donate something for a good cause, to help people like this man. I am sending a gallon of petrol” read one comment. “God! People actually helped?” asked another.

Kambiz Roustayi now only exists in public memory in relation to the “smell” of his death, for being a “psychopath,” and for being the “start to a bad day.” This is an example of how, when graphic images resulting from refugee resistance are pushed into the political fringe due to censorship, they are subject to further discursive violence.

Karin Andriolo writes of the ethics of attentiveness: “we ought to respond to the public self-sacrifice; if we turn the other way, protest suicides are killed twice, once by their own hands and once by the silence of our imaginations.”

Memory is powerful, and social media can be an effective way to expand the public archive to include those who were marginalised throughout their lives. However, the case of Kambiz Roustayi demonstrates that growing automated censorship, although perhaps intended to undercut harassment, may lead to its proliferation. This, in turn, reduces the potential utility of social media for political protest and for a radical, inclusive ethics of attention. It instead opens the way for the further oppression of refugees and migrants. In this way, algorithmic censorship creates the circumstances that allow the cycle of discursive and physical violence against refugees to continue.

What is needed?

I have argued here that social media can be useful in refugee resistance, but that algorithmic censorship, which both prioritises content from privileged creators and removes graphic content from refugee resisters, weakens its potential.

In light of increasingly complicated issues in content moderation, such as AI-generated propaganda and deepfakes, digital platforms must be transparent about the conditions of algorithmic censorship. Opaque algorithmic decision-making is a threat to our collective choice to define our public attention and memory. We, as digital end-users, practitioners and lawmakers, must push for greater accountability, democracy and transparency in digital governance.


Amanda Wells
Independent Researcher
amanda.morgan.wells@gmail.com


[i] The name “the Jungle” has been rightfully critiqued by many for its attempt to Other camp residents and paint them as barbaric and dangerous. I use the name here for clarity, as “the Jungle” refers to a specific time period and encampment structure in the lengthy history of migrant settlements in the area.
