Contesting automation: the NewTech Litigation Database
  • Francesca Palmiotto and Derya Ozkul
  • May 2024

Informed litigation is vital to uphold the rights of migrants subject to automated decision-making. This article introduces the NewTech Litigation Database, a tool for anyone seeking to contest the use of automated systems in migration and asylum processes.[1]

The NewTech Litigation Database collates and publishes litigation against the use of new technologies worldwide. Credit: Hertie School & bitteschön

The use of automated tools in the public domain to identify, categorise and evaluate individuals raises important legal issues concerning fundamental rights. In recent years, legal challenges related to automation in the public sector have emerged under international and national human rights law.

Courts are currently tackling critical questions, such as how to ensure compliance with fundamental rights and what safeguards automated systems require when used in public decision-making. Civil society is also working to understand how these systems work and to contest their use. However, there has been little systematic analysis of how these contestations take place, who is involved in formulating them and on what grounds they are based.

This article provides an overview of the various methods of contestation occurring in this space. It also presents a new tool, the NewTech Litigation Database, which we have developed as part of the Algorithmic Fairness for Asylum Seekers and Refugees (AFAR) project.[2] This tool – launching in May 2024 – facilitates access to existing case law and associated contestation strategies, helping civil society organisations search cases, learn from others and find inspiration for their work.

Contestation methods

Automated tools are increasingly being used in public decision-making related to migration and asylum, but information about the existence, details and workings of these algorithms is not always available to the public. This lack of transparency makes it difficult for those affected by new technologies to understand how they work and how to contest them. Our research has revealed that individuals impacted by these technologies are seldom able to contest them. Nonetheless, civil society organisations, activists and political party members have employed various methods to understand and challenge these technologies. These methods include demanding transparency through information requests and parliamentary inquiries, filing complaints with data protection authorities and taking legal action in courts.

Demanding transparency through information requests

To obtain information about automated systems, civil society organisations, specifically non-governmental and non-profit organisations working on digital rights and technology’s impact on communities, have used Freedom of Information (FOI) requests to seek information from their governments. Foxglove, a UK-based NGO that promotes the fair use of technology, supported efforts to obtain information about the Home Office’s automated categorisation and risk assessment tool used for processing short-term visa applications in the UK. Similarly, the Public Law Project (PLP) sought information on the Home Office’s automated categorisation and risk assessment tool used to identify suspected sham marriages. In both cases, the Home Office did not disclose all the information requested, and the basis on which applicants were classified remained unclear. In the first example, however, the information obtained helped Foxglove file a judicial review challenging the use of the algorithm under UK equality law.

In Germany, Gesellschaft für Freiheitsrechte (GFF, the Society for Civil Rights), a non-profit human rights organisation, made significant efforts to uncover how the German asylum authority (BAMF) uses automated mobile phone data extraction. Prior to these efforts, little information about the practice was publicly available. To gather information, GFF collaborated with journalist and computer scientist Anna Biselli and carried out extensive research. The information gathered helped GFF challenge the practice in the administrative courts and file a complaint with the Federal Commissioner for Data Protection.

Opposition political parties have used parliamentary inquiries to obtain information about automated tools. In Germany, for instance, members of the Left Party, Die Linke, made multiple attempts to acquire details about the automated mobile phone data extraction and automated dialect recognition tools used in the German asylum procedure. At the EU level, Patrick Breyer, a Member of the European Parliament, sought more information about a controversial tool developed within an EU-funded research project called iBorderCtrl, which aimed to develop an AI-based lie detector for use on people travelling to the EU’s external borders. However, the European Commission’s Research Executive Agency denied access to documents related to the project on the grounds that disclosure would undermine the protection of the commercial interests of the consortium of companies developing the technology.

Filing complaints with data protection authorities

Another method used to contest automation is filing complaints with data protection authorities (DPAs). In the EU, the General Data Protection Regulation (GDPR) allows complaints to be made to DPAs, which are independent authorities with specific powers. Once a complaint is filed, the DPA must investigate the facts, assess the case’s merits and issue a legally binding decision. If violations are found, DPAs can impose administrative fines and order corrective measures to remedy the violation. They also hold the power to halt or prohibit the use of certain technologies. For example, in 2019, the European Data Protection Supervisor found that the European Asylum Support Office’s (EASO) social media monitoring of asylum seekers and refugees was carried out without a legal basis and temporarily suspended it, concluding that EASO must have a clear legal basis for the practice in the future and that it must be subject to appropriate safeguards.

Complaint procedures before DPAs are advantageous as they are less formal, less complex and generally quicker than judicial proceedings. A complaint before a DPA is also less costly, as legal representation is not required. Moreover, DPAs have investigative powers and expertise in data protection law and IT matters. Civil society organisations have used this remedial track to stop or limit the use of new technologies. In Germany, for instance, GFF filed a complaint with the Federal Commissioner for Data Protection in March 2021 concerning the German asylum authority’s automated extraction of asylum seekers’ mobile phone data, arguing that the phone data analysis disregarded European data protection law. Alongside this complaint, GFF also successfully took legal action in the administrative courts (see the section below).

Taking legal action in courts

Civil society organisations and individuals have also challenged the legality of automated tools before the courts. In most cases, legal challenges have been brought on human rights grounds, arguing that the use of new technologies was incompatible with the rights to privacy, data protection and non-discrimination. Framing cases under human rights law has allowed courts to strike down certain governmental uses of automated tools or to set specific requirements for their use. One example is the landmark ‘System Risk Indication’ (SyRI) case in the Netherlands. SyRI was used to profile individuals, drawing on a large amount of personal and sensitive data collected from public bodies, in order to detect potential welfare and tax fraud. The claimants argued that this practice violated the European Convention on Human Rights (ECHR). In February 2020, the District Court of The Hague ruled that the practice was unlawful because it violated the right to privacy under Article 8 ECHR.

In the UK, the High Court of Justice declared the Government’s policy of searching, seizing and extracting data from migrants’ mobile phones unlawful under domestic law and Article 8 ECHR. Similar practices involving the seizure of mobile phones and automated extraction of their data in asylum processes in Germany were challenged in court by GFF. Unlike in the UK, however, the practice had a statutory basis in Germany: amendments to the Asylum Act allowed mobile phone data analysis in order to identify asylum applicants without documentation. Nonetheless, in practice BAMF was operating in violation of the proportionality principle required by the right to privacy. In 2023, the Federal Administrative Court in Germany ruled that BAMF’s routine evaluation of mobile phone data during the registration of asylum seekers, without first considering available information and documents, was unlawful. In this case, the court did not halt the use of the technology but set strict requirements for its use, with important repercussions beyond the individual case.

Another important line of legal challenge concerns the accuracy and biases of automated systems and, relatedly, the right to non-discrimination. Two refugees in Canada contested the use of a facial recognition system, citing its inaccuracy and its misclassification of black women and other women of colour. The court allowed the application for judicial review and returned the matter for redetermination by a differently constituted panel of the asylum authority. GFF also highlighted inaccuracies and errors in the German mobile phone data analysis case: according to the government’s own statistics from 2022, mobile phone data analysis reports produced unusable findings in over two-thirds (67.6%) of cases, which makes it imperative to reassess the reliability of such technologies in asylum procedures.

Finally, applicants have also complained about a lack of transparency that impairs individuals’ procedural rights. In two landmark cases before the Court of Justice of the European Union (CJEU), two civil society organisations (Ligue des droits humains and La Quadrature du Net) challenged the use of passenger data from extra-EU flights to prevent and detect terrorism. EU law allowed automated risk assessments to identify travellers requiring further examination by the authorities, which, according to the applicants, was incompatible with the EU Charter of Fundamental Rights. In its judgments, the CJEU required several safeguards for risk assessment technology to ensure compliance with the rights to privacy, data protection and an effective remedy. In particular, it highlighted the need for reliable technological tools, the obligation of individual review by non-automated means and derived transparency rights for individuals, such as the right to understand how the program works. The court also considered the use of self-learning algorithms incompatible with the right to an effective remedy, as they do not provide sufficient certainty for the human reviewer or for the individuals concerned.

In summary, when contestation is framed on human rights grounds, courts can halt unlawful practices, set specific standards for the use of technology or derive high transparency standards for automated systems, with beneficial effects beyond the individual case.

The AFAR NewTech Litigation Database

As the use of automated tools is still relatively new, methods of contesting them are also still emerging and evolving. We have developed the NewTech Litigation Database to capture the broad range of contestation methods and their outcomes. It is the first freely available online resource specialising in litigation against the use of new technologies worldwide. Currently, case law on contested uses of new technologies is scattered across individual national databases and is often not translated into English. Our database aims to overcome these access and language barriers through a user-friendly interface, visuals and advanced search tools. It includes the key details and a summary of each decision, covering judgments, decisions and opinions from national and international courts and data protection authorities, with a broad geographical scope thanks to the work of national rapporteurs worldwide.

At the time of writing (February 2024), the database includes records of fifty litigation cases concerning contested uses of new technologies in the public sector. These span several areas of public law, including education, administration of justice, law enforcement, migration and asylum governance, admission to public office and tax enforcement. Of the recorded cases, fifteen deal specifically with migration-related issues, including asylum. The database provides a detailed summary of each decision in English, categorised by sector, country and authority. It also indexes each decision or judgment by the type of contested technology (e.g. facial recognition), the emerging legal requirements (e.g. transparency), the ownership of the tool concerned (private or public) and the rights impacted. The database is a valuable resource for researchers, practitioners and policy-makers working across all aspects of new technologies and human rights. It also aims to raise awareness and provide transparency about the extent and impact of new technologies, informing and supporting the work of legal actors and civil society organisations.
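To make this indexing scheme concrete, the minimal sketch below models a case record and a facet-based search in Python. The field names, types and example values are our own illustrative assumptions based on the categories just described; they are not the database’s actual schema or software.

```python
from dataclasses import dataclass, field

@dataclass
class CaseRecord:
    """One litigation record, modelled on the categories described above.

    All field names here are illustrative assumptions, not the
    database's actual schema.
    """
    title: str
    country: str
    authority: str                 # deciding court or data protection authority
    sector: str                    # e.g. "migration and asylum governance"
    technology: str                # e.g. "facial recognition"
    ownership: str                 # "public" or "private"
    legal_requirements: list = field(default_factory=list)  # e.g. ["transparency"]
    rights_impacted: list = field(default_factory=list)     # e.g. ["privacy"]
    summary_en: str = ""           # English-language summary of the decision

def search(records, **facets):
    """Return the records matching every requested facet exactly."""
    return [r for r in records
            if all(getattr(r, k) == v for k, v in facets.items())]

# Hypothetical usage: find public-sector facial recognition cases.
records = [
    CaseRecord(
        title="Example v. Asylum Authority",
        country="Canada",
        authority="Federal Court",
        sector="migration and asylum governance",
        technology="facial recognition",
        ownership="public",
        legal_requirements=["accuracy"],
        rights_impacted=["non-discrimination"],
    ),
]
for record in search(records, technology="facial recognition", ownership="public"):
    print(record.title, "-", record.country)
```

In the real database, such facets are exposed through the search interface rather than code; the sketch simply shows how categorising each decision along these dimensions makes cross-country comparison and filtering straightforward.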

Conclusion

Our research into existing contestation methods reveals that civil society organisations and activists have taken most of the actions, while few have been initiated by the individuals affected by automated tools, possibly due to a lack of knowledge and resources. Our analysis also found that actors have challenged automated tools through various means. Because of the lack of transparency, they may need to begin by seeking details about the workings of the tools concerned via information requests. Once they have obtained enough information and evidence, they can take legal action in court. Alternatively, filing complaints with DPAs can provide a faster, less costly remedy for data protection violations. We strongly encourage anyone interested in challenging harmful uses of new technologies to stay informed and take action by using the NewTech Litigation Database. It provides valuable information on existing legal strategies and case law, which can help individuals protect their rights against those in power.

 

Francesca Palmiotto
Post-Doctoral Researcher, Centre for Fundamental Rights, Hertie School
f.palmiotto@hertie-school.org X: @FPalmiotto

Derya Ozkul
Assistant Professor, Department of Sociology, University of Warwick
derya.ozkul@warwick.ac.uk X: @DeryaOzkul


[1] The information presented in this article, including data from the NewTech Litigation Database, has been collected as part of the AFAR Project, funded by the Volkswagen Foundation.

[2] If you are interested in the project, follow our countdown to the launch on X: @AFARproject and visit the AFAR project website to access the database.
