Dr. Dimitri Van Den Meerssche
The use of algorithmic tools by international public authorities is changing the way in which norms are made and enacted. This ‘seismic shift’ in global governance, as Benvenisti describes it, entails important distributive consequences: the digital turn not only empowers specific actors and corporate forms of expertise but also engenders new modes of social sorting based on algorithmic placements of people in patterns of data. This contribution focuses on the emergent inequalities – on the newly actionable social divisions – that machine learning models and data analysis thereby import in the domain of global governance. The lines of discrimination and distribution drawn by such algorithmic practices of association and (risk-based) stratification, I argue, should be a matter of greater concern to international law(yers). I thereby conceptualize the salience of algorithmic decision-making processes from a distributional and not a procedural perspective – from a perspective of inequality and not privacy, data protection or transparency. This intervention aims both to reveal the distributive effects of data-driven decision-making and to conceptualize the challenges posed by this algorithmic governmentality to the prospects and emancipatory promises of collectivity, solidarity and equality entertained in modernist imaginaries of international law.
The site selected for the empirical assessment of data-driven inequality is the ‘virtual border’: the ecology of interoperable databases, screening rules, triaging systems and algorithmic risk assessment tools ‘aimed at visualising, registering, mapping, monitoring and profiling mobile (sub)populations’. My analysis thereby intersects with accounts from critical security studies that have qualified borders not only as instruments for territorial division or delineation but also as sites of definition, distribution and discipline. The proliferation of digital technologies in border security and migration management has destabilized traditional understandings of borders as ‘rigid, immobile territorial frontiers’, and inspired heuristics – the ‘shifting border’, ‘mediated border’ or ‘border mosaic’ – that map out the altered geographies, infrastructures and performative effects of bordering practices. The ‘virtual border’ analysed in this article is scattered across digital systems without fixed territorial coordinates and operates as a central site of data extraction and social sorting: it is a system of discrimination and division where the standards of hierarchy or inclusion, as I will show, are continuously kept in play. This borderscape is a centre of calculation where data flows, bodies and scattered signatures of past passages or events are assembled as scores amenable to immediate institutional action. This practice of conversion is politically performative: it is where identities are forged and where inscriptions of ‘risk’ circulate, opening or closing doors of opportunity and access. It is where data doubles dwell.
The article focuses on the institutional and operational framework of ‘virtual borders’ that is currently under construction in the Schengen Area. The material is tied to two case studies of ‘smart border’ pilot projects led by consultancy consortia and overseen by Frontex. Responding to the need for new technologies expressed in recent EU regulations on integrated border management, automated visa waiver systems (ETIAS) and the interoperability of data systems, these pilot projects reveal the creation of an informational infrastructure and decision-making architecture of ‘virtual borders’ in Europe. In developing artificial intelligence tools for risk assessment and predictive analytics at the border, both pilot projects instantiate the EU’s explicit strategic ambition to ‘leverage’ artificial intelligence for ‘Border Control, Migration and Security’. This ambition recently materialized in a ‘roadmap’ – drafted by Deloitte and published by DG Home – that identifies nine areas of opportunity for artificial intelligence, ranging from ‘vulnerability assessment’ in asylum applications and the use of data analytics to detect ‘irregular travel patterns’ to the algorithmic screening and ‘triaging’ of visa applications. My analysis of the two pilot projects – iBorderCtrl and Tresspass – is aimed at grasping how systems of algorithmic association and stratification are enacted and employed at the border. How is extracted data clustered into ‘actionable’ computational categories? How are subjects sorted and scored in specific systems of surveillance? Focusing on both these ‘nominal’ and ‘ordinal’ aspects, on both grouping and grading, the article gives an account of the specific forms of inequality – of the novel ‘social hierarchies’ – that are engendered by practices of algorithmic association.