DATA LIFE Conference


Conference theme: Exploring the multiple dimensions of data life through critical data studies

One-day conference co-organized by INTERSECT research group, DATA LOSS project, DATA STORIES project, The Centre for Culture and Technology, and The Aesthetics of Bio-Machines.

The DATA LIFE Conference invites scholars, researchers, and practitioners to submit papers that delve into the complexities and implications of data in our world. As data becomes an increasingly integral part of our lives, it is vital to critically examine the power dynamics, ethical issues, and social impacts associated with its production, management, and use.

Two keynote speakers will open and close the conference: Stefania Milan (University of Amsterdam) and Rob Kitchin (Maynooth University).

We welcome contributions that explore, but are not limited to, the following themes:

  1. Data Power and Politics: examining power structures in data collection, processing and use.
  2. Data Justice: analysing inequalities and biases in data systems and algorithms.
  3. Histories and Cultures of Data: historical and cultural perspectives on the evolution of data management practices.
  4. Data Policies and Regulation: challenges and opportunities in local and global data governance.

We look forward to your contributions to the DATA LIFE Conference and exploring the multifaceted dimensions of data with you.

Further information

The CFP is now closed. If you wish to register to attend the conference, please contact Louis Ravn.

It will only be possible to attend the conference in person.

Programme

09:00 Opening

09:30 Introduction: Juliette Davret

09:45 Keynote Stefania Milan (45 min talk, 15 min Q&A)

10:45 Coffee break

11:00 Session 1

 

1.1 Samuel Mutter, Danielle Hynes (Maynooth University)

Data Narratives of Irish Housing, Planning and Property

In the midst of a housing crisis in which accusations abound over who and what are holding up the supply of new homes, data is mobilised within a wide range of narratives: to offer certainty in uncertain times, to persuade, provoke and sell. At the same time, data is criticised for its partiality, instability and questionable veracity. This paper uses initial findings from the Data Stories project to examine the data narratives of housing, planning and property in Ireland: how actors across the Irish system mobilise data to express desires, make arguments, justify decisions, and facilitate actions. It draws upon qualitative analysis of a corpus of documents produced by state, business, academic and civil society actors on the theme of the planning/development pipeline, supplemented by a more wide-ranging set of 124 semi-structured interviews with housing, planning and property stakeholders conducted by the Data Stories team. From these empirics the paper sketches out four prominent – yet partial and overlapping – data narratives, naming these as governmental, commercial, ideological and affective. Referring to work on narrative qualities of data (e.g. Dourish & Gómez Cruz, 2018; Currie & Hsu, 2019), for each data narrative we identify associated affordances (Fjørtoft & Lai, 2021) and temporalities, and point to possible data valences (Fiore-Gartland, 2015).

Additionally, we focus in each case on the specific ways in which data and narrative elements scaffold one another into a coherent or digestible form suited to telling (or countering) specific stories. We find the different data narratives to express differing logics, such as the evidence-based control and transparency of governmental tracking, the opportunity (for competitive advantage) of commercial ‘insights’ and forecasts, the ideological mobilisation of data to suit received explanations and long-standing tropes or visions, or the affective use of data to provoke or inspire emotion, attention or rapid action. While in tension, these logics also overlap, and are often found to coexist within the same processes of scaffolding. As such, our conclusions reflect on the socio-political ramifications of such processes of scaffolding in the context of Irish housing and planning, and the theoretical implications for thinking data and narrative together.


1.2  Matthias Leese (ETH Zurich), Michael Meyer (University of Lausanne)

Policing à la bricolage: Creativity and care in police information systems

“What I see are silos. [...] We have lots of silos and we move our data from one silo to another, from one application to another.” This is how an IT specialist, during an interview, talked about the information infrastructures in the police department that they worked at. The interview had been set up to learn about the management of police data from a technical and infrastructural perspective, but quickly turned into the interviewee expressing some level of frustration about the overall lack of appreciation and resources that the department devoted to its databases and related tools such as software and user interfaces. Needless to say, the IT specialist was not happy about the existence of silos in the first place and was of the opinion that databases should, on the contrary, be properly connected to ensure seamless data flows, interoperability, and management processes. In fact, not only the existence of silos, but also the makeshift nature of some of these database silos, came as a bit of a surprise during the field research that the interview was a part of. After all, one would expect that in hierarchical organizations such as the police, information infrastructures would be more coherent – and, more importantly, more coherently managed. And yet, during many other interviews, as well as meetings and demonstrations, it became apparent that the above-mentioned silos could be understood as emblematic of an information systems architecture that defied such expectations. This paper takes the existence of information silos in police organizations as a starting point to engage with improvisation and creativity in police information systems. Building on the work of information systems scholar Claudio Ciborra, and more specifically on his argument that creative, makeshift forms of tinkering have the potential to re-make information systems in ways that are better suited to the everyday informational needs of those who work with them, the paper shows how, in the police department under study, a custom-built database silo was used to separate information on vulnerable persons from the rest of the information ecosystem, thus playing a key role in enabling the social services unit to carry out its protective mandate. Following Ciborra, the paper concludes that rather than trying to subject such rogue infrastructures to top-down organizational control, there is value in bottom-up approaches to information systems that embrace improvisation rather than stifling it. The presented case study thereby illustrates the role of information systems in how the police engage with vulnerable populations and how bricolage work enables the social services unit to assume a care function.


1.3  Irina Shklovski, Natalie Avlona (University of Copenhagen)

The challenge and paradox of data quality

Data is the core requirement of the algorithmic systems that are rapidly colonising every aspect of the digital society. Despite the many critical debates, conferences and special issues on the topic of data, little attention has been paid to the notion of data quality. Amidst the calls for responsible AI development and data justice, data quality seems to be simply presumed, even though the injustices, biases, and failures of many AI systems have been clearly tied to the problem of data quality. The recently approved EU AI Act sets out high-quality training data as one of the requirements for the development of AI systems classified as high-risk. While data quality, as a concept, has long been discussed and defined in the domains of information systems and data management, there is broad agreement from a technical point of view that there is a lack of systematic research on assessing data quality. Existing metrics are difficult to apply in assessments of training data for AI system development, and current definitions continue to rely on the original notion of "fit for customer use" developed in the mid-1990s. Analysis of existing industry standards on data quality demonstrates significant disparities in terminology and scope with respect to the apparent expectations of the EU AI Act. Critical research notes that among AI developers, issues of data quality tend to be dismissed in favor of model considerations. Very little of this research considers how data quality is achieved. Where such considerations are extended, they tend to focus on data provenance and the reputation of data providers as proxies for quality assurance. In this paper, we ask what it takes to achieve data quality in the creation of training data for medical AI systems, classified as high-risk by the EU AI Act. We explore justifications and logics of practice as data experts work towards the different dimensions of data quality, such as accuracy, structure and timeliness, and how these dimensions come to be structured, limited and ordered by the dimensions of regulatory compliance. We find that the notion of data quality operates as a vague ideal, a set of unachievable goals that in the end produce datasets out of a series of compromises that must be made. Unsurprisingly, datasets and their flaws are deeply situated in the contexts of their production, yet much of this situatedness is under-documented. Eventually, claims of quality begin to rely on producer reputation when datasets travel or when arguments must be made that AI systems are constructed responsibly.


1.4  Jef Ausloos (University of Amsterdam)

The Academic Data Gaze

Civil society, including academia, has long decried the legal, technical, and financial obstacles preventing it from scrutinising the data, algorithms and general operation of digital infrastructures that have nested themselves into every part of society. Over the years, a wide variety of measures have been proposed and tested to overcome these obstacles, not least transparency and data access provisions in a growing number of digital policy initiatives. The debates on (academic) researchers’ claim to data in, and about, digital infrastructures reveal deeper questions on the politics of knowledge production. Underlying the growing efforts to improve researchers’ access to data is a strong normative assumption that such access (and data) is generally beneficial. While I sympathise with many such efforts, I believe it important to critically engage with these underlying assumptions by also locating them in the political economy around them. Failing to do so may render academia complicit in a variety of problematic dimensions of digital technology. Moreover, it may amplify several issues already inherent to dominant academic research practices, including the Matthew effect. This paper attempts to condense these different issues and dynamics by theorising the notion of the ‘academic data gaze’, compounding existing strands of critical work into a reflective exercise for academia. The paper will situate the academic data gaze within the hegemony of mainly Western academic research and its underlying logics. I take inspiration from Beer’s conceptualisation of the data gaze, which is fundamentally intertwined with modern-day forms of capitalism and its deeply extractive logics. The concept acknowledges the artificiality of data and builds on growing recognition that datafication processes – often (co-)constituted and adopted by academia – have very problematic roots: from oppressive and racist regimes underpinning slavery and Nazi Germany, to the persecution of black and brown people in the US, as well as centuries of colonial extractivist practices more broadly. Scientific research has played a crucial role in enabling and validating these regimes, reproducing social hierarchies. This historical trajectory of both science and datafication processes cannot simply be ignored and should actively factor into a constant self-reflective praxis. The academic data gaze emphasises academia’s complicity in establishing the ‘data imaginary’ as a dominant paradigm structuring contemporary societies. Despite ample critical scholarship, datafication processes have developed into constitutive elements of modern modes of scientific knowledge production. As such, academia’s growing adoption of data-driven research methods has significant political economic reverberations, legitimising the data gaze’s salient claims to objectivity, neutrality, rationality, and universality. Yet it often forgoes how ever-expanding processes of metrification, extraction, and abstraction deprioritise or invisibilise certain modes of knowledge (production), reinforce a variety of power dynamics, and may cause harm. One of the paper’s guiding questions will therefore be: how, and at what expense, do legal data access frameworks and data-driven research methods construct or reinforce specific frames of truth and knowledge? The paper will be a plea for academia to confront its ‘epistemology of ignorance’ and continuously reflect on the emergence and use of dominant research methodologies.


1.5  Frederik Schade (University of Copenhagen)

From cultures of destruction to cultures of deletion in the digital state bureaucracy

In the context of the digitalized bureaucracy, academic literature has focussed on the capacity of contemporary state apparatuses for data production rather than data destruction. The bureaucratic destruction of data and information, however, has a long history, as governmental practices of knowledge production have been accompanied by corresponding practices of institutional forgetting (i.e., “agnotological” practices). Based on an empirical investigation in the context of the Danish government, this paper proposes that digitalization signals a qualitative shift in the agnotological orientation of state bureaucratic institutions as they shift from cultures of destruction to cultures of deletion. While often equated, the now increasingly institutionally salient notion of data “deletion” tied to digital technologies implies a qualitatively different logic and semiotic-material configuration than that of “destruction” in both digitalized and paper-based regimes. The instantaneous physical-mechanical violence, finality, and publicity of destruction has been closely associated with traditional ideas of sovereignty. This paper proposes that deletion, in contrast, is intrinsic (rather than extrinsic) to computational – and thus, administrative – systems, restorative of the medium (rather than de-structive), invisible to the human eye (rather than public), and potentially reversible (rather than final). Furthermore, in a political context where digitalization and sustainability figure as prominent societal ideals, software-based deletion appears as the “sophisticated” solution relative to destruction and the risks of electronic waste production (i.e., whereas destruction “breaks” and “tears apart”, deletion “cleans” and “wipes away”). These characteristics make digital deletion both attractive and problematic for the digital state bureaucracy due to deletion’s potential for programmability, automation, and sustainability, on one hand, and its problematizations of institutional oversight, manageability, and certainty, on the other. In response to the increasing importance ascribed to the possibility of “secure deletion” in both emerging law on privacy and IT security, the paper concludes with a discussion of the prospects of state sovereignty, administration, and accountability under the paradigm of deletion and its potential effects on processes of (digitalized) democratic state formation.


30 min discussion and Q&A

 

12:45 Lunch

13:45 Session 2

 

2.1 Klaus Bruhn Jensen (University of Copenhagen)

Encoding and Decoding in the Human-Machine Discourse

The field of human-machine communication (HMC) research (Guzman et al., 2023), over the past decade, has embraced a new category of communicative agents – chatbots, personal digital assistants, social robots, and more. Compared to computer-mediated communication (CMC) research (Lipschultz et al., 2022), in which people are said to communicate through machines, HMC research asks how – in what sense – people communicate with machines. More than channels, some machines serve as partners in communication. More than transmitting information, they participate in the co-construction of meaning according to codes that are, at once, socially contextualized and computationally implemented. As such, HMC studies represent one of the most sustained interdisciplinary initiatives to bring the theories and methodologies of communication research to bear on the social uses and consequences of artificial intelligence. As recognized by a ten-year review of HMC studies published in communication journals, however, a “fully fleshed-out HMC theory” (Richards et al., 2022, p. 54) is still in its early stages. Building on Stuart Hall’s (1973) seminal work on the encoding and decoding of television, this paper presents a model of HMC that conceptualizes and operationalizes the distinctive forms of encoding and decoding which constitute human-machine interactions, situating HMC in the wider field of communication theory (Craig, 1999). First, the paper briefly reviews the history and legacy of Hall’s intervention, which helped move communication theory beyond both the linear models of social and technical sciences (Lasswell, 1948; Shannon, 1948) and predominantly text-centered humanistic models (Jakobson, 1960). While most commonly associated with the 1980s–1990s research tradition exploring the surprisingly diverse decodings of mass media content by remarkably active audiences (Liebes & Katz, 1990; Lull, 1980; Morley, 1980; Radway, 1984), Hall (1973) had, in fact, outlined a comprehensive model of the production and circulation of meaning in society, identifying three conditions structuring the meaningful discourses that make up communication: technical infrastructures, structures of production, and frameworks of knowledge. Second, the paper translates Hall’s (1973) model to the study of human-machine discourses: in digital communication systems, humans and machines variously encode and decode data to make sense of each other. To capture the distinctive nature of the resulting human-machine discourses, the third section introduces the category of metacommunication (Bateson, 1972), which has been all but neglected in both HMC research and critical data studies. In addition to accounting for some of the uncannily human-like qualities of human-machine communication, metacommunication in digital circumstances serves to specify some of the political and ethical dilemmas presented by the long lives of data. The fourth and final section of the paper raises the question of how communication theory going forward may accommodate human-human, human-machine, as well as machine-machine discourses: “How do we accommodate the study of interactions between people and between people and AI within the same discipline?” (Guzman & Lewis, 2020, p. 81).


2.2  Magdalena Tyzlik-Carver (Aarhus University), Lozana Rossenova (TIB – Leibniz Information Centre for Science and Technology University Library), Lukas Fuchsgruber (TU Berlin)

Fermenting Data, or what does it mean for data to get a life? (experiments in Curating Data)

In popular parlance, telling someone to ‘get a life’ is an expression of contempt for their lack of experience of the real world; it defines a person as one without friends or hobbies, concentrated on mundane activities mostly defined by job obligations. In this paper we speculate on what it means for data ‘to get a life’, by introducing our collaborative curatorial and research project Fermenting Data, which brings together practices of fermentation and data processing. Fermenting Data, initiated in 2020, aims to engage in sensing and sense-making with data, with fermentation acting as both a metaphor and a material process through which to speculatively engage with data. The idea to mix these two seemingly unrelated practices and processes starts from a desire to reclaim data as a common practice that is available and accessible as broadly as possible, and to engage artistic and curatorial rigour through speculative methods in order to open up opportunities for exploration and research that are afforded by practice-based methodologies. This is why the project is both a curatorial and a research task, carried out through workshops, public exhibitions, and the use of free, collaborative software such as Wikibase and its linked open data database. If curators’ care for objects is also about making them accessible to different publics, our intention here is to think with the project of making data public as a form of ‘getting a life’. We see this as an epistemological quest about knowledge and how it is made with data, who is involved and at what stage. The ubiquity of data does not necessarily translate into accessibility. Here the issues are many, and they do not only involve making data open, promoting skills, developing ethics of data use and re-use, and making data infrastructures public; rather, they start with basic questions about data and what they are: who is involved in generating data, why do we need so much of it, and how are they part of knowledge making? A speculative prompt – what if data could be fermented? – introduces a playful entry into the data processing world for anyone, expert or not, who is intrigued by the vision of fermenting data. While we translate the idiom of getting a life as data becoming social and public, we understand this process in an expanded way that sees the social in more-than-human terms. Inspired by the work that bacteria and enzymes do during fermentation, we insist on making information and data tangible, and we challenge ourselves with the question: how do we learn from the life-transforming properties of the bacteria that created this world, and how can we bring this knowledge back into data processing to replace extractive practices of processing data with those based on symbiosis and self-maintenance?


2.3  Kristian Byskov, Tina Ryoon Andersen (Norwegian University of Science and Technology)

Playing the Memory Game – An Artistic Research project on the making of a memory learning software

As we increasingly use digital devices in our everyday lives, our ability to remember is gradually disappearing, and we are facing the phenomenon of “digital amnesia”. This scenario is the backdrop for the experimental theatre performance Labyss by The Algorithmic Theatre, which was presented at Den Frie Center for Contemporary Art in Denmark in 2023 and at Tai Kwun Art Center in Hong Kong in 2024. Labyss explored this critical issue through the lens of a fictitious tech start-up, whose core product – a memory-learning software – promised to collect, reconstruct, and preserve intimate personal memories using advanced AI algorithms, positioning itself as a technological solution to a technologically produced problem. Labyss, concurrently, was the result of an artistic research project that delved into what seemed to be the new frontier in tech development, intimacy, questioning how digital technologies increasingly insert themselves into the most intimate parts of our bodies and lives, thereby challenging the boundary between technology and intimacy. As part of the artistic research project, we wanted to explore: How does software understand what memories are? How are memories constructed in the first place? Can data be sensuous? In what ways can our memories be harvested and turned into data? What kind of knowledge about technology can be produced by the application of artistic research methods? At the conference, we would like to present learnings from the making of a memory-learning software – a process in which we trained a model so that it could converse with people about their memories with the intent of eliciting more detailed stories. We will explain the processes of memory collection, algorithmic learning, and the implications of turning memories into data, using Labyss as our case study. We will also reflect on the experiences we had when bringing the experimental theatre performance and the memory-learning software from Denmark to Hong Kong. Additionally, we will reflect on knowledge production at the intersection of art and technology and highlight how artistic research methodologies – spanning visual and performing arts – can contribute to a deeper understanding of the social and ethical dimensions of data life. Finally, we will touch upon how performance can serve as a platform for exploring our evolving relationship with technology. The Algorithmic Theatre is an interdisciplinary working group and an investigation of what new algorithmic technologies do to our bodies, identities, and society. It is an artistic research project at the crossroads between performing arts, visual arts, and programming, which takes a critical, investigative look at algorithmic mapping and monitoring – in short, the increased algorithmization of life. At the DATA LIFE conference, Kristian Byskov (visual artist and writer) and Tina Ryoon Andersen (curator) will represent the group.


2.4  Shirley Chan (Lund University)

Preserving fan imaginaries

Over time, how can data generated by platforms and communities be understood? What can the data tell us about these settings? How do the ways in which data is created, used and circulated affect how the communities will be represented? My project sheds light on these questions, aiming to gain knowledge of what platforms and online communities do in the present and what can be understood about them in the future. Specifically, I approach these questions with preservation in mind, attending to the challenges of preserving the data and the evolving and dynamic contexts they emerge from. The internet and social media platforms have lowered the threshold for community engagement. For fans, the internet and social media platforms have offered significant possibilities to find like-minded people with whom they can discuss, share fan work, and create collective identity, meaning and memories. Simultaneously, the social media platforms’ datafied and algorithmic structures and governance are entangled with their practices, activities, and relations. Together, the platforms and communities enact an infrastructure of fandom through which imaginaries of being part of a fandom, community and platform emerge. How the infrastructure of fandom is enacted shapes data creation, usage, and circulation in the present and the understanding of the communities in the future. Even in the present, data need to be related and given context to be understood and representational. Accessing and understanding this data over time will be challenging due to the vast volumes created, its dynamic and interconnected character, and the fleeting and evolving context. These aspects make it difficult to interpret data over time, risking turning it into mere traces. To make the data meaningful over time, information about its context of creation and use is required. But what is the context of the infrastructure of fandom? What should be included, and when does it end? How does it affect possible future representations of the communities? Thus, examining how the infrastructure is enacted through how data is created, used, and circulated becomes crucial. This allows us to gain insight into what context could be and the challenges of defining and limiting what it is. For the study's purpose, I conduct an ethnography inverting the infrastructure. This entails making the practices, activities, and relations supporting the infrastructure visible.

Specifically, I attend to the dynamics of the infrastructure, including the alignments, tensions, and discrepancies underlying the data. In the analysis, these dynamics reveal the multiple fan imaginaries underlying the data, the different meanings and values enacted through the data and when the data are actualised. The field sites consist of two fan communities surrounding the Marvel Cinematic Universe on the platforms Reddit and Tumblr. I also draw on critical events that affected the infrastructures, like the Tumblr porn ban in 2018 and the Reddit APIcalypse in 2023. The ethnographic methods adopted include participant observation, semi-structured interviews, and document analysis. The thesis contributes to advancing knowledge of how we can develop preservation in ways that enable future access, use, and understanding of the digital culture emerging now.


2.5  Paul Heinicker (FH Potsdam)

Data Sadism - Desires in Data Production

Talking about data expresses a desire. Dominant motifs such as data-driven, big data or, most recently, artificial intelligence always reflect expectations. Data is supposed to drive processes that can only be processed in their unmanageable mass through automated calibration. As contemporary narratives about and with data, they formulate dreams and desires in dealing with their seemingly unbridled emergence. In the so-called data age, the focus is less on questions of what data is, where it comes from and why it is needed, and more on the hope of being able to do something useful and innovative with it.[1] So why do we want data? While the concrete production and use of data is intentional, i.e. conscious, the desire, longing or fear behind the data cannot always be named in concrete terms and is rather unconscious. Despite all the automation, it is mentally and physically involved people who initiate data processes.[2] It should therefore be noted that unconscious dynamics are inscribed in every data production. Many of the actual reasons for data remain hidden.

However, an attempt can be made to make these unconscious elements visible, for example by reflecting on the existing psychological arrangements that produce this data. This is already evident in the productive application of the concept of apophenia by Klaus Conrad[3] or in Jacob Johanssen's research on ‘data perversion’[4]. Analogously to these predecessors, I place at the centre of my consideration a data-specific extension of Jacques Lacan's sadistic schema into ‘data sadism’[5]. The concept of data sadism aims to show that alongside all rational decisions in data production, there are always irrational elements involved that are completely unreasonable in contrast to the desired data neutrality. In the portmanteau word data sadism, the idea of a sadistic desire is transferred to the human will to produce data. The thesis is that in addition to conscious motivations behind data abstraction, such as epistemic, economic or power-political motives, there are also unconscious dimensions behind the pursuit of data that coincide with pleasure principles. The question of why we create data and what desire lies behind data abstraction should thus become more describable and form the programme of a psychodynamic data critique.


30 min discussion and Q&A

 

15:30 Coffee break

15:45 Session 3

 

3.1 Katie MacKinnon, Nanna Bonde Thylstrup, Esmée Colbourne (University of Copenhagen)

Arctic Archives: the politics of data mobility and migration in GitHub's Archive Program

On February 2, 2020, GitHub curated an algorithmically selected “greatest hits” collection of 17,000 data repositories, migrating them from GitHub servers to a deep time archival format known as Piql Film. This data was then distributed to four locations: the Bodleian Library in Oxford, the Bibliotheca Alexandrina in Egypt, Stanford Libraries in California, and the Arctic World Archive in Svalbard, Norway. To ensure preservation for 1,000 years, the open-source software code underwent several transformations. GitHub’s success as a platform relies on its dedicated user base of developers who continuously monitor, update, and maintain the code. Leading up to the archival deadline, developers were informed of the necessary conditions for inclusion: all repositories with dependencies had to be included in the default branch, and no bugs or vulnerabilities could be fixed after this date. Once selected, the code was shifted to an offline, archival format, rendering it inaccessible. This paper uses the GitHub archive as a starting point to explore the politics of data mobility and migration.

Moving beyond abstract notions of data flows or enclosures, we argue that the GitHub archive, in its various physical forms, demonstrates how data migration processes involve “the artful connecting of time, space, material, and immaterial elements into a ‘mobility effect’” (Coopmans, 2006). These processes challenge our understanding of the materiality of digital data and their geopolitical and socio-material entanglements. First, by examining the data migration issues of the GitHub archive, we complicate narratives that view data mobility solely in terms of journeys or replications. Instead, we advocate for a perspective that considers both aspects in a situated and medium-specific manner. Secondly, we use this empirical case study to propose new theoretical insights into the politics of data mobility, by integrating theoretical perspectives from geography on mobility, motility and migration with critical data studies. This paper thus offers conceptual and empirical contributions to ongoing discussions in critical data studies on the geopolitics of data and the lifecycle of data as it replicates and circulates across time and space.


3.2  Megan Leal Causton, Rocco Bellanova, Lucas Melgaço (Vrije Universiteit Brussel)

Curating and Deleting: Archival frictions in European security's data infrastructures

This study investigates archival frictions in the governing of the data (and their infrastructures) at the core of European law enforcement cooperation. Critical approaches to criminology and security studies often discuss the voracity of algorithmic systems without paying much attention to the socio-technical, legal and political implications of data curation and deletion in actual instances of big (personal) data. Drawing on insights from transdisciplinary scholarship on data practices, we unpack the controversy about curation and deletion that unfolded from 2019 to 2024 between the European Union Agency for Law Enforcement Cooperation (Europol) and the European Data Protection Supervisor (EDPS).

To delve into the intricacies of these frictions, this contribution brings together conceptual perspectives and tools of archival and information studies, criminology, and critical data studies. Archival friction refers to the tensions, conflicts, or challenges that arise within archival practices due to various factors such as organizational policies, technological limitations, legal regulations, ethical considerations, and cultural dynamics. Focusing on the Europol/EDPS controversy, we explore how archival frictions participate in the governing of European law enforcement data infrastructures, and how they shape relations of power between institutions. Our ambition is to empirically test and illustrate the epistemic potential of a transdisciplinary approach pivoting on archival studies, and notably what vantage points it can offer to those literatures in criminology and critical security studies that already focus on European law enforcement. Through this analysis, we aim to empirically demonstrate the value of a transdisciplinary approach rooted in archival studies and digital humanities/media studies. This perspective offers new ways of uncovering and understanding the socio-political and legal dimensions of data governance. Ultimately, this work in progress contributes to discussions on data power, politics and transparency (for justice and accountability) by foregrounding the complexities of archival (information) practices in shaping the life of data in European law enforcement.


3.3  Christoffer Koch Andersen (University of Cambridge)

Embodying Error: (Un)liveable Data Lives and the Possibilities of Trans Data

We are situated within an infinite data flow, where data is presumed to assemble, circulate and flourish with the aim of exponentially improving and enhancing our lives, but to trans people this promise is not a given; for trans people, data spaces can be deathly. In contrast to the neoliberal belief in data as the global life optimiser, in this paper I pivot to a critical alternative way of thinking about what ‘data life’ entails as I ask: What if data (and the privilege of producing data), instead of mainly optimising our lives, materialise as the violent decision over who can live and embody a liveable data life, while subsequently deciding who then must die based on colonial and cisnormative legacies of power inscribed into algorithmic systems? Overlooked are the ominous and strategized techniques of how data produce (or do not produce) information about our identities and lives, techniques that extend legacies of colonial and cisnormative violence which destroy non-white, non-cisgendered subjects that do not fit into the binary template of life or the binary code of possibility. Rather than promoting sustainable forms of liveability, the formation of data lives – through its inherent sociopolitical entanglements and power relations – automatically invokes the white and cisgendered subject as the datafied default in the machine (Amaro 2023; Benjamin 2019; cárdenas 2017; Hicks 2019; Keyes 2018; Scheuerman, Pape & Hanna 2021), resulting in the imprinting of racialised trans subjects as faulty ‘errors’ whose deaths are utilised as variables in classificatory algorithms and data representation to determine who becomes able to embody a legitimate data life. Based on these crucial observations, this paper investigates the properties of ‘data life’ in two ways: (1) by mapping the complex legacies of gendered injustices, how they manifest in contemporary algorithmic violence and bind the production of normative data lives at the expense of trans data lives, and (2) by exploring how trans people – trapped within the binary codes of the representative data life – inhabit a liminal yet powerful space between the liveable/unliveable where the condition of ‘error’ and the state of acceptability of data lives in algorithmic systems are resisted, contested and reinvented. Importantly, this paper adds a dimension to critical data studies focused on exposing the violent politics of data and the power relations underlying the coded injustices disproportionately directed at non-normative trans data lives, with the aim of utilising this precarious data position to reimagine the possibilities of trans data production, digital liberation and liveability beyond deathly binary data infrastructures.


3.4  Vanessa Ugolini (Vrije Universiteit Brussel)

(Against) the Death of Data

The ambition to algorithmically analyse large amounts of personal information is a hallmark of national and international security. Persons of interest are increasingly inferred through a plethora of algorithmic techniques, from data mining to anomaly detection, used to identify suspicious patterns within datasets. This forward-looking approach in the conduct of law enforcement investigations is underpinned by the belief that datasets provide invaluable insights that inform decision-making across many aspects of everyday life. With the aim of cultivating a more nuanced understanding of the practices related to the collection, storage and processing of data at the EU level, this paper addresses questions surrounding the socio-political and socio-technical conditions of the ‘death’ of data. Despite the growing attention to the material constitution of datasets as repositories of future knowledge, the death of data remains something of a blind spot in critical approaches to security and data studies. Yet debates about the constitution and circulation of data are as crucial as questions about its trans-temporal nature. This is notably because provisions on data retention and deletion are emphasized in EU case law and have more recently become the focal point of major controversies among European institutions (e.g. between Europol and the EDPS). These provisions not only promise to regulate the political life of data, but essentially create an expanded (data) space which is generative of multiple forms of ‘death’. In particular, this paper discusses three forms: anonymisation, depersonalisation and complete erasure, as foreseen by the Passenger Name Record (PNR) and Advance Passenger Information (API) Directives. The need to speak about data ‘afterlives’ and their political implications is particularly relevant in light of the latest provisions that incentivise the re-use and re-purposing of multiple sources of data across EU large-scale information systems for security and border management. Therefore, by developing a critical security and data studies perspective on the death of data, this paper invites us to think about non-linear understandings of the data lifecycle, and to reflect on the decay of data and its expected erasure, which remain mostly overlooked – or at least not explored as part of the power structures associated with data production, circulation, management and use.


30 min discussion and Q&A

 

17:30 Break

17:30 Keynote Rob Kitchin (45 min talk, 15 min Q&A)

18:30 Reception and networking