
Research Agenda

Research Group 18

The research group “Quantification and Social Regulation” investigates whether and how regulation changes when it makes use of contemporary automated information and decision-making systems. Ubiquitous computing, Big Data and Artificial Intelligence entail new practices of quantification and valuation that have yet to be characterized as modes of regulation and governance and assessed with regard to their democratic implications. The research group undertakes this endeavor by combining perspectives from social science and computer science.

Regulation can be understood as the intentional steering of the behavior of individual and corporate actors by state and non-state organizations in order to attain a pre-specified goal (Black), and it is a key component of modern social life. Societies can thus be characterized by the specific tools of regulation that they use. In recent decades, scholars have drawn attention to the historical ties between modern statehood and modern capitalism on the one hand and practices of quantification on the other: processes such as the development of statistics as a tool of statecraft (Hacking) and the role of accounting in modern governance (Miller) have been well documented by social scientific research.

Yet in the face of the profound technological developments of the last ten to twenty years, we have to ask whether what we know about quantification and regulation still holds true or whether new technologies have changed the rules of the game. Is “governing by Big Data” different from “governing by numbers”? Today, digital technologies pervade modern societies at all levels: while individual lifestyles and life decisions are increasingly guided by online environments and digital devices, organizations adopt algorithmic procedures and Artificial Intelligence to optimize their workflows. This holds true for companies as well as for state actors, including political parties, public administration and courts. Modern computer technologies seem to render regulation more encompassing, more tailored and more effective – simply more powerful. Yet such assumptions need to be tested against empirical evidence. Technological change is no force of nature; it is shaped by the very social contexts that it tries to order. Only empirical studies can reveal how automated information and decision-making systems work in a specific institutional context; how they are affected by organizational structures, power relations, and professional norms and identities, all while having an impact on these very factors.

In order to evaluate the novelty of these modes of regulation, we analyze concrete cases in which quantification practices are deployed in regulation – by both state and non-state actors – with a special focus on Big Data and Artificial Intelligence. With regard to the state, we study cases from the executive, legislative and judicial branches. The project’s approach is comparative: it scrutinizes similar technologies in different fields of application, the use of (semi-)automated information and decision-making systems in different policy fields, and similar use cases in different countries. Concrete examples of use cases include Big Data and AI in policing (e.g. predictive policing), in social policy (e.g. child abuse prediction), in adjudication (e.g. predictive sentencing), in policy design (e.g. agile policy-making, sentiment analysis, computer simulations) and in campaigning (e.g. political micro-targeting). In addition, we look at how data-based regulation is used in various societal realms such as the management of labor, credit markets and digital self-tracking.

The research group’s analysis of these cases covers three dimensions:

  • the process of generating information by computer technologies,
  • the use of computer technologies in defining the goals, addressees and means of regulation, and
  • the use of computer technologies for behavior modification.

Computer technologies generate information that is deployed as a resource for regulation. To what extent do social media, smartphones, sensors and other means of observation produce data about populations and social relations that are new in quality and/or quantity? Where do we witness new ways of reading and modeling data, such as predictive analytics and machine learning, and where do we not? Which forms of knowledge attain the status of evidence for policy-making, and what conflicts arise over what counts as legitimate evidence? Do Big Data analytics, “demos scraping”, sentiment analysis and computer simulations contribute to new political epistemologies?
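To make concrete what such computational information generation can look like, the following minimal Python sketch scores the sentiment of a handful of invented citizen comments against a hand-made word list. The lexicon, the comments and the score_comment helper are illustrative assumptions, not tools or data used by the research group; real sentiment analysis relies on far richer models and corpora.

    # Minimal, illustrative sketch of lexicon-based sentiment analysis.
    # Lexicon and comments are invented for illustration only.
    import re

    POSITIVE = {"support", "good", "fair", "transparent", "helpful"}
    NEGATIVE = {"oppose", "bad", "unfair", "opaque", "harmful"}

    def score_comment(text: str) -> int:
        """Crude score: +1 per positive word, -1 per negative word."""
        words = re.findall(r"[a-z]+", text.lower())
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    comments = [
        "I support the new policy, it seems fair and transparent",
        "This regulation is unfair and harmful to small businesses",
        "The process was opaque, and I oppose it",
    ]

    scores = [score_comment(c) for c in comments]
    print(scores)                     # [3, -2, -2]
    print(sum(scores) / len(scores))  # an aggregate "public mood" of the kind fed into policy-making

Even this toy example shows how strongly the resulting “evidence” depends on prior classification choices – here, which words count as positive or negative – which is precisely the kind of epistemological question the group studies.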

In addition, we study the role of computer-based information processing in defining the goals, the addressees and the means of regulation. How are processes of valuation and evaluation transformed by digital technologies? What are the criteria by which automated decision-making systems operate, and who defines them? In principle, information and communication technologies enable new forms of participatory norm-setting; to what extent are these realized? Is public deliberation about norms increasingly being replaced by automatic standard-setting? And how does the nature of regulation change when both the goals pursued and the means used can be automatically adjusted to changing contexts?

Finally, the group investigates the role of Big Data and AI as tools for influencing behavior in regulatory processes. What new forms of interaction, control and self-control do technologies like wearable devices or cyber-physical systems enable? What intentional and unintentional effects do different technological architectures have on human behavior? What consequences do these new regulatory instruments have for individuals? Do automated systems incorporate strategies of reflexive governance through responsive technologies?

Alongside this analytic description, the project provides a normative assessment of the current use of automated information and decision-making systems and develops possible alternatives. To do so, the group draws on democratic theory and on recent developments in algorithmic accountability and explainable AI. The aim is to shed light on the normative dimensions and criteria that need to be taken into account when applying Big Data or AI in regulation, among them legitimacy, fairness, accountability, accuracy, effectiveness and IT security. On this basis, the project will develop alternative visions and versions of Big Data-based and AI-based regulation.
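As a concrete illustration of what one such criterion can look like in practice, the following minimal Python sketch computes a demographic parity gap on invented decision data. The groups, the decisions and the reading of the gap are hypothetical, and demographic parity is only one of several mutually contested fairness criteria discussed in the algorithmic accountability literature.

    # Illustrative sketch of one fairness check (demographic parity)
    # for an automated decision system; all data are invented.
    from collections import defaultdict

    # Hypothetical decisions: (group, approved) pairs from some automated system.
    decisions = [
        ("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved  # bool counts as 0/1

    rates = {g: approvals[g] / totals[g] for g in totals}
    print(rates)  # {'A': 0.75, 'B': 0.25}

    # Statistical parity difference between the two groups' approval rates.
    print(f"parity gap: {rates['A'] - rates['B']:.2f}")  # 0.50

Which such metric is appropriate, and what size of gap counts as unacceptable, cannot be read off the data themselves; it is a normative choice of exactly the kind the group’s assessment aims to make explicit.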

The research group is composed of scholars from political science, sociology and computer science and positions itself in the fields of governance and regulation research, public policy research, the sociologies of classification, quantification and valuation, as well as science and technology studies, critical computer science and critical algorithm studies. For more information about case studies and publications, please visit the team members’ individual websites. We are interested in collaborative projects with scholars working on related topics, particularly case studies of automated information and decision-making systems from various countries and regions that allow for comparative analyses.