
Event

21.06.2023

16:00 – 17:30 | Auditorium at the Jacob-und-Wilhelm-Grimm-Zentrum, Geschwister-Scholl-Straße 1-3, 10117 Berlin

Sandra Wachter - The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law

As part of our Distinguished Fellow Program, Prof. Sandra Wachter of the Oxford Internet Institute presented her work on discrimination and AI.

Artificial intelligence is increasingly used to make life-changing decisions, including whose job application succeeds and who is admitted to university. To do this, AI often creates groups that have not previously been used by humans. Many of these groups are not covered by non-discrimination law (e.g., ‘dog owners’ or ‘sad teens’), and some of them are even incomprehensible to humans (e.g., people classified by how fast they scroll through a page or by which browser they use).

This is important because decisions based on algorithmic groups can be harmful. If a loan applicant scrolls through the page quickly or uses only lowercase letters when filling out the form, their application is more likely to be rejected. If a job applicant uses a browser such as Internet Explorer or Safari instead of Chrome or Firefox, they are less likely to be successful. Non-discrimination law aims to protect against similar types of harm, for example by guaranteeing equal access to employment, goods, and services, but it has never protected “fast scrollers” or “Safari users”. Granting these algorithmic groups protection will be challenging because the European Court of Justice has historically been reluctant to extend the law to cover new groups. In her talk, Sandra Wachter argues that algorithmic groups should be protected by non-discrimination law and shows how this could be achieved.

Her talk was hosted by the Weizenbaum Institute and the European New School of Digital Studies at the Europa-Universität Viadrina.

Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford where she researches the legal and ethical implications of AI, Big Data, and robotics as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.

At the OII, Professor Sandra Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme, which investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies. She is a Distinguished Fellow at the Weizenbaum Institute.