
Goda Klumbytė

Research Fellow

About

Goda Klumbytė investigates the possibilities and applications of community and participatory audits of AI systems. Together with David Hartmann, she will review existing methodologies for such audits and investigate their impact and usefulness for marginalised user communities.

Goda's research engages with human-computer interaction, participatory design, intersectional feminist approaches and critical algorithm studies. She is currently working on critical approaches to explainability in AI within the project “AI Forensics” (funded by the Volkswagen Foundation) at the University of Kassel. In her doctoral research, she investigated the epistemic premises of machine learning as a knowledge production tool and proposed innovative ways of working with intersectional feminist epistemologies towards more contextualised and accountable machine learning systems design. She co-edited the books More Posthuman Glossary (with R. Braidotti and E. Jones, Bloomsbury, 2022) and Posthuman Convergences (with E. Jones and R. Braidotti, forthcoming with Edinburgh University Press), has published in the journals Online Information Review, Digital Creativity and ASAP, and has presented at computing conferences such as ACM CHI, NordiCHI and FAccT.

Research Group: Data, Algorithmic Systems and Ethics (01.08.2025 - 30.09.2025)

Contact

Organisation
University of Kassel

Fields of Research

Participatory and community audits of AI systems

More Projects

Co-Auditing the Unknown: Developing AI Auditing Resource for NGOs/CSOs

Weizenbaum Institute, August-December 2025

Project convenors*: Goda Klumbytė (Weizenbaum Fellow) and David Hartmann

Project advisors: Milagros Miceli (Weizenbaum Institute), Hendrik Heuer (CAIS)

This short-term project pursues the participatory development of an auditing resource for AI and generative AI systems aimed at non-governmental organisations (NGOs) and civil society organisations (CSOs), particularly in cases where access is limited to public interfaces or input/output results (so-called "black-box audits"). The goal is to develop an online resource that supports NGOs/CSOs in the EU working broadly across fields of digital rights, including privacy, transparency, and algorithmic governance, in understanding and auditing contemporary AI systems.

The project consists of three stages: interviews with selected CSO/NGO representatives from Germany and the EU to better understand organisations' needs with regard to understanding and auditing AI systems; a participatory co-design workshop with NGO/CSO representatives to prototype the auditing resource; and the development and launch of the resource in the final stage. The project thus contributes (1) empirical insights into the needs and practices of EU NGOs/CSOs regarding AI evaluation and audits, and (2) a participatorily co-designed prototype resource that translates black-box auditing methods into actionable practices for civil society.

*Both convenors contribute collaboratively to the project.