A group of around 30 people standing on a street in front of the green door, the entrance of Deutsches Haus at NYU.

Critical stances towards AI: For a critical and self-determined approach to digital technology

Researchers from both sides of the Atlantic gathered in New York City for a symposium on the current state of AI and digital technology and to reflect on the influence of Joseph Weizenbaum.

On September 28–29, 2023, scholars from Europe and North America gathered together in New York City to critically discuss the current state of AI and digital technology and share practical strategies for promoting human agency in the digital age.

Joseph Weizenbaum, the famous computer science pioneer and a leading advocate for a critical and self-determined approach to digital technology, would have celebrated his 100th birthday this year. As Ricarda Opitz (Administrative Director of the Weizenbaum Institute) and Christian Strippel (Weizenbaum Institute researcher) pointed out in their introduction to the symposium, Weizenbaum’s thought and criticisms are highly relevant to our time, not least with the rise of powerful new generative AI systems such as ChatGPT, DALL-E, Stable Diffusion, and Midjourney. Current discussions around AI are also strikingly similar to those that took place 50 years ago, when Weizenbaum was an active scholar. Even then, he argued for human agency, self-determination, and responsibility in the application of technology. His outlook also explains the interdisciplinary nature of the Institute, whose researchers made up the symposium’s scientific program committee, and why the speakers all approached AI from different angles. The five panels included political scientists, sociologists, computer scientists, economists, and data scientists from, among others, NYU, MIT, the University of Toronto, Yale University, and IBM Research, who discussed their work with alumni of Freie Universität Berlin and TU Berlin.

The goal was to use Weizenbaum’s concepts as an opportunity to critically examine AI narratives and research, while focusing on ethical considerations and strengthening individual and collective agency. Topics ranged from platform algorithms, the social and environmental harms of AI, and the working conditions of data workers to techno-feminist logics as a counter-strategy.

The event was a cooperation with Freie Universität Berlin, Technische Universität Berlin, and Deutsches Haus at NYU, and was supported by the DWIH. The hosts Juliane Camfield (Deutsches Haus at NYU), Ricarda Opitz (Administrative Director of the Weizenbaum Institute), Juliane Wilhelm (alumni manager at TU Berlin), and Jan Lüdert (Head of Programs at the German Center for Research and Innovation) gave the opening remarks and welcomed the participants. The symposium is part of the Weizenbaum Institute’s W\100 Jubilee Year in honor of Weizenbaum’s 100th birthday, highlighting his work through events, publications, and workshops throughout the year.

Audience photographed from behind. Around ten rows of people looking to the front at Christian Strippel, who is introducing the panel on algorithmic platforms.

Algorithmic Platforms and Information Quality

"The Internet is a huge garbage dump, admittedly with some pearls in it. But to find them, you have to ask the right questions." (Joseph Weizenbaum)

One of the central questions of the symposium was how machine learning algorithms shape what we see and read online, particularly news on social media platforms, as they play a key role in the spread of information in digital societies.

In their panel, Felicia Loecherbach (NYU), Jeff Allen (Integrity Institute), and Jakob Ohme (Weizenbaum Institute) addressed how to study the tailoring of (potentially problematic) information reaching individuals, the technical mechanisms controlling this process, how to improve flawed algorithms, and, finally, users’ agency and responsibility in a networked public sphere. The discussion highlighted the importance of distinguishing between verified and unverified information in our information-rich digital environment and tackled the question of how diversity can be achieved on platforms.

The audience photographed from the first row. A small room with around 40 people sitting on chairs.

Platforms, Ecosystems, and Our Digital Future

The panel with Volker Stocker (Weizenbaum Institute), Pinar Yildirim (Wharton School), and William Lehr (MIT CSAIL) offered a distinctly economic perspective on the subject. They delved into the intricate dynamics of hypergiant corporations, their platforms, and their ecosystems within the digital economy, discussing topics such as automation, adverse effects on the workforce and on distribution, data collection practices, the preservation of property rights, generative AI, and the use of AI systems by governments and companies.

The magnitude of AI's transformation of the world of work and labor markets remains somewhat uncertain, and the panel posed critical questions, including inquiries into the restructuring of organizations and institutions and strategies for maintaining humans “in the loop”.

The session also underscored the importance of well-thought-out AI regulation. The panelists stressed that regulatory challenges are similar on both sides of the Atlantic and cannot be solved by uncoordinated, isolated regulatory measures. Instead, coordinated efforts across jurisdictions are imperative for the effective regulation of AI.

This panel was covered by Deutschlandfunk.

Profile of Sarah Sharma standing in front of the podium, sheets of paper in her hand, smiling. The slide behind her reads: Towards a Techno-Feminist Refusal. On the possibility of feminist techno-logics.

Towards a Techno-Feminist Refusal

"I have not found a compulsive woman programmer, and there must be a reason for that. Is there a connection between compulsive programming and the drive to play God? I think so and through my observations I am encouraged, even more than before, to believe that such a connection exists." (Joseph Weizenbaum)

Sarah Sharma, Professor of Media Theory at the University of Toronto, gave the symposium’s keynote, which offered a fresh lens on the latest debates around artificial intelligence and feminist media theory. Using a series of striking examples, she deciphered the male logic of the tech world and argued instead for feminist techno-logics that challenge the dictates of racialized and capitalistic time and space. Sharma’s approach aims to understand power dynamics across all of society, not only digitalization, and proposes changing our forms of engagement by invoking temporalities, spatialities, rhythms, capacities, and mobilities that are incompatible with one’s utility within the family, the workplace, the market, and Big Tech.

Close up shot of the audience. Two rows of people are looking to the front, two people standing in the background.

Joseph Weizenbaum: Past and Present of Critical Thinking on AI

The panel by Christian Strippel and Alexandra Keiner focused on Joseph Weizenbaum’s biography, his professional life as a scientist and public intellectual, his work, and his intellectual contexts.

Strippel explored the complex biographical and historical events that shaped Weizenbaum’s life and thought, a subject he has been researching with Magnus Rust from the University of Basel, currently a Fellow at the Weizenbaum Institute. He began with Weizenbaum’s childhood and youth and his emigration with his family to the United States at the age of 13 due to the rise of the Nazis in Germany. After studying mathematics in Detroit, Weizenbaum went on to become a renowned computer scientist and social critic.

Alexandra Keiner focused on a specific aspect of Joseph Weizenbaum's work, namely his critique of “instrumental reason” in the context of computers and AI, following critical theorist Max Horkheimer. She outlined two central arguments for why the critique of instrumental reason applies to computer science. First, the premise of technical problem-solving, which is “based on the natural sciences’ belief in human superiority over nature”. And second, the importance of asking the “right questions” rather than proving technical feasibility, which captures the political and social dimensions of technology and AI research today.

Social and Environmental Perspectives on AI

"Certainly no computer can be made to confront genuine human problems in human terms." (Joseph Weizenbaum)

The panel with André Ullrich, Gergana Vladova, Dave Rejeski (Environmental Law Institute), and Caroline Woolard (Open Collective) discussed the social and environmental effects of AI, specifically the ecological limits of increasing digitalization and economic growth and the challenge of dealing with (human) biases that have made their way into AI.

While problems such as social exclusion and inclusion, (in)equality, and environmental harm are not new, the presentations showed that digitalization has added new dimensions to them. Technology plays a decisive role as a mediator at the interface of society and the environment. The cloud industry now has a greater carbon footprint than the airline industry, and a single data center can consume as much electricity as 500,000 homes. AI thrives in areas where humans can be used as data producers and as cognitive subunits of large distributed computing networks.

Dave Rejeski focused on the role of science and the responsibility of scientists: How do we increase trust in science? Referring to Joseph Weizenbaum, he emphasized scientists’ responsibility for their impact on society as a normative anchor and stressed the importance of taking a critical and responsible role in the design and development of AI.

Three women sitting on chairs next to each other: Milagros Miceli in a black-and-white shirt on the left, Alexandra Keiner in the middle with a microphone in her hand and a laptop on her lap, and Adriana Alvarado in a yellow shirt on the right.

The Labor that Fuels AI

The final panel of the symposium ended with a strong plea for more responsibility in highlighting and making visible the power structures and inequalities created by AI.

The fireside chat between Milagros Miceli (Weizenbaum Institute) and Adriana Alvarado (IBM), moderated by Alexandra Keiner, took a closer look at the labor-intensive nature of AI and the implications of labor conditions for AI systems. They stressed that it is essential to raise awareness of the role of data workers, who generate, produce, label, annotate, or verify data, work that is a prerequisite for the use of AI. This work is mostly outsourced to the so-called Global South, where it is performed under poor working conditions.

While a few initiatives have been launched, both in politics and in the representation of data workers, there is still much to be done on the part of researchers, who need to be aware of the limits of technology as well as of the types of data they use. The discussants considered how to bring together data experts and data workers, with their different perspectives, tasks, and goals.

It was an important reminder that the way AI is produced affects us all, and that the focus should be on what is behind the technology, not just on its potential.

For more on this topic, see the interview with Milagros Miceli.


Weizenbaum’s Legacy and the Future of Self-determination and Responsibility

The "Critical Stances Towards AI" symposium served as a fitting tribute to Joseph Weizenbaum's legacy, fostering transatlantic dialogue on AI's many challenges and responsibilities. Weizenbaum’s work and attitude towards artificial intelligence, especially his sober view and criticism of people's blind faith in technology, were just as much part of the program as his guiding principles of self-determination, sustainability, and modes of dealing with technology for the common good.

The symposium also highlighted the necessity of multidisciplinary research to comprehensively and meaningfully assess the impact of AI-based technologies on society and the economy. Such research helps uncover critical trade-offs and provides policymakers with the insights needed to make informed decisions.

The discussion not only underscored the urgency of addressing the ethical, social, and environmental dimensions of AI development, but also focused on the responsibility of science, society, and the users by highlighting a variety of options for action and design. The general consensus was that there is much to be done and many opportunities to be taken.  

A tour through Greenwich Village devoted to the life and history of Jewish immigrants in 1930s New York City, led by New York-based historian Christopher Medalis, PhD, closed the event.

Thank you to all who contributed to this event:


Jakob Ohme (WI), Felicia Loecherbach (NYU), Jeff Allen (Integrity Institute), Volker Stocker (WI), Pinar Yildirim (Wharton School), William Lehr (Massachusetts Institute of Technology), Sarah Sharma (University of Toronto), Christian Strippel (WI), Alexandra Keiner (WI), André Ullrich (WI), Gergana Vladova (WI), Dave Rejeski (Environmental Law Institute), Caroline Woolard (Open Collective), Milagros Miceli (WI), Julian Posada (Yale University), Adriana Alvarado Garcia (IBM Research)

Scientific Program Committee

Alexandra Keiner, Milagros Miceli, Claudia Oellers, Jakob Ohme, Volker Stocker, Christian Strippel, André Ullrich, Gergana Vladova. All are researchers or fellows at the Weizenbaum Institute.

Organising Committee

Bettina Klotz (TU Berlin), Juliane Wilhelm (TU Berlin), Anna Meißner (FU Berlin), Franca Brand (FU Berlin), Annegret Kunde (WI), Sara Saba (WI)

Further reading

Floyd, C. (2023). From Joseph Weizenbaum to ChatGPT: Critical Encounters with Dazzling AI Technology. Weizenbaum Journal of the Digital Society, 3(3).

Pörksen, B. (2023). The Image of Man in Artificial Intelligence: A Conversation with Joseph Weizenbaum. Weizenbaum Journal of the Digital Society, 3(3).

Sharma, S. (2020). A Manifesto for the Broken Machine. Camera Obscura, 35(2), 171–179.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman.

You can find another report on the symposium by TU Berlin here: