Working with ChatGPT: “Efficiency depends on us”

A new Weizenbaum research project called “ELIZA reloaded” explores the impact of generative language models on work processes. We spoke with researcher Ann Katzinski about expectations, dangers, and the importance of skill development.

The debate about job loss due to so-called “Artificial Intelligence” is not new, but we continue to revisit it, especially now with the emergence of ChatGPT and similar systems. Will generative AI take away our jobs?

First of all, the number of people in the workforce in Germany continues to rise, despite several decades of automation. At the same time, we have a significant labor shortage. So it appears paradoxical that there is so much talk about job loss due to AI. Previously, discussions mainly revolved around “replacing” repetitive, straightforward tasks. Now the focus is on new and often creative processes such as image generation, music, poetry, and writing. Indeed, certain aspects of tasks in fields like programming, marketing, and design could already be substituted. However, the efficiency of generative, language-based AI systems depends on us and the prompts we provide. AI systems therefore don’t make us unemployed; instead, they create new demands in the working world and contribute to a transformation of job profiles and career paths.

In your project, you are researching how work processes change through generative language models. What specifically will you be investigating?

In our project “ELIZA reloaded” we are studying how knowledge work is changing through the use of generative language-based AI systems, specifically ChatGPT. The focus here is on three professional fields: programming, science, and coaching.  

The goal is to understand the expectations and concerns of individuals in these professional groups and the specific benefits they gain from using ChatGPT. Additionally, we will examine the potentials and risks associated with the technology and how ChatGPT alters the respective work processes and professional roles. We will be conducting interviews with professional associations and analyzing policy documents and position papers. Finally, we’ll simulate various knowledge-intensive tasks in a controlled experimental setup at the workplace, testing them with and without generative AI systems.

How should we consider AI in the context of current labor market changes, specifically labor shortages?

In general, the current development of AI systems shows that they can help alleviate labor shortages to some extent. They have the potential to support us in our fields of work, automate certain tasks, and ease our workload, especially in administration. Companies that effectively use AI systems can overcome shortages in specific areas and increase productivity. However, employers and decision-makers must ensure that the introduction of AI systems is tailored to support employees. Transparent implementation processes and tangible relief are crucial for gaining acceptance among employees.

That sounds promising. But what about productivity? Does generative AI actually make us more productive? And what dangers lurk behind the use of AI in the workplace?

Generative AI systems can definitely make us more productive and expand our range of possibilities. They can assist us in tasks such as analyzing large datasets, facilitating work across language barriers through automated real-time translation, or structuring communication and processes through the use of voice assistants and chatbots, thus relieving us of some work. However, dangers arise when AI systems are used without critical reflection and with unrealistic expectations. To formulate meaningful prompts and interpret the outputs of AI systems correctly, trained judgment and ongoing critical reflection are needed in “human-machine” cooperation; otherwise there is a risk of hasty conclusions and of the systems’ hallucinations going unnoticed. If, for example, I as a lawyer ask ChatGPT about legal precedents, the system may invent them by assembling information from various cases; or it may draw erroneous conclusions when summarizing scientific medical articles, which could then be adopted uncritically. These systems cannot conduct thorough research in the relevant databases.

Then there is also the risk of further monopolizing the tech industry and the question of codifying hegemonic knowledge through AI systems: Which dominant forms of knowledge and perhaps worldviews are embedded in the algorithms? Whose value system is programmed into the AI systems? Issues like discrimination, as well as concerns about work intensification and surveillance, play a central role here. 

Are there specific industries or wage segments more affected than others?

What's new in the current discussion is the extent to which intellectual work can benefit from collaboration with AI systems. Here, we observe an increasing use that has prompted us to question the changes in knowledge work processes. In our project, we are examining precisely that. Take coaching as an example; we see that the industry is generally open to AI systems like ChatGPT, as they anticipate becoming more efficient in individual coaching and training through AI.

What should employees, works councils, or unions look out for when generative AI is introduced into their workplaces?

Unions and works councils should get involved in the implementation processes right from the start! Decisions about the use of generative AI systems shape both the level of employment numbers and working conditions. It should be negotiated that these systems are genuinely used as tools to make work easier. If certain tasks are to be substituted, it's important to make agreements regarding job security and potential retraining of affected employees. Furthermore, it's essential to sharpen the focus on the importance of human skills in collaboration with AI systems and compensate accordingly. Framework agreements in which the general objectives of using AI systems are described can be a helpful tool. They should clearly state what negative effects, such as discrimination, surveillance, and work intensification, should be avoided.

What policy measures could address the dangers of AI in the workplace?

Given the rapid development of AI systems, continuous monitoring and regulation by political authorities are of great importance. The AI Observatory of the German Federal Ministry of Labor and Social Affairs (BMAS) makes it possible to monitor developments on the labor market and to respond to possible risks and challenges at an early stage, intervening if necessary. Regulation of AI systems through the EU’s AI Act, standardization committees, and workers' participation are also crucial. Workers' participation has been strengthened by the recently passed Works Council Modernization Act, allowing external experts to be consulted for advice on technical matters.

Thank you very much for the interview!


Ann Katzinski is a research associate in the research group “Working with Artificial Intelligence”. In the project “ELIZA reloaded”, she conducts research together with Florian Butollo, Anne Krüger, Jennifer Haase, and Maximilian Heimstädt on the transformation of knowledge work through the use of generative language-based AI systems. The focus is on three areas: programming, science, and coaching.

She was interviewed by Leonie Dorn.

artificial&intelligent? is a series of interviews and articles on the latest applications of generative language models and image generators. Researchers at the Weizenbaum Institute discuss the societal impacts of these tools, add current studies and research findings to the debate, and contextualize widely discussed fears and expectations. In the spirit of Joseph Weizenbaum, the concept of “Artificial Intelligence” is also called into question, unraveling the supposed omnipotence and authority of these systems. The AI pioneer and critic, who developed one of the first chatbots, would have celebrated his 100th birthday this year.
