Study: AI Can Influence Voting Behavior – EU Rules Insufficient
03/19/2026
AI systems increasingly intervene in the formation of public opinion, often unnoticed. A new study shows that existing laws such as the AI Act are poorly equipped to address this. The findings were published by researchers at the Weizenbaum Institute in the journal Communications of the ACM.
Large Language Models (LLMs) powering AI applications are increasingly serving as information "gatekeepers." Stefan Schmid, Principal Investigator at the Weizenbaum Institute, and Adrian Kuenzler of the University of Hong Kong (formerly a Fellow at the Weizenbaum Institute), investigated how these models transmit bias, the societal risks involved, and where regulatory frameworks must be strengthened.
AI as an Opinion Leader
Language models are the backbone of countless digital applications, ranging from chatbots and virtual assistants to complex decision-making systems in the workplace. The study demonstrates that these systems carry multiple biases: their outputs reflect patterns in the training data that favor specific worldviews and values. Furthermore, AI systems are often configured to reinforce a user's existing biases or to filter out certain types of content.
"The potential for Large Language Models to subtly influence political opinions and voter behavior poses a serious threat to public discourse and our democracy," explains Stefan Schmid. He notes that this influence is often subliminal, making it difficult for users to recognize they are being swayed.
Legislative Gaps: The AI Act and DSA Under Scrutiny
The study offers a critical analysis of current European legislation, specifically the Digital Services Act (DSA) and the AI Act. The authors conclude that these laws address communication bias in AI only as a byproduct of broader safety and content moderation measures. The focus remains on preventing "obvious" harm, while the subtle distortion of public discourse and democratic processes through bias in LLMs is largely neglected. The market dominance of a handful of AI companies creates an additional risk, potentially narrowing the diversity of perspectives in the digital sphere.
The Call for a Comprehensive Regulatory Approach
To protect effectively against discrimination and polarization, Schmid and Kuenzler propose broadening the regulatory scope. They argue that combining content moderation, competition law, value chain regulation, and technical design governance is crucial for fostering diverse and transparent AI systems that mitigate bias and promote a balanced digital information ecosystem.
Study: Communication Bias in Large Language Models: A Regulatory Perspective