Supporting automated detection of gendered disinformation
Background
Gendered disinformation (GenDis), understood as false or misleading content targeting individuals or groups based on gender identity or sexual orientation, has become a growing component of transnational influence operations and domestic political polarisation. Recent research shows that gendered narratives are frequently deployed in large-scale information operations, including those supported by state-aligned actors such as Russia, where gendered content functions as a tool of polarisation and delegitimisation. Russian propagandistic messaging, not least in its war on Ukraine, has been marked by frequent attacks on LGBTQ+ communities, framing gender-related issues as a symbol of Western moral decay. These narratives travel across platforms and interact with broader misogynistic and anti-trans discourses already circulating within online communities. Two gaps shape the current state of the art. First, GenDis remains conceptually underdefined. Second, while LLMs demonstrate strong capabilities in classifying difficult linguistic phenomena, including hate speech, implicit toxicity, and deceptive content, no existing approaches are tailored to the conceptual and linguistic properties of GenDis.
Motivation
No established approaches address GenDis as a specific category in AI-aided disinformation detection, flagging, or fact-checking. This project therefore develops an integrated conceptual framework, conducts an empirical study that tests and extends existing typologies of GenDis news frames to social media frames, and then builds a lightweight prototype for automated detection. By focusing on GenDis, an often neglected yet prominent form of disinformation carrying individual- and societal-level risks, and by facilitating its automated detection, the project supports efforts to protect targeted women and LGBTQ+ communities and to defend democratic rights and freedoms against attack.
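To illustrate what such a lightweight detection prototype could look like, the sketch below assembles a zero-shot classification prompt for an LLM. Everything in it is an assumption for illustration: the frame labels, the `build_prompt` helper, and the prompt wording are hypothetical placeholders, not the project's actual typology or implementation.

```python
# Hypothetical sketch of a zero-shot GenDis classification prompt.
# The frame labels below are illustrative placeholders, NOT the
# project's typology, which the empirical study is meant to develop.

GENDIS_FRAMES = [
    "none",                # no gendered disinformation detected
    "moral_decay",         # gender issues framed as Western decline
    "delegitimisation",    # gendered attacks on a target's credibility
    "threat_to_children",  # LGBTQ+ rights framed as endangering children
]

def build_prompt(post_text: str) -> str:
    """Assemble a zero-shot prompt asking an LLM to assign one frame label."""
    label_list = ", ".join(GENDIS_FRAMES)
    return (
        "You are annotating social media posts for gendered disinformation.\n"
        f"Choose exactly one label from: {label_list}.\n"
        "Answer with the label only.\n\n"
        f"Post: {post_text}\n"
        "Label:"
    )

if __name__ == "__main__":
    print(build_prompt("Example post text goes here."))
```

In a real pipeline the prompt would be sent to a hosted or local model and the returned label validated against the allowed list; those steps are omitted here because they depend on the model API eventually chosen.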
Objectives
The project, led by Martha Stolze in collaboration with Dr. Kilian Bühling and Dr. Elizaveta Kuznetsova, aims to refine GenDis frames and develop an automated detection tool. It seeks to extend academic, public, and institutional understandings of gendered online harms and mis-/disinformation and, most importantly, to support improved detection practices for fact-checking organisations. Increased clarity regarding GenDis strengthens democratic resilience, consistent with broader research on digital manipulation. This short project can lay the foundation for a larger joint postdoctoral project in which the framework is subsequently expanded to more platforms and languages.