Are we being manipulated?
01/12/2026

Online manipulation campaigns are notoriously difficult to trace. A team of Weizenbaum researchers has developed a method to understand their dynamics and their origins.
You might have heard of them before or encountered them directly: political influence campaigns, such as those interfering in the 2016 US election or in the UK during the Brexit vote, have gained a lot of media attention. Fears about disinformation campaigns ran especially high in 2024, a year in which many national and international elections took place. But what many may not know is that such covert actions go beyond elections and are much more diverse than “just” spreading disinformation. Scientists from the Weizenbaum Institute’s Research Group Dynamics of Digital Mobilization call them Coordinated Social Media Manipulation (CSMM) campaigns: the orchestrated activity of multiple accounts to increase content visibility and deceive audiences.
The effects, although not easy to measure, should not be taken lightly. CSMM campaigns can fundamentally distort how public opinion is formed or how political decisions are made. They try to influence a targeted audience and persuade them to adopt specific beliefs or behaviors. This often extends to shaping ‘second-order beliefs’—perceptions about what others believe, such as the prevalence of opinions in a population.
The strategies? Versatile. Among them: boosting, which involves heavily reposting content; pollution, defined as flooding a community with so much content that the social network effectively shuts down; and bullying, understood as ‘organized harassment’. Another approach is to increase polarization in the target audience by amplifying content for opposing sides of a conflict. Some campaigns go for distraction, flooding social media with unrelated information to divert attention from pressing issues.
The actors behind such strategies can be political—either government-backed or independent—or simply profit-driven corporations or individuals.
“We know that authoritarian regimes frequently employ CSMM. Campaigns have been successfully linked to the Chinese government, Russia, or the Saudi Arabian regime. However, social media observatories today argue that states across different political regimes are also expected to make use of digital coordination to manipulate online discourse,” says Daniel Thiele, who co-leads the research group Dynamics of Digital Mobilization at the Weizenbaum Institute.
Non-political campaigns have been identified that manipulate cryptocurrency markets or lobby for a coal mine, showing that commercial and political motives sometimes overlap. Scientists have also uncovered coordinated link sharing in support of the far-right political party Alternative für Deutschland (AfD), but could not verify who was actually behind the campaign. That gets to the heart of the problem: while there is abundant research on detecting such dynamics, it is still very difficult to identify the principal actors behind them. This is what researchers call “the attribution problem”: linking an observed campaign to the covert actor who commissioned it.
To help tackle this problem, researchers from the Dynamics of Digital Mobilization research group conducted a comprehensive literature review of 62 journal articles, preprints, and conference papers on this topic from different disciplines. Based on this body of work, they developed a model for understanding the motives and strategies behind CSMM campaigns, and thereby getting closer to identifying who is behind them.
“Attributing such campaigns is difficult because they are usually outsourced,” explains Thiele. “CSMM services can be purchased through private companies employing human trolls as well as networks that sell bot and fake accounts.”
Plus, researchers—unlike platform operators or intelligence agencies—have limited access to relevant data. “Most of the studies identifying campaigns rely on datasets released by Twitter/X. Meta and Reddit provide far less access to such data. Of course, only detected campaigns can be analyzed, and, as many CSMM campaigns remain undetected, we still have gaps in our understanding of them,” explains Baoning Gong. She researches online networks of right-wing extremists and also contributed to the study.
So far, such dynamics have been identified through analyses of data released by authorities or platform operators, or obtained through leaks. Detection algorithms often employ network analysis to identify communities of accounts that repeatedly share similar content at the same time. But that alone does not prove manipulative intent. CSMM campaigns try to maximize their influence while making sure no one knows who they are, which is also why the full extent of coordinated manipulation out there cannot be assessed.
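To make that idea concrete, here is a minimal sketch of how such a network-analysis detector can work, assuming a simple co-sharing heuristic: accounts are linked if they repeatedly share the same URL within a short time window, and connected components of the resulting graph are treated as candidate coordinated communities. The sample posts, field layout, and threshold values are illustrative assumptions, not the pipeline of any particular study in the review.

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

# Illustrative post records: (account_id, shared_url, unix_timestamp).
# Real studies work with millions of posts from platform data releases.
posts = [
    ("acct_a", "http://example.org/story1", 1000),
    ("acct_b", "http://example.org/story1", 1004),
    ("acct_c", "http://example.org/story1", 1007),
    ("acct_a", "http://example.org/story2", 2000),
    ("acct_b", "http://example.org/story2", 2003),
    ("acct_d", "http://example.org/story9", 5000),  # unrelated account
]

WINDOW = 10        # seconds; "at the same time" (assumed threshold)
MIN_CO_SHARES = 2  # "repeatedly"; minimum near-simultaneous co-shares per pair

# Group shares by URL, then count near-simultaneous co-shares per account pair.
by_url = defaultdict(list)
for account, url, ts in posts:
    by_url[url].append((account, ts))

pair_counts = defaultdict(int)
for shares in by_url.values():
    for (a1, t1), (a2, t2) in combinations(shares, 2):
        if a1 != a2 and abs(t1 - t2) <= WINDOW:
            pair_counts[frozenset((a1, a2))] += 1

# Keep only pairs that co-shared repeatedly; connected components of the
# remaining graph are candidate coordinated communities.
g = nx.Graph()
for pair, count in pair_counts.items():
    if count >= MIN_CO_SHARES:
        g.add_edge(*pair, weight=count)

for community in nx.connected_components(g):
    print("candidate coordinated community:", sorted(community))
# -> candidate coordinated community: ['acct_a', 'acct_b']
```

A community flagged this way demonstrates coordination, not intent: genuine fan or activist networks can leave similar traces, which is exactly why detection alone does not solve the attribution problem.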
To address this issue, the research team identified three key observable characteristics of such campaigns—scale, elaborateness, and disguise—that could lead to more information about their origins. The scale is defined by the number of coordinated accounts, posts, and platforms used, as well as the campaign duration. The more elaborate the content that is shared, the higher the persuasiveness of the campaign. Less elaborate content includes spam, short message fragments, or the coordinated reposting of links. Highly elaborate campaigns have carefully crafted narratives and appeal to political identities. “Until recently, implementing something like this required a lot of resources, but new advances in large language models (LLMs) could potentially make this a lot easier—and cheaper,” explains Annett Heft, Professor of Far-Right Extremism Research with Focus on Media and Public Spheres, and another co-author of the study.
Campaigns also vary in the sophistication of their disguise techniques. Some campaigns rely on fake accounts with generic profile bios or pictures. More advanced techniques involve corrupting genuine accounts. At the content level, disguise involves increasing elaboration to enhance credibility or using deletion to obscure traces of activity. At the macro level, campaigns may blend with existing movements to camouflage their operations.
Based on these observable differences, the researchers from the Weizenbaum Institute constructed a typology of eight CSMM campaign types, illustrating how scale, elaborateness, and disguise reveal insights into the resources, stakes, and influence strategies of the actors behind manipulation attempts.
For instance, greater scale, more elaborate content, and more sophisticated disguise usually come with higher costs, which hints at an actor able to command more resources.
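One natural reading, and it is only an assumption here, is that the eight types correspond to low/high combinations of the three dimensions, since 2 × 2 × 2 = 8. The sketch below illustrates that combinatorial structure together with toy versions of the inference rules described above; the binary coarsening, the cell labels, and the rules are simplifications for illustration, not the paper's exact typology.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Observation:
    """Observable campaign characteristics, coarsened to low/high.

    In the research these are measured from data (accounts, posts,
    platforms, duration; content quality; concealment techniques);
    the binary coarsening is a simplifying assumption made here.
    """
    large_scale: bool  # many accounts/posts/platforms, long duration
    elaborate: bool    # crafted narratives vs. spam and reposted links
    disguised: bool    # fake or corrupted accounts, deletions, camouflage

def cell(obs: Observation) -> str:
    """Name the typology cell an observation falls into (labels are ours)."""
    return "-".join([
        "large" if obs.large_scale else "small",
        "elaborate" if obs.elaborate else "crude",
        "disguised" if obs.disguised else "overt",
    ])

def attribution_hints(obs: Observation) -> list[str]:
    """Toy inference rules mirroring the reasoning in the text."""
    hints = []
    if obs.large_scale and obs.elaborate:
        hints.append("costly to run -> resource-rich actor")
    if obs.disguised:
        hints.append("heavy disguise -> much at stake if exposed")
    else:
        hints.append("little disguise -> little to lose if detected")
    return hints

# Enumerate all eight cells of the grid.
for s, e, d in product([False, True], repeat=3):
    obs = Observation(large_scale=s, elaborate=e, disguised=d)
    print(f"{cell(obs):28s} {attribution_hints(obs)}")
```

In this simplified grid, the Russian-linked operation described below would land in the large-elaborate-disguised cell (a resource-rich actor with much at stake), while the Syrian flooding campaigns fall into a crude, overt cell.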
“We believe that the effort put into a campaign’s disguise reveals what is at stake for the actors, should they be detected. It means they could be facing sanctions or costs to their reputation, or are subject to democratic scrutiny,” says Miriam Milzner, another co-author of the paper. Her research focuses on coordinated disinformation campaigns and the manipulation of social media debates.
Several campaign types can be illustrated with real-world examples. High-stakes mass-persuasion campaigns, for instance, include operations linked to the Russian government that targeted foreign publics by impersonating the #BlackLivesMatter movement. These campaigns spanned multiple platforms, persisted over long periods, and used refined emotional and identity-based appeals while closely mimicking genuine activist communities.
At the other end of the spectrum are low-stakes flooding campaigns, such as those observed during the Syrian civil war, where networks of accounts repeatedly posted identical messages with little attempt at concealment, suggesting minimal concern about being detected.
Between these extremes are targeted operations, such as a high-stakes niche-amplifying campaign attributed to accounts linked to the Chinese embassy in the UK. Here, accounts impersonating UK users relied on repetitive language to boost the visibility of the ambassador, combining a narrow objective with a level of disguise likely driven by the sensitivity of manipulating a foreign audience.
Together, these examples show how the typology helps make sense of the diverse strategies behind coordinated manipulation campaigns and what they reveal about the actors who deploy them. The hope is that it will help future researchers narrow down the sources of online manipulation campaigns. To make further progress, however, more high-quality ground-truth data on manipulation campaigns is needed. The European Union’s Digital Services Act, which is now in effect, could play an important role in improving this evidence base by increasing pressure on platforms to disclose manipulation campaigns detected on their services.
Looking forward, research on CSMM stands at a critical juncture. While detection methods and theoretical insights have advanced considerably, emerging technologies such as large language models are already reshaping how manipulation campaigns are produced, scaled, and disguised. Addressing these challenges will require closer collaboration across disciplines, which is precisely what the Weizenbaum researchers aim to foster: interdisciplinary dialogue and a more comprehensive understanding of CSMM.
Learn more:
- Read the research paper: Attributing coordinated social media manipulation: A theoretical model and typology
- Connect with the research group Dynamics of Digital Mobilization