Disinformation: “The most vulnerable communities are difficult to reach”
Propaganda researcher Elizaveta Kuznetsova explains why the flood of misinformation since the viral launch of ChatGPT should not be our biggest worry, and why efforts to fight disinformation need to be distributed more equally.
In times of digital election campaigns and geopolitical crises, disinformation is considered a powerful weapon. Concerns have grown especially since image and text generators became widely available. Elizaveta Kuznetsova is an expert on propaganda and disinformation at the Weizenbaum Institute. She spoke to us about the complex challenges of tackling fake news and about what chatbots could do to mitigate the problem.
How long have you been researching disinformation, what is your focus?
Elizaveta Kuznetsova (EK): In one way or another I've been researching disinformation for a very long time – ever since my PhD, in which I analyzed Russia Today, the main and most famous Russian propaganda channel. I looked at how they interpret international politics and frame news events in order to counter Western media. At the time, I mainly worked with the concept of propaganda and used frame analysis to study what messages Russia promotes abroad.
But that was more of a classical content analysis; I wasn't focused on any technological aspects yet. It was later that I started looking at social media algorithms and how they deal with Russian propaganda. The first study that covered the digital realm looked at how Facebook handled content sponsored by the Russian government during the US election in 2020 – whether they had learned the lessons of 2016.
More recently here at the Weizenbaum Institute, I've been looking at the role of generative AI in disinformation, among other things.
How do you research disinformation, what are your methods?
EK: We use mixed methods in our research, ranging from qualitative text analysis to algorithmic experiments and computer-assisted text analysis methods. Classical content analysis means reading texts and analyzing facts, interpretations, etc. It is similar to what fact-checkers do. They read the story and investigate whether it is correct or misleading, and then they provide sources and debunk false statements.
The method that we use most is algorithmic auditing, a research strategy that focuses on the distribution side of problematic content. Content analysis can still be – and often is – part of the algorithmic analysis, but the main purpose of this method is to determine how specific stories are distributed on specific platforms, and whether algorithms amplify or contain their spread.
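To give a rough sense of what an algorithmic audit involves, here is a minimal Python sketch: a fixed set of queries is submitted to a platform and the ranked results are counted by source type. Everything in it – the queries, the domain labels, and the collect_results() helper – is an assumption made for illustration, not the research group's actual pipeline.

```python
# A minimal, hypothetical sketch of an algorithmic audit. The query list,
# the source labels, and collect_results() are illustrative assumptions.

from collections import Counter

# Queries touching topics where disinformation circulates (illustrative only).
QUERIES = [
    "ukraine war biolabs",
    "covid vaccine microchip",
]

# Domains labeled in advance as propaganda vs. reputable (assumed labels).
PROPAGANDA_DOMAINS = {"rt.com", "sputniknews.com"}
REPUTABLE_DOMAINS = {"bbc.com", "reuters.com", "dw.com"}


def collect_results(query: str, language: str) -> list[str]:
    """Placeholder: return the ranked list of result domains a platform
    shows for this query in this language. Actual data collection
    (scraping or API access) would be implemented here."""
    raise NotImplementedError


def audit(language: str) -> Counter:
    """Count how often each labeled source type appears in the top results."""
    counts = Counter()
    for query in QUERIES:
        for domain in collect_results(query, language)[:10]:
            if domain in PROPAGANDA_DOMAINS:
                counts["propaganda"] += 1
            elif domain in REPUTABLE_DOMAINS:
                counts["reputable"] += 1
            else:
                counts["other"] += 1
    return counts
```

Comparing such counts across languages or time periods is one way to see whether a platform amplifies or contains the spread of specific stories.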
What are the main factors that contribute to disinformation in digital spaces?
EK: It's a combination of social, political and technological factors, and this is what makes the problem so complex and so difficult to tackle. Disinformation in itself is not a new phenomenon. We've had rogue and not so rogue actors telling unverified stories since the beginning of communication history, whether for political reasons or not. But what happens now with the technological advancements is that disinformation can spread much faster and much wider. It is a transnational phenomenon that transcends state and ideological boundaries. And the problem with this amplified spread is that once individuals have been exposed to misinformation and come to believe it, research shows that the effect is long-lasting. It's not impossible to change people's minds after the exposure, of course, but it's more difficult.
Then we have the social aspect that specific communities are more susceptible to misinformation than others. These are usually communities that are dissatisfied with the status quo, who harbor anti-system sentiments. They tend to be a bit more exposed and prone to believing in misinformation.
General education levels also matter, of course, even though we see quite a high number of educated people believing fake stories. Media literacy plays a very important role here, and not every educated person is media and algorithm literate.
What do we know about the actors that spread disinformation, or the topics where disinformation is often used?
EK: The widely accepted definition of disinformation, as opposed to misinformation, is that disinformation is false information spread with the intention to deceive. This makes a difference when we're talking about groups behind conspiracies like QAnon, for example, because they most likely do believe that what they're saying and spreading is true. It's not exactly clear whether they are rogue actors or simply living in a parallel information universe. The problem is that these communities, with their information silos, are on top of that often specifically targeted by rogue actors like Russian propaganda.
In general, we're talking about actors that aim to change public opinion and use these narratives as a form of influence – whether for political or business interests.
There are disinformation topics that are more infamous or popular than others, like the Covid pandemic, climate change, the war in Ukraine and the Russian invasion, but also LGBTQ-related issues.
Perhaps one of the most successful disinformation stories, fueled by the Soviet government, was the claim that the Americans never landed on the moon. It got a lot of traction back then – in the US and around the world – and many still believe it. It's also a conspiracy theory.
What are the effects of disinformation, which are you most worried about?
EK: It affects politics, that is, it affects how citizens vote, how people perceive the political environment and their government. It is a very important part of social life and has an effect on political attitudes.
In the case of the war in Ukraine, we see that the main purpose of Russian propaganda aimed at foreign audiences is to try to lower the support for Ukraine in the West, so that Western governments won’t send any more weapons. And we can see in one of our surveys that there is a certain effect of Russian propaganda consumption. Many communities have become less supportive of sending weapons to Ukraine. For various reasons, of course, but once Russian propaganda is feeding into those other existing narratives, it can potentially amplify this effect.
That's what makes studying effects so difficult in propaganda studies. It's not a one-off event; it extends over many years. In experimental studies you can see direct relationships, but that's not the same as the real world, where people interact in social environments, in specific circumstances, and are exposed to different topics.
What effect has generative AI had on the dynamics of disinformation?
EK: This is still an ongoing and very new field of study, so it is difficult to answer this just yet.
From the start, one concern has been that generative AI would produce a lot of misinformation. This technology is not meant to verify facts; it's just meant to predict the most likely word and spit out a text or an image that reads, sounds, or looks good – which it does very well. The fear has been that we would suddenly be confronted with a lot of misinformation produced by chatbots, and that people would believe and trust it.
The second concern has been that rogue actors could use generative AI to produce disinformation at scale, and we would be dealing with more and more high-quality disinformation campaigns.
The third concern is that authoritarian countries would try to control the narrative through these applications. And we see that in China, for example, there is already quite some control of the narrative and discourse through new technologies.
Have these concerns been justified? Has the amount of disinformation risen since the launch of ChatGPT?
EK: It's very difficult to study the entire internet, so we don't know the overall amount of disinformation out there. In general, we do see that more and more information is produced every day, so if we extrapolate, there should also be more disinformation. But I don't necessarily think that the sheer volume is a problem per se. The problem is who is exposed to disinformation, in which context, and whether or not they're prepared to deal with it.
I don't think we can realistically control how much information is out there, we can’t even control individual exposure to it. And even if we could, it would be quite an intervention. I'm not sure that’s a good idea.
You’ve also researched how generative AI can help combat disinformation. What have you found out?
EK: Yes. Apart from all these concerns, we wanted to see if we can actually use Large Language Models – the technology behind chatbots – to tackle the problem of disinformation. They could help in analyzing large amounts of text and detecting disinformation. But there are challenges with that. We still have problems with a lack of reliability and reproducibility of results, because there is a lot of randomization in the way chatbots generate their outputs.
Plus, a human does a much better job at understanding the context and the hidden meaning behind phrases. And even though chatbots are doing okay at identifying known disinformation, we cannot tell how well they will work on unknown or new stories – I suspect they won't, because they weren't designed to do that. So they could be used for an initial analysis of the text, to filter out certain content before it is sent to human beings for proper analysis.
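As a rough illustration of that workflow – a machine pre-filter followed by human analysis – the following Python sketch uses an LLM to triage texts before fact-checkers look at them. The prompt wording, the label set, and the model name are assumptions made for this example, not the setup used in the research.

```python
# A minimal sketch of LLM-assisted pre-screening before human review.
# Prompt, labels, and model name are illustrative assumptions; the point
# is only the workflow: machine pre-filter, then human analysis.

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

PROMPT = (
    "You are helping fact-checkers triage content. "
    "Label the following text as LIKELY_DISINFO, UNCLEAR, or LIKELY_OK. "
    "Answer with the label only.\n\nText: {text}"
)


def pre_screen(texts: list[str]) -> list[str]:
    """Return only the items that should go on to human fact-checkers."""
    for_review = []
    for text in texts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": PROMPT.format(text=text)}],
            temperature=0,  # reduces, but does not eliminate, output randomness
        )
        label = response.choices[0].message.content.strip()
        # Anything not clearly harmless is forwarded to a human analyst.
        if label != "LIKELY_OK":
            for_review.append(text)
    return for_review
```

Even with the temperature set to zero, such a filter inherits the reliability and reproducibility problems mentioned above, which is why it can only precede, not replace, human analysis.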
Chatbot providers can also help combat disinformation by tweaking guardrails – rules set up in the system that control how the machine responds to certain topics or questions in certain contexts. This can shape those information environments in some way.
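To make the idea of a guardrail concrete, here is a toy Python sketch of a rule layer that attaches a disclaimer when a sensitive topic is detected. The topic list and the wording are invented for illustration and do not describe how any real chatbot implements its rules.

```python
# A toy illustration of a guardrail: a rule layer in front of the model
# that attaches a disclaimer (or declines to answer) on sensitive topics.
# Topics and responses are invented for this sketch.

SENSITIVE_TOPICS = {
    "election": "I can't verify claims about elections. Please consult official sources.",
    "vaccine": "For health questions, please double-check with public health authorities.",
}


def apply_guardrail(user_message: str, model_answer: str) -> str:
    """Prepend a disclaimer when the user's message touches a sensitive topic."""
    lowered = user_message.lower()
    for topic, disclaimer in SENSITIVE_TOPICS.items():
        if topic in lowered:
            return f"{disclaimer}\n\n{model_answer}"
    return model_answer
```

Real systems use far more sophisticated classifiers than keyword matching, but the principle is the same: the response to certain topics is governed by rules layered on top of the model.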
What else is being done to fight misinformation, by tech companies as well as political actors?
EK: Platforms are introducing some ways of dealing with disinformation. Search engines, for example, curate information based on the source, and they definitely prioritize reputable mainstream sources of information. And we see that they're quite good at filtering out misinformation in English – but not in many other languages. In the Russian language, there are many more propaganda sources than sources debunking misinformation, so this main strategy of prioritizing specific sources doesn't work very well.
I also suspect that search engines are not even that interested in managing information environments in Russian or other languages. Their focus seems to be on English and a few other mainstream European languages. The short answer is that they are doing something, but it's not enough.
The same goes for content moderation. It works, but we believe that the platforms are not investing enough, especially not in all languages. How they deal with this problem is unequally distributed, which means that some communities – especially the most vulnerable ones – are not receiving enough of those efforts.
If we're talking about exposure to misinformation when people are using chatbots, the makers of ChatGPT are also doing something. They've identified what they think are sensitive political and social topics and have tweaked their guardrails, so that some kind of disclaimer appears, or the output is something like “I don't know” or “Please double-check, I cannot tell you.”
Then there is debunking of course, which is done by many NGOs and journalistic organizations, and I think this is very important work. But they’re mainly focused on very prominent topics, and they lack the capacity to work on other issues. Since misinformation is such a fast-evolving problem, we get narratives popping up very quickly and we don't have enough resources to tackle them in real time.
Are there other ways to tackle disinformation that don’t rely on chasing after these false narratives?
EK: The best way to tackle the problem would be to prevent this initial exposure to disinformation. But that is very difficult, since there is such an abundance of content out there, and we don't even know all the corners of the online world where disinformation takes place. We know some – but we cannot control all of them, and I'm not even sure we should. That's why there are also strategies of pre-bunking, trying to teach people how to not be susceptible to disinformation in the first place. This is more of an educational process on how to deal with this abundance of information.
Media literacy programs in general are really important, but they usually happen in more privileged societies. We don't have enough experts and the efforts are unequally distributed. It would be great to have one well-coordinated media literacy program on the EU level that actually reaches vulnerable communities. That would be much more effective than what is happening right now. But we’re also dealing with a fine line: It's important to teach people how to deal with different media sources and how to find reliable information, but without involving specific narratives, or telling people what to think. In reality, this is challenging.
At the end of the day, we haven't yet figured out what kind of information environment we want to have – to what extent we want to manage it, and to what extent we want to educate people to navigate it themselves. The current modes aren't very sustainable, so this is something we as a society need to figure out. We shouldn't blame it all just on the technology.
Thank you for the conversation!
Care to learn more?
Study: Makhortykh, M., Sydorova, M., Baghumyan, A., Vziatysheva, V., & Kuznetsova, E. (2024). Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-154
Elizaveta Kuznetsova leads the Weizenbaum research group “Platform Algorithms and Digital Propaganda.” She works at the intersection of Communication Studies and International Relations. Her primary focus is on digital propaganda, social media platforms, and international media. She holds a PhD in International Politics from City University of London.
She was interviewed by Leonie Dorn.
This interview is part of a special focus "Solidarity in the Networked Society." Scientists from the Weizenbaum Institute provide insights into their research on various aspects of digital democracy and digital participation through interviews, reports, and dossiers.