Fighting disinformation gets harder, just when it matters most
In February 2024 America’s State Department revealed that it had uncovered a Russian operation designed to discredit Western-run health programmes in Africa. The operation included spreading rumours that dengue fever, a mosquito-borne illness, was created by an American NGO, and that Africans who received treatment were being used as test subjects by American military researchers. The campaign, based around a Russian-funded news site, was intended to sow division and harm America’s reputation. Discouraging Africans from seeking health care was collateral damage along the way.
The campaign was brought to light through the work of the Global Engagement Centre, an agency in the US State Department. Once a false story is detected, the agency works with local partners, including academics, journalists and civil-society groups, to spread the word about the story and its source—a technique known as “psychological inoculation” or “pre-bunking”. The idea is that if people are made aware that a particular false narrative is in circulation, they are more likely to view it sceptically if they encounter it in social-media posts, news articles or in person.
Pre-bunking is just one of many countermeasures that have been proposed and deployed against deceptive information. But how effective are they? In a study published last year, the International Panel on the Information Environment (IPIE), a non-profit group, drew up a list of 11 categories of proposed countermeasures, based on a meta-analysis of 588 peer-reviewed studies, and evaluated the evidence for their effectiveness. The measures include: blocking or labelling specific users or posts on digital platforms; providing media-literacy education (such as pre-bunking) to enable people to identify misinformation and disinformation; tightening verification requirements on digital platforms; supporting fact-checking organisations and other publishers of corrective information; and so on.
The IPIE analysis found that only four of the 11 countermeasures were widely endorsed in the research literature: content labelling (such as adding tags to accounts or items of content to flag that they are disputed); corrective information (ie, fact-checking and debunking); content moderation (downranking or removing content, and suspending or blocking accounts); and media literacy (educating people to identify deceptive content, for example through pre-bunking). Of these various approaches, the evidence was strongest for content labelling and corrective information.
Such countermeasures are, of course, already being implemented in different ways around the world. On social platforms users can report posts as containing “false information” (on Facebook and Instagram) or “misinformation” (on TikTok), so that warning labels can be applied. X has no such category, but allows “Community notes” to be added to problematic posts to provide corrections or context.
Lies, damned lies and social media
In many countries academics, civil-society groups, governments and intelligence agencies flag offending posts to tech platforms, which also have their own in-house efforts. Meta, for example, co-operates with about 100 independent fact-checking outfits in more than 60 languages, all of which are members of the International Fact-Checking Network, established by the Poynter Institute, an American non-profit group. Various organisations and governments work to improve media literacy; Finland is famed for its national training initiative, launched in 2014 in response to Russian disinformation. Media literacy can also be taught through gaming: Tilt Studio, from the Netherlands, has worked with the British government, the European Commission and NATO to create games that help players identify misleading content.
To be able to fight disinformation, academics, platforms and governments must understand it. But research on disinformation is limited in several key respects—studies tend to look only at campaigns in a single language, or on a single subject, for instance. Most glaring of all, there is still no consensus on the real-life impact of exposure to deceptive content. Some studies find little evidence linking disinformation to the outcomes of elections and referendums. But others find that Kremlin talking points are repeated by right-wing politicians in America and Europe. Opinion polls, meanwhile, find that enough European citizens agree with Russian disinformation narratives to suggest that Russia’s campaign to sow doubt about the truth might be working.
Regulators are stepping in to try to plug the gap—at least in Europe. The EU’s Digital Services Act (DSA), which came into force in February, requires platforms to make data available to researchers who are working on countering “systemic risk” to society (Britain’s equivalent, the Online Safety Act, has no such provision). Under the new EU rules, researchers can submit proposals to the platforms for review. But so far, few have been successful. Jakob Ohme, a researcher at the Weizenbaum Institute for the Networked Society, has been collecting information from colleagues on the outcomes of their requests. Of the 21 or so researchers he knows to have submitted proposals, only four have received data. According to a European Commission spokesperson, platforms have been asked to supply information to show that they are complying with the act. Both X and TikTok are currently under investigation over whether they have failed to supply data to researchers without undue delay. (Both companies say they comply, or are committed to complying, with the DSA. X withdrew from the EU’s voluntary code to fight disinformation last year.)
In America, however, efforts to fight disinformation have become caught up in the country’s dysfunctional politics. Researchers believe that fighting disinformation requires a co-ordinated effort by tech platforms, academics, government agencies, civil-society groups and media organisations. But in America any co-ordination of this kind has come to be seen, particularly by those on the right, as evidence of a conspiracy between all those groups to suppress particular voices and viewpoints. When false information about elections and covid-19, posted by Donald Trump and Marjorie Taylor Greene, was removed from some tech platforms, they and other Republican politicians complained of censorship. Large companies that refused to advertise on right-leaning platforms where disinformation abounds have been threatened with antitrust investigations.
Researchers studying disinformation have been subjected to lawsuits, attacks from political groups and even death threats. Funding has also diminished. Faced with these challenges, some researchers say they have stopped alerting platforms to suspicious accounts or posts. An ongoing lawsuit, Murthy v Missouri, has led American federal agencies to suspend their sharing of suspected misinformation with tech platforms—although the FBI has reportedly resumed sending briefings to social-media companies in the past few weeks.
All this has had a chilling effect on the field, just as concern is mounting about the potential for disinformation to influence elections around the world. “It is difficult to avoid the realisation that one side of politics—mainly in the US but also elsewhere—appears more threatened by research into misinformation than by the risks to democracy arising from misinformation itself,” wrote researchers recently in Current Opinion in Psychology.
The tide may be turning, however. In the past few weeks, during oral arguments in Murthy v Missouri, most of the justices on America’s Supreme Court expressed support for the efforts of governments, researchers and social-media platforms to work together to combat disinformation. America has also announced an international collaboration with intelligence agencies in Canada and Britain to curb foreign influence on social media by “going beyond ‘monitor-and-report’ approaches”, although the details of any new strategies have not been disclosed. And if the EU’s DSA regulations can open the way for tech companies to share data with researchers in Europe, researchers elsewhere may benefit too.