Euronews Next looks at how successfully, or not, the world protected voters from online disinformation during the historic 2024 election year.
Disinformation experts say that many countries were able to limit a wide range of fake or misleading information as voters across the globe went to the polls this year, with artificial intelligence (AI)-generated content having less of an impact than expected.
More than 1.6 billion people cast a ballot in over 70 elections throughout the year, making 2024 the world’s biggest election year in human history, according to the International Institute for Democracy and Electoral Assistance (IDEA).
Europe alone saw the EU elections, national elections in Finland, Portugal, France, the UK, Austria, Belgium, Bulgaria, Georgia, Hungary, Iceland, Ireland, Lithuania, Moldova, Slovakia, and Romania, and the first round of a presidential election in Croatia.
We take a look at how well governments around the world were able to contain disinformation and what impact, if any, it had on the results as 2025 dawns.
AI less of a factor than expected
On the eve of the biggest election year in history, experts in the disinformation field were concerned about the role that generative AI would play in the spread of election-related disinformation.
But when AI was used in this year’s campaigns, it was not in the way that experts predicted.
There were fewer deepfakes created with malicious intent, and instead more AI-generated videos that either humiliated or glorified candidates, experts told Euronews Next.
There were a few isolated examples of AI being used to create deepfakes to mislead, such as AI-generated audio of a national anthem in support of France’s Jordan Bardella, fake video advertisements with former UK prime minister Rishi Sunak, or AI-generated images of Taylor Swift and her fans supporting US President-elect Donald Trump, but they were quickly recognised and debunked.
The concern from experts ahead of this year’s elections was that more advanced, authentic, and harder-to-detect AI-generated content could be used by decision-makers to trick voters, but that did not happen, according to Giorgos Verdi, a policy fellow with the European Council on Foreign Relations (ECFR).
“AI was a tool that emerged just about a couple of years ago… one can argue that these tools are not ripe enough,” Verdi said.
“As these tools start to develop, we may see those lines blurring,” he added.
Other methods of spreading disinformation are being used more often than AI and are still effective, said Licinia Güttel with the Oxford Internet Institute.
Some countries also had legislation in place to limit the spread of AI-generated disinformation.
In the EU, some Big Tech and AI companies signed a Code of Practice on Disinformation (CoP) ahead of the elections, committing to put in place a rapid response system to identify and stop fake information from spreading, according to the EU’s Transparency Centre for the Disinformation Code.
The stronger the democracy, the stronger the protections
In general, countries without democracy or with newer, more fragile democracies had higher rates of misinformation throughout the election year, Güttel said.
Russians, who voted in their presidential election in March, saw more pro-Putin narratives in the run-up to the vote even though it was “guaranteed” that he would win, Güttel added.
Paolo Cesarini, programme director of the European Digital Media Observatory (EDMO), said there’s a whole “ecosystem” of support in more established democracies like the EU that was quite successful in fighting foreign interference in the form of disinformation ahead of elections.
Added legislative protections like the Digital Services Act (DSA) and the Digital Markets Act (DMA) forced some large online platforms, like TikTok, Instagram, and Facebook, to carry out systemic risk assessments to evaluate how disinformation spreads on their platforms and find ways to mitigate those risks, ECFR’s Verdi said.
Finland’s ‘resilience’ in the wake of hybrid campaign allegations
In January, Finland went to the polls to choose their next president.
While the lead-up to voting day was largely convivial, the Finns Party alleged a “hybrid influence” operation on social media aimed at undermining its candidates.
Accounts making these claims also tried to discredit Yle, the country’s public broadcaster, and shared anti-immigrant, anti-Muslim content, according to Euronews reporting at the time.
Faktabaari, a Finnish fact-checking NGO, also found that a “funnelling effect” on YouTube directed many Finnish voters towards far-right videos during the election campaign that “potentially influenc[ed] their perspectives”.
Paula Gori, EDMO’s secretary-general, said that the country is known for its resilience against disinformation because the government launched a robust anti-fake news initiative in schools as far back as 2014. The Nordic country also regularly tops the charts on media literacy.
So, Gori believes the systems Finland has in place, which make it a society “resilient” to disinformation, are a good example of what worked in this year’s election cycle.
Romania: social media disinformation gone wrong
The recent election in Romania saw a shocking first-round win by Călin Georgescu, a figure from the fringes of the country’s politics.
The elections were cancelled over intelligence reports of aggressive meddling by Moscow through an anti-Western propaganda campaign, largely on TikTok, to change the vote.
Moscow maintains it did not meddle in the election, the BBC reported.
EDMO’s Cesarini says that what’s going on in Romania is confirmation that putting legislative pressure on social media companies is the right play given the surge of disinformation coming from those platforms.
“What Georgescu did is exactly to replicate what the Russians… [have] been doing for years by creating false accounts, by buying false engagement,” Cesarini said.
But, according to Güttel from the Oxford Internet Institute, it’s too early to know whether what happened in Romania is something that could happen elsewhere.
To her, the country was already vulnerable to this type of result because of its low levels of media literacy and deep mistrust in traditional media.
In 2022, Romania was ranked second-to-last on media literacy in the EU and 29th out of 38 on trust in traditional media, according to the non-profit EU Disinfo Lab.
Looking to the future
The next year will be another big one for elections, with national elections expected in Belarus, Albania, the Czech Republic, Germany, Kosovo, Norway, and Poland, as well as a rerun of the Romanian presidential election.
While AI didn’t play as large a role as expected this year, Verdi believes it is something that regulators still need to address because the technology will continue to improve.
Domestic disinformation also continues well beyond an election cycle, Cesarini said, so the long-term strategies of the malicious actors behind it must not be forgotten.
“It is a lesson not to let the guard down now, but to move more forcefully and with more energy… to detect these campaigns,” he said.