
THE STRATCOMM INSIGHTS

Fact-Checking Under Pressure

  • Admin
  • May 29

Welcome to The StratComm Insights! 


In this edition, we explore the challenges fact-checking faces today and what they mean for strategic communication. 


When Verification Fails 


If the past year has taught us anything, it is that truth is more fragile than we like to admit — and fact-checking, long considered a safeguard of truth in democratic societies, is finding itself questioned, politicised and undermined by asymmetries in global media power. 


In today’s hyper-polarised, AI-accelerated and platform-fragmented environment, where disinformation moves faster than corrections and legitimacy is just as contested as the facts themselves, fact-checking is not just struggling to keep up. It is struggling to stay relevant. What used to be a response to disinformation is now perceived by some as censorship and by others as obsolete. 


Declining Trust in Institutions 


Public trust in mainstream media and EU institutions continues to decline. Across Europe, more and more people are tuning out, especially younger generations and those already feeling left out of political conversations. 


At the same time, our digital spaces are louder, faster and more crowded than ever. And in that trust vacuum, disinformation does not just circulate; it thrives, especially when it comes wrapped in emotional, relatable content. 


Fact-checkers are not immune. In fact, their association with "official" institutions can often weaken their perceived neutrality. Instead of functioning as arbiters of truth, they are frequently seen by sceptical audiences as defenders of elite consensus — part of the problem rather than the solution. 


Especially when someone’s worldview feels under attack, even the most careful, well-sourced fact-check can be read as bias — or worse, as censorship.


The Limits of Traditional Fact-Checking 


Psychological Barriers and the Bias Debate 

The issue is not only structural; it is psychological. Audiences do not evaluate information objectively; they interpret it through confirmation bias and motivated reasoning. This means that a correction, no matter how solid and well evidenced, can backfire if it threatens someone’s worldview. Not because the correction is wrong, but because it feels wrong to the person reading it. And once that happens, trust disappears. 

At the same time, we are also watching a shift in how fact-checkers are perceived. In more and more cases, they are not seen as neutral referees — they are painted as political actors, especially by far-right voices who frame any challenge to their narrative as censorship. 

Meta’s decision to drop third-party fact-checking in the U.S. did not happen in a vacuum. CEO Mark Zuckerberg echoed the language of political culture wars, saying fact-checkers “destroyed more trust than they created”. Framing the debate around “bias” and “censorship” has pushed fact-checking from the sidelines of democratic debate straight into the middle of a polarised battleground, with even well-respected organisations now spending more time defending their neutrality than correcting disinformation. 


Reactive Models in a Fast-Paced Environment 

The traditional fact-checking model — identify, verify, correct — is too slow for the speed and emotional appeal of disinformation. 


In breaking-news scenarios such as terrorist attacks, where disinformation spreads within minutes, the time lag between falsehood and correction makes it hard for even the most diligent teams to shape early perceptions. 


Structural Vulnerabilities 


If we take a closer look at what has been happening across major platforms, it is hard not to feel like we are watching a quiet retreat. When Elon Musk took over Twitter, now X, he promised a new era of “free speech”. What followed was a progressive dismantling of its trust and safety infrastructure: mass layoffs, reinstated extremist accounts and a wave of algorithmic changes that reshaped what users saw (and did not see). And while the platform insisted it was empowering its community, the results have been messy and easy to manipulate. 


Community Notes, X’s crowdsourced fact-checking tool, was presented as the solution. The idea is to let users add context to viral posts and to show those notes only when people from “different viewpoints” agree. In practice, only a small fraction (8.3%) of notes ever becomes visible, leaving a slow, often inconsistent mechanism that struggles to keep pace with disinformation. Even so, notes citing professional fact-checkers appear faster and gain more traction than others: evidence that credibility still matters, even in crowdsourced systems. 
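
To make that visibility rule concrete, here is a minimal Python sketch of the bridging idea: a note is shown only when raters from different viewpoint clusters independently find it helpful. The cluster labels, thresholds and function names below are illustrative assumptions, not X’s actual implementation, which reportedly infers viewpoints from rating histories rather than from explicit labels.

```python
from collections import defaultdict

def note_is_visible(ratings, min_helpful_per_cluster=2, min_clusters=2):
    """ratings: list of (viewpoint_cluster, rated_helpful) pairs for one note.

    Hypothetical toy model: clusters are given as labels here, whereas a real
    bridging system would have to infer them from past rating behaviour.
    """
    helpful_by_cluster = defaultdict(int)
    for cluster, rated_helpful in ratings:
        if rated_helpful:
            helpful_by_cluster[cluster] += 1
    # Visibility requires agreement across distinct viewpoint clusters,
    # not just a high volume of helpful ratings from one side.
    agreeing = [c for c, n in helpful_by_cluster.items()
                if n >= min_helpful_per_cluster]
    return len(agreeing) >= min_clusters

# Cross-viewpoint agreement unlocks visibility...
print(note_is_visible([("left", True), ("left", True),
                       ("right", True), ("right", True)]))  # True
# ...while one-sided enthusiasm, however loud, does not.
print(note_is_visible([("left", True)] * 10))  # False
```

The strictness of a threshold like this is precisely the trade-off the 8.3% figure reflects: demanding cross-viewpoint consensus filters out partisan notes, but it also keeps most notes hidden and slows the mechanism down.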


Meta was next. In January 2025, Zuckerberg announced the company would follow a similar path, ending its Third-Party Fact-Checking Programme in the U.S. and rolling out its own community-based system. Like X, it framed the move as a shift toward user empowerment and away from institutional bias. But it also meant fewer professional guardrails, less oversight, and more space for viral narratives to run unchecked. 


All of this is happening while platform APIs get locked down, content moderation teams are downsized, and election-monitoring tools are pulled, shutting researchers and verification teams out of the process and making it ever harder to monitor online narratives.

At the same time, algorithmic inconsistencies, such as variations in hashtag visibility or account amplification, continue to illustrate systemic gaps in platform governance. 


Underneath it all is a business model that has not changed. The same platforms that say they are prioritising safety are still rewarding engagement driven by polarising emotional content, and even misinformation, because that is what keeps people clicking. When the rules keep shifting and moderation is left to crowd consensus, what counts as “truth” becomes increasingly dependent on who shows up, how loud they are and what the algorithm decides to show. 


While European regulators have ramped up enforcement under the Digital Services Act, this is still the reality we are facing: platforms redefining how truth travels online. Because in an environment where speed, scale and visibility are everything, the old playbook, built for a slower, more contained information environment, is just not enough. 


Independent Fact-Checkers Under Pressure 


Independent organisations such as EU DisinfoLab, CORRECTIV, AFP Fact Check and Debunk.org continue to operate; however, they are under immense pressure. 

  • EU DisinfoLab continues to expose disinformation networks but faces growing pressure, including online harassment, cyber-threats and attempts to undermine its credibility. 

  • CORRECTIV uncovered a far-right plan for mass deportations, sparking massive pro-democracy protests. But the organisation has also faced threats, lawsuits and disinformation attacks for its work. 

  • AFP Fact Check, operating across regions and languages, continues to publish consistently but, like others, faces constraints in scale and staffing, and sees its work increasingly deprioritised by platforms. 


These organisations navigate huge challenges: staying impartial, remaining operationally agile and maintaining public legitimacy. In today’s environment, that balance is increasingly difficult to sustain. 


Strategic Implications for Communicators 

Fact-checking today is not just about “getting the facts right”. It is about credibility, speed and strategic positioning in a contested information space. The decline of institutional fact-checking is therefore a strategic communication vulnerability, with implications for security, election integrity and crisis response. 


Without credible, timely mechanisms to reinforce factual consensus: 

  • Narrative control becomes more volatile. 

  • Disinformation operations face less resistance. 

  • Polarisation accelerates, especially during elections, crises and hybrid threat scenarios. 


If fact-checking is to retain relevance, it must be reimagined: integrated into anticipatory responses and backed by tools that meet the pace and psychology of today’s information environment. 


This means: 

  • Investing in anticipation of narrative battles around expected events and proactive messaging. 

  • Enhancing platform accountability and algorithmic transparency. 

  • Supporting local, multilingual fact-checking ecosystems, especially in elections and conflict zones. 

  • Embedding verification capabilities within EU StratCom units and Preparedness Union strategies. 


Ignite Your Inspiration 

🔹 Cognitive Biases in Fact-Checking and Their Countermeasures: A Review, by Michael Soprano, Kevin Roitero, David La Barbera, Davide Ceolin, Damiano Spina, Gianluca Demartini and Stefano Mizzaro.


 
 
 
