Rotimi Akapo and Adeyemi Owoade of Advocaat Law Practice explore the dismantling of Meta’s third-party fact-checking program and its impact on users of its platforms across the world.
The evolving intersection of technology with free expression has never been more pronounced than it is today. In a landmark announcement, Meta, the parent company of Facebook, Instagram, and Threads, unveiled sweeping changes to its content moderation policies. These reforms may represent a significant shift in how global platforms manage free expression, and their implications extend across legal, regulatory, political and social landscapes.
This article explores these changes: the dismantling of Meta’s third-party fact-checking program, the merits and demerits of the decision, and the impact it may have on users across the world who encounter posts originating from the affected region.
BACKGROUND
In his 2019 Georgetown University address, Meta CEO Mark Zuckerberg articulated a vision for free expression as the cornerstone of societal progress. He warned against the dangers of prioritizing political outcomes over individual voices, arguing that empowering individuals through free speech often disrupts entrenched power structures. This principle appears to underpin Meta’s latest policy revisions, aimed at recalibrating the balance between open dialogue and responsible content governance.
Meta’s platforms, home to billions of users, have faced criticism for their complex content moderation systems, which often suppress legitimate political discourse and stifle harmless speech. The company’s acknowledgement of these shortcomings (notably, that up to 20% of enforcement actions may be erroneous) signals a willingness to embrace transparency and recalibrate. A centrepiece of Meta’s announcement is the decision to end its third-party fact-checking program in the United States. Introduced in 2016 to combat viral misinformation, the program devolved into a source of contention due to perceived biases among fact-checkers. This criticism highlights a universal challenge in content moderation: the subjectivity inherent in determining “truth.”
Meta has now decided to pivot to a Community Notes system, inspired by X (formerly Twitter), which is believed to democratise the fact-checking process. This system enables users to flag and contextualise misleading posts collaboratively, requiring consensus from contributors with diverse perspectives before a note is attached to content. This approach is believed to mitigate the risk of institutional bias and to shift content governance closer to the community it serves. The changes are limited to the U.S., avoiding the stricter regulatory environment of the European Union, where the Digital Services Act requires platforms like Facebook to address illegal content or face fines of up to 6% of global revenue. The European Commission, which began investigating X’s Community Notes system in late 2023, has stated it will closely monitor Meta’s compliance.
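To make the consensus requirement concrete, the sketch below illustrates the core idea in Python. It is not Meta’s or X’s actual algorithm (X’s open-source implementation ranks notes with a matrix-factorization model that rewards agreement across the rater viewpoint spectrum); the viewpoint clusters, thresholds, and function names here are hypothetical, chosen only to show how a note might be attached only when raters from several distinct viewpoint groups independently find it helpful.

```python
# Illustrative sketch only: a simplified "diverse consensus" check in the
# spirit of Community Notes-style systems. Cluster labels, thresholds, and
# the function name are hypothetical, not Meta's or X's real implementation.

from collections import defaultdict

def note_reaches_consensus(ratings, min_helpful_ratio=0.7, min_clusters=2):
    """Return True only if raters from several distinct viewpoint
    clusters independently rate the note helpful.

    ratings: list of (rater_cluster, is_helpful) tuples, where
             rater_cluster is an opaque label for a rater's inferred
             viewpoint group and is_helpful is a boolean.
    """
    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)

    # Count the clusters in which the note clears the helpfulness bar.
    supportive_clusters = sum(
        1
        for votes in by_cluster.values()
        if sum(votes) / len(votes) >= min_helpful_ratio
    )

    # Requiring agreement across clusters, rather than a raw majority,
    # is what distinguishes this model from simple upvoting.
    return supportive_clusters >= min_clusters

# Example: the note is helpful to raters in both clusters, so it is shown.
ratings = [
    ("cluster_a", True), ("cluster_a", True), ("cluster_a", True),
    ("cluster_b", True), ("cluster_b", True), ("cluster_b", True),
    ("cluster_b", False),
]
print(note_reaches_consensus(ratings))  # True
```

The design choice worth noting is that a note supported overwhelmingly by one viewpoint group but rejected by others would fail this check, which is precisely the property proponents argue guards against the institutional bias attributed to centralized fact-checkers.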
PROS AND CONS OF REMOVING THIRD-PARTY FACT-CHECKING
The decision to eliminate third-party fact-checkers holds a mix of opportunities and potential problems for the American public. On the one hand, the absence of professional fact-checkers opens the platforms to freer expression, allowing users to communicate with less inhibition. Additionally, the Community Notes system democratizes content governance by transferring decision-making power to a diverse user base, promoting transparency and inclusivity. Removing a centralized arbiter of truth also blunts claims of institutional bias and could encourage trust among users.
However, this policy shift is not without significant drawbacks. The absence of professional fact-checking heightens the risk of misinformation spreading unchecked, creating an environment where false narratives can thrive. Past incidents, such as those in Myanmar and Sri Lanka, underscore how unchecked disinformation on Meta’s platforms has contributed to violence, civil unrest, and loss of life. Similar risks are present in the United States, where disinformation has fuelled events like the January 6, 2021, attack on the U.S. Capitol. Such unchecked content could exacerbate societal divisions and erode trust in reliable information sources.
Globally, the removal of third-party fact-checking offers the potential for scalable models of content governance. A community-driven system like Community Notes is adaptable to various cultural contexts, broadening engagement in shaping the platform’s informational ecosystem. Yet challenges persist. The risk of cultural bias in these systems could marginalize minority viewpoints, as dominant cultural or regional perspectives might overshadow others. Additionally, in regions with limited digital literacy or resources, participation in community-driven governance could be inequitable, further entrenching disparities.
CHALLENGES OF ADOPTING COMMUNITY NOTES IN OTHER PARTS OF THE WORLD
Although this policy change is limited to the United States, the implementation of Community Notes in emerging markets would present unique challenges. In areas with low levels of digital literacy, users may struggle to engage meaningfully with content moderation processes. Ensuring cultural sensitivity is another significant hurdle, as notes must reflect diverse perspectives without reinforcing majority biases. Furthermore, infrastructure limitations, such as unreliable internet access and a lack of technological tools, may impede widespread adoption. Regulatory barriers also play a role, as some governments might resist decentralized fact-checking systems in favour of maintaining direct control over information dissemination.
In Nigeria, for example, a Community Notes system would be difficult to operate. Regulators such as the National Information Technology Development Agency (NITDA) and the Nigerian Communications Commission (NCC) have long grappled with balancing free expression and content regulation amidst unique challenges such as the proliferation of misinformation during elections and limited enforcement mechanisms. These issues are compounded by the lack of comprehensive digital literacy among the population, which hampers efforts to curb the spread of false narratives. Recall that in June 2021, Nigeria suspended Twitter’s operations after the platform deleted a tweet by President Muhammadu Buhari, citing violations of its policies; the Nigerian government justified the ban by alleging that Twitter was being used for activities capable of undermining Nigeria’s corporate existence. Importantly, NITDA’s Code of Practice for Interactive Computer Service Platforms and Internet Intermediaries (CoP for ICSP/II), introduced in September 2022, places explicit responsibilities on platforms to reduce online misinformation and disinformation. This code potentially conflicts with Meta’s reliance on community-driven fact-checking, which could be perceived as shifting responsibility away from the platform. In the event of civil unrest arising from unchecked user-generated content, the defence that moderation had been delegated to the community may not be viable under NITDA’s guidelines, highlighting a critical compliance challenge for Meta and similar platforms.
IMPACT ON OTHER STAKEHOLDERS IN THE DIGITAL ECONOMY
Beyond platform users, Meta’s policy shift affects a broader array of stakeholders. With respect to political and democratic literacy, the absence of traditional fact-checking mechanisms could weaken efforts to combat propaganda and misinformation, potentially undermining democratic principles worldwide. This could lead to more fragmented political discourse and heightened societal polarization.
Governments and regulators are also impacted, as the risks associated with misinformation may prompt stricter oversight and the evolution of regulatory frameworks to address decentralized content moderation. Additionally, media and journalistic organizations face mounting pressures. Traditional outlets may struggle to maintain their authority and credibility in a landscape increasingly dominated by user-generated content, while fact-checking organizations may see their influence, revenue and viability diminished.
About the Authors: ROTIMI AKAPO (rotimi.akapo@advocaat-law.com) and ADEYEMI OWOADE (adeyemi.owoade@advocaat-law.com) work with Advocaat Law Practice.