December 05, 2025
PARIS: An oversight board created by Facebook to review content-moderation decisions trumpeted improved transparency and respect for people’s rights on Thursday in a report on its first five years of work, while acknowledging “frustrations” with its arm’s-length role.
Facebook – since renamed Meta – announced the Oversight Board, often referred to as the group’s “supreme court”, at a 2018 low point of public trust in the tech giant.
The Instagram and WhatsApp owner’s image had been tarnished by episodes like the Cambridge Analytica data-harvesting scandal and by disinformation and misinformation around crucial public votes such as Brexit and the 2016 US presidential election.
The Oversight Board began work in 2020, staffed with prominent people including academics, media veterans and civil society figures.
It reviews selected cases where people have appealed against Meta’s moderation decisions, issuing binding rulings on whether the company was right to remove content or leave it in place.
It also issues non-binding recommendations on how lessons from those cases should inform updates to the rules governing billions of users on Meta’s platforms: Facebook, Instagram and Threads.
Over the past five years, the board has secured “more transparency, accountability, open exchange and respect for free expression and other human rights on Meta’s platforms”, it said in a report.
The board added that Meta’s oversight model – unusual among major social networks – could be “a framework for other platforms to follow”.
The board, which is funded by Meta, has legal guarantees that the company will implement its decisions on individual pieces of content.
But the company is free to disregard its broader recommendations on moderation policy.
“Over the last five years, we have had frustrations and moments when hoped-for impact did not materialise,” the board wrote.
Some outside observers of the tech giant are more critical.
“If you look at the way that content moderation has changed on Meta platforms since the establishment of the board, it’s rather got worse,” said Jan Penfrat of Brussels-based campaigning organisation European Digital Rights (EDRi).
Today on Facebook or Instagram, “there is less moderation happening, all under the guise of the protection of free speech,” he added.
Effective oversight of moderation for hundreds of millions of users “would have to be a lot bigger and a lot faster”, with “the power to actually make systemic changes to the way Meta’s platforms work”, Penfrat said.
The limits of the board’s influence were highlighted when chief executive Mark Zuckerberg axed Meta’s US fact-checking programme in January.
That scheme had employed third-party fact-checkers, AFP among them, to expose misinformation disseminated on the platform.
In April, the Oversight Board said the decision to replace it with a system based on user-generated fact checks had been made “hastily”, and recommended that Meta study the new setup’s effectiveness more closely.
Looking ahead, “the Board will be widening its focus to consider in greater detail the responsible deployment of AI tools and products,” the report said.
Zuckerberg has talked up plans for deeper integration of generative artificial intelligence into Meta’s products, calling it a potential palliative for Western societies’ loneliness epidemic.
But 2025 has also seen mounting concern over the technology, including a series of reports of people killing themselves after extended conversations with AI chatbots.
Many such “newly emerging harms… mirror harms the Board has addressed in the context of social media”, the Oversight Board said, adding that it would work towards “a way forward with a global, user rights-based perspective”.